The value of virtualization in IT-processes: Real, or virtual value?


Real, or virtual value?

Master Thesis Business Development

Master of Science, Business Administration

Faculty of Economics and Business

University of Groningen

Written by:

Henri Kruiper

November, 2008

Student: Henri Kruiper (henrikruiper@gmail.com)
Student number: 1660829

University of Groningen, Faculty of Economics and Business
First supervisor: ir. W. Lanting
Second reader: dr. K.R.E. Huizingh

Sogeti Nederland B.V.
First supervisor: R. Moerman
Second supervisor: J. Pruissen


PREFACE

In April 2008, I started my graduation research at Sogeti Nederland B.V. in order to write my Master's thesis and complete the Master of Business Development at the University of Groningen. In appreciation of the support I have received, I would like to express my gratitude to a number of people.

Regarding my direct colleagues within Sogeti, I would like to thank my supervisor Ron Moerman. His cooperation, knowledge and support have helped me tremendously in many different ways. Furthermore, I would like to thank my unit manager Jeroen Pruissen for the facilitative support I have received.

Concerning the University of Groningen, I would like to thank my supervisor ir. W. Lanting for his help and his many suggestions and corrections in writing this Master's thesis, and dr. K.R.E. Huizingh for being the second reader and assessor. I would also like to thank Rutger Tiggeler and Niels van der Weg for their help in preparing the research.

Last, but surely not least, I would like to thank all the organizations (customers and partners) that have participated in the empirical research.

Henri Kruiper


SUMMARY

Background

This research was set up to answer an internal question within Sogeti regarding the subject of server virtualization. Sogeti is an international ICT-service provider with locations worldwide. By offering ICT-professionalism, it focuses on making ICT an everyday commodity, and Sogeti is eager to contribute to the results of its customer organizations.

Server virtualization is a software technique that has existed for several years. In essence, server virtualization lets one server (a central computer) perform the jobs of multiple servers; in doing so, one server effectively behaves as multiple servers. As most servers normally use only 5 or 10 percent of their resources, this may theoretically bring several advantages. Frequently heard reasons to implement the server virtualization technology are better utilization of the available server resources within organizations, increased flexibility, energy savings, cost savings, etc.

Research

Server virtualization has been a hot topic for some years and receives increasing attention in many articles, websites and seminars. All of these sources mainly focus on what server virtualization can bring to organizations. Sogeti is one of the ICT-service providers that offer the implementation of server virtualization as a service to customers. The internal question Sogeti had was: what results have companies obtained by applying the server virtualization concept? Does the (product) concept of server virtualization really bring (all of) the advantages? In essence, the goal of this research was to determine the value of the (product) concept of server virtualization for organizations.

The central question that has guided this research project is: “Does the concept of server virtualization meet the promises (of ICT-suppliers) and expectations (of organizations), and which actions can ICT-suppliers and organizations take in order to maximize the value of server virtualization?”

Research questions

The central question is answered through theoretical and empirical research. The empirical research has been carried out with customers (organizations that have implemented server virtualization) and partners (server virtualization software vendors). Four research questions have been set up to answer the central question.

1. What are the relevant variables for researching the development of server virtualization?

2. Which changes in these variables have customers experienced since the implementation of server virtualization?


Results

With the available literature, a list of 89 variables was set up. This list was then classified into six so-called quality attributes, derived from a theoretical IT-infrastructure methodology. These quality attributes have guided the rest of the research.

Empirical research among customers has revealed a wide range of results. The primary objectives that organizations have when implementing server virtualization are: increasing flexibility, improving hardware utilization, reducing costs and creating a manageable IT-infrastructure. Secondary objectives are reducing energy consumption, creating or improving fail-over capabilities, preventing extensive investments, achieving standardization and staying innovative.

Risks that organizations perceived pre-implementation are: the novelty of the technology (i.e., not yet mature enough), an increased risk of hardware failures, non-functioning applications and loss of performance.

With server virtualization, organizations have found themselves to be more flexible in terms of the scalability of the IT-infrastructure. The proportions between virtualized and non-virtualized servers vary widely, with on average 60% of servers virtualized. Not every participant is aiming for a 100% virtual environment. Most of the organizations now perceive themselves to be better able to support business processes efficiently. Organizations with external datacenters also mentioned (large) space and thus cost savings; those with internal datacenters are less optimistic about these savings. In many instances, organizations find more or less the same level of complexity in the IT-architecture: while the logical part of the complexity has increased, the technical part has decreased.

Every participant mentioned being more flexible concerning the adaptability of the IT-infrastructure. The significantly decreased lead time to deploy new servers is one of the success factors here. However, although a virtual server is created in only a few hours, organizations still need to account for the administrative process around this deployment. For most participants, server virtualization inhibited the growth of IT staff; however, this staff does need more knowledge and skills than before.

Server virtualization has had little impact on the security of the IT-infrastructure. Despite some issues, none of the participants noticed significant changes. The only aspect that played a role here is patch management (applying patches/updates in the IT-infrastructure), which has been extended by server virtualization. Concerning backup capabilities, participants noticed little influence: some mention that making backups has become simpler, others find it more difficult.


Concerning the financial manageability of the IT-infrastructure, some points of interest have turned up. The role of ICT-budgets has changed: investments shift from relatively continuous (spread throughout the year) to a few peak expenditures. However, at least half of the participants did not find any change in the size of budgets. Many participants believe they have lower maintenance costs, personnel costs did not rise, and all organizations have or believe to have a lower Total Cost of Ownership.

The accountability of the IT-infrastructure has also been researched. Looking at Return On Investment (ROI), those that strive towards a ROI have achieved or will achieve it. Concerning the perceived risks, all participants (except one) find the risks to be lower now. The risk of hardware failure has changed significantly, as organizations can better anticipate it. The effect of server virtualization on the assignment of costs (chargebacks) to internal customers varies: some find it more difficult, while others find it more transparent. Many participants believe they have lowered hardware costs, but in many instances have no figures available. All participants find the software costs to have increased. The picture on electricity savings is somewhat vague: only one organization has figures, while others have only estimates or no idea at all.

Staying up to date is a key success factor: after the implementation, the project should never be regarded as completed. Organizations should also be aware that not only IT-components change, but also the administrative and management processes around them. A good project leader and high involvement of the organization also turned out to be critical.

All organizations mentioned having achieved their specific goals. One organization is still implementing, but has confidence in the result and expects to achieve its targets as well.

Future developments

======================== CENSORED =======================

Partners have supplied information about their future developments, in order to compare their vision with customer visions.

Due to the fact that great parts of these future developments are not (yet) officially published by partners themselves, this part of the thesis has been deleted. The information that is supplied for this part of the research is confidential and cannot be published via this thesis.

======================== CENSORED =======================

Conclusion


TABLE OF CONTENTS

Preface ... 4

Summary ... 5

Table of contents ... 8

Chapter 1. Sogeti and its environment ... 10

1.1 History ...10

1.2 Mission & Vision ...10

1.3 Activities ...11

1.4 Customers ...11

1.5 Strategic alliances/partners ...11

Chapter 2. Server Virtualization ... 12

2.1 The concept of server virtualization ...12

2.2 The technique of server virtualization ...13

2.3 ICT-problems and server virtualization as solution ...15

Chapter 3. Research design ... 18

3.1 Scope ...18

3.2 Research objective ...18

3.3 Central question & main questions ...19

3.4 Data sources ...20

3.5 Measuring methods ...21

3.6 Research process scheme ...22

3.7 Conceptual model ...23

3.8 Theoretical concepts ...24

3.9 Analysis methods ...28

3.10 Preconditions ...28

3.11 Relevance/contribution to literature ...29

Chapter 4. Research variables ... 30

4.1 Background ...30

4.2 Variables ...30

4.3 InFraMe & the utility principles ...31

4.4 Variables & Utility principles combined ...32


Chapter 5. Empirical Results – History ... 34

5.1 Objectives, IT-Problems & Expectations pre-implementation ...34

5.2 Perceived risks pre-implementation ...36

5.3 Macro-economical factors ...37

5.4 Flexible – Scalable ...37

5.5 Flexible – Adaptable ...41

5.6 Reliable – Secure ...45

5.7 Reliable – Available ...47

5.8 Cost efficient – Manageable ...50

5.9 Cost efficient - Accountable ...52

5.10 Other, not yet mentioned aspects ...55

5.11 Server virtualization targets achieved? ...57

Chapter 6. Empirical Results – Future ... 58

Chapter 7. The Development of Virtualization ... 67

7.1 Short history of server virtualization ...67

7.2 Current state of server virtualization ...68

7.3 Cooperation between hypervisor and other parties ...71

Conclusions ... 72

The value of server virtualization ...72

SWOT-analysis ...73

Limitations ... 75

References/Literature ... 76

Appendix 1: Research variables ... 79

Appendix 2: Questionnaire ... 82

Appendix 3: Participating Organizations ... 84

Customers ...84


CHAPTER 1. SOGETI AND ITS ENVIRONMENT

The research performed for this thesis was initiated and facilitated by Sogeti. This chapter provides a brief look at Sogeti and its environment.

1.1 History

The company history of Sogeti goes back to October 1967 (Sogeti Wiki, 2008). In France, Serge Kampf laid the foundation of a 'Business Management and Information Processing Company'. In French, the full name of the company is 'Société de Gestion des Entreprises et de Traitement de l'Information', abbreviated to Sogeti. In 1973, Sogeti took a majority interest in the European company CAP. In 1974, Sogeti took over the American company Gemini Computer Systems. One year later, the three companies merged into CAP Gemini Sogeti. In 1996, the name was simplified to Capgemini. The brand name 'Sogeti' returned in 2002, when Capgemini decided to bring Sogeti back to market as a sister company of Capgemini. The holding company of all Sogeti companies is Capgemini S.A. (annual report, 2006).

In the Netherlands, Sogeti Nederland B.V. was founded on the first of August, 2002. A merger between the companies IQUIP Informatica B.V. (IQUIP), Gimbrère en Dohmen Software B.V. (G&D) and Twinac Software B.V. (Twinsoft) resulted in the foundation of the Dutch branch of Sogeti. Two years later, Sogeti merged with the Dutch company Transiciel.

In the Netherlands, Sogeti has office locations in Vianen (head office), Rotterdam, ‘s-Hertogenbosch, Diemen, Groningen and Amersfoort.

With locations in Europe (e.g. in France, the Netherlands, Denmark and Germany) and the United States of America, Sogeti operates worldwide. With the foundation of Sogeti India, Sogeti has moved toward the upcoming trend of offshore activities.

1.2 Mission & Vision

Sogeti has translated its vision and vision statement into a set of eight mission statements. These statements clarify who Sogeti is and what it stands for. These statements are:

1. Sogeti helps organizations with the realization, implementation, testing and management of valuable ICT-solutions.

2. Sogeti inspires its customers about the possibilities of information and communication technology.

3. Sogeti puts the customer in a central role and distinguishes itself by listening carefully and acting quickly.

4. Sogeti commits itself to results on the basis of its excellent professionalism and entrepreneurship.

5. Sogeti guarantees employees a maximum return on their contributed intellectual capital.


7. Sogeti builds a network of strategic alliances with partners that each offer the best solutions in their workspace.

8. Sogeti strives to outperform the expectations of customers and employees continuously.

With ICT-professionalism, Sogeti is eager to contribute simplicity, reliability, availability and efficiency, making ICT an everyday commodity. In this way, ICT becomes a service which organizations simply use in their business. Accordingly, the vision of Sogeti is: Result by impassioned ICT-professionalism.

1.3 Activities

As a 'Business Management and Information Processing Company', Sogeti has a diverse set of business activities that are performed within the Dutch ICT-market. In order to structure these activities, Sogeti has set up eight Business Units (BU). Within each BU, multiple activities are performed to fulfill the overall service concept of Sogeti. The research within this Master thesis has been performed within the BU of Infrastructure Services (IS).

As a broad service provider, Sogeti has developed and published multiple methods for performing IT-activities. Some of them are: Test Management Approach (TMap), the DYA architecture methodology, InFraMe (an infrastructure project approach) and Regatta (an implementation approach). These methods are acknowledged market-wide and in some cases are used as a market standard.

1.4 Customers

Sogeti mainly does business with top-500 companies. This implies that it deliberately focuses on large-scale organizations; small (entrepreneurial) organizations are not part of its target group. Sogeti does not restrict its business to specific markets. However, its customers can be found in four branches: financial services; government; telecom/media; and trade, transport and other businesses.

A few examples of customers are KLM, Vodafone, Shell, ASML, Stater, Rabobank, etc.

1.5 Strategic alliances/partners


CHAPTER 2. SERVER VIRTUALIZATION

Within the BU of Infrastructure Services, an internal question eventually led to this research project. This question concerned the software technique of server virtualization. Before proceeding to the actual research (e.g., the actual question, research design, etc.), it is useful to explore the concept of server virtualization first. To fully understand this area of interest, several subtopics will be addressed in the next few paragraphs.

2.1 The concept of server virtualization

First, it will be explained what server virtualization actually is.

In essence, server virtualization lets one computer perform the jobs of multiple computers. This is done by spreading the resources of one computer across multiple jobs. The name itself already gives a hint: the technique is mainly used for servers and server capacity. (A server is a central computer in a business environment, equipped with an operating system on which a business application can be installed, enabling company personnel to use this application.)

Many businesses use several applications, whereby the number may vary from only a few to hundreds or even thousands of applications. Examples of applications are accounting software, e-mail applications, database software, etc. In many situations, a single server is used for only one application, thus creating a 1-on-1 relationship. With server virtualization, that relationship changes. As server virtualization enables a server to perform multiple jobs simultaneously, one server actually houses multiple applications at once. Basically, the relationship changes to n-on-1, whereby n stands for the number of applications running on one server. For example, if an organization uses 100 applications and applies server virtualization, it could need only 20 servers instead of 100, thereby creating a 5-on-1 relation.
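The n-on-1 arithmetic above can be sketched with a small, purely illustrative helper (the function names are hypothetical, not taken from any real tool):

```python
import math

def consolidation_ratio(applications: int, physical_servers: int) -> float:
    """Return the n in the n-on-1 relationship: applications per server."""
    return applications / physical_servers

def servers_needed(applications: int, ratio: float) -> int:
    """Given a target n-on-1 ratio, how many physical servers remain."""
    return math.ceil(applications / ratio)

# The example from the text: 100 applications on 20 virtualized servers.
print(consolidation_ratio(100, 20))  # 5.0 -> a 5-on-1 relation
print(servers_needed(100, 5.0))      # 20
```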

Why should system administrators not install multiple applications on one server directly, thus without server virtualization? In many situations this is impossible, for a number of reasons. For instance, an application may demand its own server and its own operating system, and conflicts arise when a second application is installed on that same server. Another reason is the operating system itself, as not every application is able to run on the same operating system as other applications.


Applications can be installed in those virtual machines, every application within its own virtual machine. The server virtualization layer manages and regulates all interactions between the physical hardware and virtual machines (and thus the regular applications) that are in use on top of that.

Server virtualization makes sure that applications, which normally interact with the physical hardware directly, now interact with the server virtualization software. Thus, the server virtualization software is a linking pin between the physical hardware and the applications. The server virtualization layer simulates the physical hardware, letting the applications think they are installed directly and alone on the hardware.

Having explained what server virtualization is and how it works, the question remains: why would organizations deploy server virtualization? The advantages vary widely, but the main reason to deploy server virtualization is to improve the utilization of hardware (Golden, 2008). As hardware is frequently utilized at less than 10%, it becomes interesting to install multiple applications on one server. However, many applications demand their own server and will not work if they share resources and are aware of this. This reason, and a number of other reasons to deploy server virtualization, will be given in paragraph 2.3.

To conclude this paragraph, a definition of server virtualization will be given. As various authors use many different definitions, a central definition is deduced here, covering the essence of server virtualization.

Server virtualization is a software concept by which one physical server can operate and behave like multiple servers. With this technique, it becomes possible to run multiple applications at the same time on one server, without those applications being aware of it. Server hardware/resources can be bundled in this way, resulting in a more efficient use of resources, improved flexibility and cost savings.

2.2 The technique of server virtualization

Throughout this thesis, the distinction between a physical and virtual server will be quoted many times. Therefore, these two concepts will be explained.

Physical server


Exhibit 2-1: Construction of a regular, physical server

Virtual server

Basically, server virtualization works as a ‘layer’ between the physical server hardware (grey box) and the installed operating system (blue box). This layer is created by server virtualization software, which is called the “hypervisor” (Golden, 2008). The hypervisor can be installed directly on the physical server hardware. The hypervisor and the physical hardware jointly form the so-called host.

As soon as the host is created, the hypervisor takes control of the physical server hardware and is able to split it into multiple sections. Thereby, it actually simulates the (heavyweight) physical server hardware as multiple (lightweight) servers, all of them independent of each other. Thus, a virtual server is an imitation of a physical server. However, instead of using physical hardware, it uses simulated (virtual) hardware, hence the name 'virtual server' or 'virtual machine'.

When the virtual machines are created, the operating systems (which normally are distributed over multiple physical servers) can be installed within the virtual machines. Continuing, on top of those operating systems, the applications can be installed.

Exhibit 2-2 shows the construction of a host with three virtual machines. The number of virtual machines is not limited to three (as in Exhibit 2-2), but could run up to 40 or even higher.

[Diagram: the hypervisor runs on the physical hardware (CPU, memory, hard disk, network); on top of it run three virtual machines, each consisting of simulated hardware, an operating system and an application (X, Y and Z).]

Exhibit 2-2: Construction of a host with three virtual machines
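The layered construction of Exhibit 2-2 can also be sketched as a small data model. This is a toy illustration only; all class and field names are invented here, not taken from any virtualization product:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    """Simulated hardware plus an operating system and one application."""
    operating_system: str
    application: str

@dataclass
class Host:
    """The hypervisor installed directly on the physical server hardware."""
    hardware: dict                          # e.g. CPU, memory, hard disk, network
    vms: list = field(default_factory=list)

    def create_vm(self, operating_system: str, application: str) -> VirtualMachine:
        """The hypervisor splits the hardware into another independent section."""
        vm = VirtualMachine(operating_system, application)
        self.vms.append(vm)
        return vm

# A host with three virtual machines, as in Exhibit 2-2.
host = Host(hardware={"cpu_cores": 8, "memory_gb": 64})
for os_name, app in [("Linux", "Appl. X"), ("Windows", "Appl. Y"), ("Linux", "Appl. Z")]:
    host.create_vm(os_name, app)
print(len(host.vms))  # 3
```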


2.3 ICT-problems and server virtualization as solution

According to numerous written sources, server virtualization can be a solution for a wide variety of problems or inefficient situations. The most important reasons to implement server virtualization will be discussed here.

• Hardware inefficiency/resource wasting: As discussed in the previous paragraphs, in many situations only one application is installed per server (Golden, 2008). However, current hardware capacity has developed to a level far beyond what is often necessary for a single application. Installing a relatively light application on a relatively heavy server results in a low utilization rate of the hardware (in other words, hardware inefficiency).

In general, only 15-20% of the available hardware capacity in a data center is used. At the level of single servers: some applications use only 5% of the hardware capacity, resulting in a server that is 95% 'empty' and idle. This results in a high redundancy of hardware, sometimes referred to as resource wasting (Virtualization.info, 2008).

Server virtualization makes it possible to 'consolidate' these servers. Golden (2008) defines consolidation as the movement of several physical servers to a single host that supports VMs. By doing so, the number of physical servers can be reduced, with percentages running up to 80% (thus eliminating more than three-quarters of the hardware).
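A back-of-the-envelope consolidation estimate follows from the utilization figures above. This sketch assumes a deliberately simplified additive load model and a hypothetical 80% target utilization per host:

```python
import math

def hosts_needed(server_utilizations, target_utilization=0.8):
    """Estimate how many virtualization hosts of comparable capacity could
    carry the combined load of a set of physical servers, assuming the
    individual loads simply add up (a strong simplification)."""
    total_load = sum(server_utilizations)
    return max(1, math.ceil(total_load / target_utilization))

# Ten physical servers, each only 15% utilized, fit on two hosts loaded
# to at most 80% -- an 80% reduction, in line with the figure above.
print(hosts_needed([0.15] * 10))  # 2
```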

• Hardware availability: Although there is often a single server available for each application, in most cases there are no servers available for testing purposes (Virtualization.info, 2008). For example, it is desirable to test new updates for an application or operating system at a specific test server and not at the normal server that is in use. If organizations wish to set up such a test facility, they need to purchase extra and separate hardware, which would further increase the hardware environment and the hardware inefficiency.

With server virtualization, a VM can be created for testing purposes only. By doing so, no additional hardware needs to be purchased, as the test server is ‘just’ an extra VM on the existing hardware.

• Capacity problems in datacenters: Caused by the explosive growth of servers, the datacenters where these servers are housed are running out of space. In large organizations, some even speak of real estate problems when talking about their full datacenters. In the datacenters in Amsterdam, organizations are almost pushing for a place for their servers (Toet, 2007) and real estate costs are starting to become a problem (Golden, 2008).


• Rising energy costs: The explosive growth of servers has also led to a significant rise in energy costs (Golden, 2008). Besides the fact that every server needs electricity to operate, datacenters also need to be cooled with air-conditioning (more servers installed means more heat produced and more cooling capacity required). Between 2000 and 2006, the energy consumption of American datacenters doubled, and by 2011 it is expected to have doubled again (Siedzik, 2008). With this rise in energy consumption, energy scarcity comes into play, resulting in higher energy prices. The current energy debates and the trends of CO2-emission reduction and "Green IT" make energy reduction more and more desirable.

The consolidation of servers also means that less energy is used, as fewer servers need to be powered and less heat is produced (reducing the need to cool the data center).

• Rising system administration costs: "Computers do not operate on their own" (Golden, 2008). Every server needs the attention of a system administrator to stay 'on air': monitoring its status, installing updates, replacing defective hardware, etc.

Again, consolidating servers reduces the workload of system administrators, resulting in possible savings on personnel costs.

• Increased flexibility/time availability: In the current world of 2008, change prevails on an (almost) daily basis. Business processes change more quickly and more often than in the past, and the IT-environment needs to adapt equally fast. Server virtualization enables a higher flexibility (VMware, 2008). Creating a new server for a new or changed business process is done in a short period: no signature is necessary for the purchase of hardware, no new hardware needs to be installed and configured, etc. A VM is created and put to use in a fraction of the time necessary with physical servers (Virtualization.info, 2008).

• Downtime: Hardware and software can be confronted with defects or failures (Virtualization.info, 2008). If this happens to physical servers, in many instances it leads to a temporarily inaccessible server and application (downtime), resulting in a loss of time and money in operational business processes.

Server virtualization can shorten the downtime or even neutralize it. A so-called 'snapshot' can be created of the VM in use: an exact copy of the server as it is running at a specific moment in time. In case of a software failure, the hypervisor can automatically start a new VM based on that snapshot. In a few seconds, the stuck server is running again, without disturbing the users (too) much. According to VOD (2008), functionality like 'fault isolation' makes sure that other VMs are not disturbed when one VM has difficulties.
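The snapshot-and-restore mechanism described above can be illustrated with a minimal sketch. This is a toy model: real hypervisors snapshot disk and memory state, not a Python dictionary, and the names used here are invented for illustration:

```python
import copy

class VirtualMachine:
    """Toy model of a VM whose entire state is a dictionary."""
    def __init__(self, state):
        self.state = state
        self._snapshot = None

    def take_snapshot(self):
        # An exact copy of the running server at this moment in time.
        self._snapshot = copy.deepcopy(self.state)

    def restore(self):
        # After a software failure, start again from the last snapshot.
        if self._snapshot is None:
            raise RuntimeError("no snapshot available")
        self.state = copy.deepcopy(self._snapshot)

vm = VirtualMachine({"application": "Appl. X", "status": "running"})
vm.take_snapshot()
vm.state["status"] = "crashed"   # a software failure occurs...
vm.restore()                     # ...and the VM is brought back from the snapshot
print(vm.state["status"])  # running
```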


CHAPTER 3. RESEARCH DESIGN

In order to structure the research design, the model of ‘de ballentent’ (de Leeuw, 1996) will be used. Before starting with this design (in paragraph 3.2), the scope of this research project will be examined.

3.1 Scope

The software technique of virtualization has a rather broad range of application; it has grown into a concept that is applied in many ICT-fields. However, to ensure a feasible research project within the boundaries of a Master's thesis, it is necessary to narrow the work field to a smaller research area. Therefore, the scope of this research project is directed at server virtualization. The choice of this type is based on the fact that server virtualization is the oldest and therefore a widespread type of virtualization. This makes it plausible that this research area can provide more information or research data than the other types. Other types of virtualization, like application or desktop virtualization, are excluded from this research.

3.2 Research objective

Several years ago, Sogeti started offering services in the server virtualization domain. Sogeti is one of the companies in the Dutch ICT-market that, in cooperation with their partners and alliances, implements server virtualization solutions as part of their service delivery.

At present, many organizations use server virtualization solutions for different purposes. However, what results have companies obtained by applying the server virtualization concept? Which (side) effects result from server virtualization (e.g., on the environment, flexibility, etc.)? Does it fulfill the needs and expectations of organizations?


3.3 Central question & main questions

The central question that guides this research project is:

“Does the concept of server virtualization meet the promises (of ICT-suppliers) and expectations (of organizations), and which actions can ICT-suppliers and organizations take in order to maximize the value of server virtualization?”

In order to answer the central question, four main questions have been formulated.

1. What are the relevant variables for researching the development of server virtualization?

• By determining which variables play a role within the server virtualization concept, this question sets a significant base for the further research. With these variables, the following questions are guided and can be answered more constructively.

2. Which changes in these variables have customers experienced since the implementation of server virtualization?

• This question examines the results that customers have obtained with the application of server virtualization and the (side) effects this application has led to. This will be based on the previously determined variables. Furthermore, it is relevant to determine the setting of the customer: for example, the size of the organization, the industry, the specific solutions applied, etc.

3. What are the expectations of customers and partners for the near future in the field of server virtualization, and to what extent do these expectations match?

• What possibilities do customers like to see that server virtualization enables in the future? What do partners expect to be likely happening in the field of server virtualization? In the case that there are gaps between these two visions, this would mean that ICT-companies should develop their offerings into more effective products/services, or that the field of server virtualization will bring more possibilities than customers expect.

4. Which aspects are relevant to maximize the value of server virtualization?

• Based on the research results, this question will lead to a final conclusion/sum-up about the value of server virtualization: what are the relevant aspects that maximize the value of server virtualization in the ICT-processes of organizations?

The discussion of these questions takes place in the following chapters. To be precise:

• Question 1: Chapter 4 - Research variables (page 30);
• Question 2: Chapter 5 - Empirical Results – History (page 34);
• Question 3: Chapter 6 - Empirical Results – Future (page 58);
• Question 4: Conclusions (page 72).

(20)

Please note:

Customers: In this thesis, the word customer(s) refers to organizations that have implemented server virtualization solutions. Because this research is market-wide, these organizations do not necessarily have a commercial relationship with Sogeti; the relationship can also be personal (without any form of commerce), or there may be no relationship at all. In all cases, however, these organizations are users of server virtualization solutions and are therefore marked as customers.

Partners: In this thesis, the word partner(s) refers to organizations that deliver the technology (software and/or hardware) that facilitates the server virtualization solutions: these are the suppliers of ‘the products’. Sogeti and other ICT-service providers implement these products at customer sites: they are the suppliers of ‘the service’. Partner organizations are regarded as expert sources, as they have significant knowledge in their field of interest.

3.4 Data sources

The necessary information to answer the research questions will be derived from multiple sources. Even though some of the data sources are already implicitly mentioned, all of them will be listed here. There is a distinction between literature and empirical research.

• Literature:

o (Professional) literature

The first data source that will be used is (professional) literature. Books, articles and presentations of the master of Business Development will be a primary source of information. Because the subject of this thesis is quite specific, IT-literature will play a significant role as well.

o (Internal) documentation

Within Sogeti, there may be documentation available that could be relevant, for example information about customers, reports of seminars, etc. While this would not be a primary source to base the research on, it still could provide some relevant information.

• Empirical research:

o Customers

Because the topic of this research is aimed at value for customers, the customers themselves are the main source of information. As mentioned earlier, this is not limited to commercial customers only.

o Internal expertise (Sogeti)


o External expertise (partners)

Sogeti maintains several strategic alliances and has relationships with a diverse set of partners. Some of these parties have considerable experience with or knowledge about server virtualization, for example by being the supplier of the hardware and software of the server virtualization concept. This external expertise is considered as a data source.

3.5 Measuring methods

Having considered the data sources, it would be relevant to determine how information is derived from these data sources. This is done by combining the research questions (as numbered below), data sources and measuring methods at once.

1. Relevant research variables: the research variables will be determined by scrutinizing the relevant (IT-)literature. Which aspects are frequently mentioned in the literature and which not?

2. Changes in variables: the actual field research will be executed by interviewing customers who have (recently) virtualized. Interviews/case studies are the appropriate methods to gather information, as the nature of this research is mainly qualitative. Possibly, for some variables, quantitative techniques (such as statistics) could be applied. Interviews will be executed in a semi-structured way, which means that the interview is guided by specific questions but still leaves room to probe with additional questions or to follow up on emerging thoughts (Cooper and Schindler, 2006).

3. Future expectations: in line with the previous research question, interviews will be the most important measuring tool. As this question is probing for visions about the future, interviews with internal and external experts will be executed here. Also, customers will be asked about their vision in relation to the future.


3.6 Research process scheme

Below, in Exhibit 3-1, a research process scheme can be found. This shows the different steps in chronological order and how they interrelate.

[Figure: flowchart running from the research objective through preparation/pre-research (research design), Question 1 (determine research variables), Question 2 (empirical research, past), Question 3 (empirical research, future) and Question 4 (aspects for value maximization) to writing the Master Thesis, fed by literature, customers, experts and publications.]

Exhibit 3-1: Research Process Scheme

Before starting the actual research, various preparation tasks will be executed. These include writing a research plan, generating research questions, etc.

After this preparation, the previously mentioned research questions will be answered. Because the questions more or less follow each other chronologically, they will be handled sequentially. As mentioned before, customers will participate in questions two and three; experts will only participate in question three.

3.7 Conceptual model

[Figure: conceptual model with a timeline (past to future) along the top of the technical environment. A central "Research" circle, fed by theoretical concepts, literature and empirical research, connects two objectives: 1. evaluate the achieved value of server virtualization (the past), pointing at "current" server virtualization, and 2. determine actions/developments for maximizing the value of server virtualization (the future), pointing at "improved" server virtualization. Both interact with general technical developments.]

To recapitulate the discussed twofold research objective, research methods, data sources and theoretical concepts, the following conceptual model has been created. It shows the relationships between these different research key components.

The circle containing “Research” is placed in the middle of the conceptual model, as it obviously plays the very central role in this model. As is mentioned in paragraph 3.4, the two main data sources feeding this research are literature and empirical research.

Left and right of the research circle, the two research objectives can be found (in grey tinted rectangles). First, on the left, the evaluation of the value achieved so far is displayed. It is placed at the arrow pointing at “Current server virtualization”, as this objective focuses on what has been achieved with the currently available server virtualization technology. To support this objective, theoretical concepts (see paragraph 3.8) will be used.


To clarify the distinction between the past and the future, a timeline has been added to the top of the model.

Server virtualization (both current and improved) influences and is influenced by “General Technical Developments”, originating from the technical environment. General technical developments (e.g. the development of computer hardware) could open up new possibilities for the development of server virtualization. The reverse also holds: the development of server virtualization might open new possibilities for general technical developments.

3.8 Theoretical concepts

Within the server virtualization concept, multiple theoretical concepts from the master of Business Development can help with the value determination. These concepts offer several handles to describe the value achieved so far. The concepts will be explored here.

Business Development – The process from idea to launch

In the product development literature, several methods exist to guide organizations in the process of developing new products and services. One of the well-known methods is the Stage-Gate model of Cooper (1990), which defines five stages and five gates in the process from idea to launch. In Exhibit 3-2, the original model is displayed.

Exhibit 3-2: Stage-Gate model (Cooper, 1990)


The last mentioned market information is what this research is about. It is a review of what the product of server virtualization currently offers to organizations. Eventually, depending on the exact results, this research could serve as input for the development process to further develop the server virtualization technique.

The Value Cycle - Customer value

Within every business, value is traded between an organization and its customers. As Streefland (2007) mentioned: “The essence of business consists of matching product and customer”. Offering the right product that meets the customer’s needs leads to satisfied customers who are willing to pay for it. In the end, both parties trade value, hence the existence of the value cycle, which is depicted in Exhibit 3-3.

Exhibit 3-3: The Value Cycle

This research is mainly focused on the upper half of the value cycle (as presented in Exhibit 3-3): the so-called customer value. Customers see value when they are offered services that complement their own value systems on physical, intellectual and emotional planes, e.g. quality, value for money and style (Zoethout, 2007). The lower half of the cycle will also come into play, but to a lesser extent (only when discussing some financial parts of an IT-infrastructure).

Organizations buy server virtualization technologies to satisfy particular needs (see the next theoretical concept), and thus to receive value from that technology. The latter is the primary research interest.

Expressed Needs and Latent Needs

Narver, Slater and MacLachlan (2004) make a distinction between responsive market orientation, in which a firm’s orientation is aimed at satisfying expressed needs, and proactive market orientation, in which a firm’s orientation is aimed at satisfying latent needs. The distinction between expressed and latent needs is useful here: expressed needs are needs that customers are fully aware of; these will create or harm the realized customer value (depending on how well the needs are satisfied by a company). In contrast, latent needs are needs that customers are unaware of, but that do create value for them. Satisfying these needs requires from companies a thorough understanding of the customer’s situation and the ‘problems’ for which customers seek solutions.


An example will demonstrate this distinction. If a customer intends to buy a television, he might look for a model that has a certain type of screen and several methods to connect peripheral equipment (altogether: expressed needs). A certain television model meets these needs and is purchased. During installation, the user discovers that the television is able to automatically set up the channels. Within a few seconds, the television has been installed and the user is ready to watch. Although he didn’t ask for the automatic set-up feature, he is pleasantly surprised that it exists and highly values it (latent need).

By satisfying customer needs, a company creates customer value. As this is the primary research interest, it is interesting to determine if the concept of server virtualization meets expressed and/or latent needs.

Technology Life Cycle

The Technology Life Cycle (TLC) is a derivation of the better-known Product Life Cycle (PLC) (Ford and Ryan, 1981). The TLC traces the evolution of a technology from the idea phase through subsequent phases, such as ‘Application Growth’. More precisely, the TLC consists of six phases:

1. Internal Analysis: A company develops or uncovers a potentially profitable technology;

2. Technology Application: The technology is demonstrated and evaluated in terms of future product sales, potential license revenues and perhaps turnkey deals;

3. Application Launch: A company is likely to be developing its technology further (by modification or application to different or perhaps wider applications);

4. Application Growth: Rewards of increasing product sales are occurring and the application of the technology is spreading;

5. Technology Maturity: By the time a technology reaches the phase of maturity, it will have been modified and improved, not only by the originating company but also by competitors who have adopted the technology. Cutting costs in order to keep production profitable is seen as a crucial step (Harvey, 1984);

6. Degraded Technology: The technology enters the final phase when it has reached the point of virtually universal exploitation. The technology has been passed by new technologies that make the originating technology obsolete.

[Figure: S-curve of the penetration of technology, from low to high, across the six stages of development (phases 1 to 6).]

Exhibit 3-4: Technology Life Cycle

By using the TLC, it becomes possible to classify the actual stage of life of a certain technology. By determining its position in the TLC, it becomes clear how far the technology has evolved. This, in turn, can be input for several decisions, such as deciding whether this technology needs further investments (to stimulate growth), whether it would be useful to pay more attention to upcoming technologies (in case of an ending life cycle), etc.

Dominant design

In many new product categories, the market accepts a particular product’s design architecture as one that defines the specifications for the entire product category (Srinivasan, Lilien and Rangaswamy, 2006). This design is a so-called Dominant Design. Examples of dominant designs are the DVD (standard) and the x86 computer architecture.

The emergence of a dominant design can have significant influences in a market. The company that invented the dominant design enjoys several advantages, such as licensing the technology or having the dominant design’s name associated with the name of the company. However, equally or perhaps more important: a dominant design sets the base for future product developments and thus for market competition. A dominant design may lead to a product platform, resulting in many enhancements, hybrids or complementing products. The emergence of such a design can also have important influences on buyer behavior. A new product category with different types of designs makes a market diverse, leading customers to delay their purchase of the product while awaiting a dominant design.


3.9 Analysis methods

The data collection methods are primarily qualitative in nature. Interviews and the use of literature will eventually result in a fair amount of qualitative, handwritten data. To analyze this data, the essence of every answer shall be ‘summarized’ into short statements, which can then be compared to the other provided answers. Cooper and Schindler (2006) mention this as ‘qualification’, in which data is qualified into categories.
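As a purely illustrative sketch of this qualification step (the cases, categories and statements below are hypothetical, not taken from the actual interviews), the grouping of summarized answers into categories could look as follows:

```python
from collections import defaultdict

# Hypothetical coded answers: each interview answer reduced to a short
# statement ("essence") and assigned to a category.
coded_answers = [
    ("Case A", "flexibility", "provisioning a server now takes hours, not weeks"),
    ("Case B", "flexibility", "test environments can be cloned on demand"),
    ("Case A", "costs", "fewer physical servers lowered hardware spend"),
    ("Case C", "costs", "energy bill dropped after consolidation"),
]

def qualify(answers):
    """Group the summarized statements per category, so answers from
    different cases can be compared side by side."""
    categories = defaultdict(list)
    for case, category, statement in answers:
        categories[category].append((case, statement))
    return dict(categories)

grouped = qualify(coded_answers)
for category, statements in sorted(grouped.items()):
    print(f"{category}: {len(statements)} statements")
```

Once grouped this way, statements within one category can be compared across cases, which is exactly the comparison step described above.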

It is not planned to use statistical analysis methods. However, if numerical data (for example, ratio data) is provided by participants, statistical analyses will come into play. For example: information about cost savings might enable this kind of analysis. The actual research data will eventually be explored to determine the possibilities.

As a final analysis method, the SWOT-analysis will be used to sum up the value of server virtualization.

3.10 Preconditions

The preconditions that influence this research project are classified in two categories.

Content:

• As is mentioned in the scope (paragraph 3.1), this research project is only focused on server virtualization; other types of server virtualization (e.g. desktop or application virtualization) will not be part of the research;

• There are two target groups that have an interest in the results of the research. First, the University of Groningen, which will have an academic interest in the research as it serves as a graduation subject. The results will also contribute to the academic literature, which will be discussed in paragraph 3.11. Second, IT-practitioners will have a practical interest in the results. The results should provide them with a better understanding of the field of server virtualization.

Process:

• The research project should be completed in five to six months. Taking into consideration a delay due to the summer holiday, the research project will likely have a time span of six months. The formal start date is April 14th, 2008; the target end date is October 13th, 2008;

3.11 Relevance/contribution to literature

The relevance of this research project manifests itself in the level of aggregation. In the IT-literature, many articles have been written about server virtualization. However, these articles almost always focus on one of two points:

• What organizations could achieve with the (total) concept of server virtualization;

• Which single aspects of IT-processes improve by implementing server virtualization.

Yet, there is no explorative research available which reveals, from a higher level of aggregation, the results to which the implementation of server virtualization leads. What do organizations really achieve, and which pitfalls are experienced but are not yet common knowledge? This research therefore complements the total body of research and written publications about server virtualization.

CHAPTER 4. RESEARCH VARIABLES

The first research question that guides this research project is focused on the determination of the research variables. To study the field of server virtualization in depth, the (most) relevant research variables must be selected. With which indicators can the development of server virtualization be assessed? Which variables are relevant to test in practice?

4.1 Background

An extensive body of literature has been scrutinized, which resulted in the explanation of “What is server virtualization” in chapter 2. Within that chapter, the necessary basic background about server virtualization has been set up. Next to this background, the same literature research has also revealed the ‘hot topics’ in the field of server virtualization. These are subjects that are frequently discussed (or conspicuously absent) within the literature. Evidently, these are subjects that ‘matter’ in the field of server virtualization and are therefore interesting items.

4.2 Variables

Cooper and Schindler (2006) have published the following definition for research variables: “the term variable is used as a synonym for construct or the property being studied”. Having said that, a variable can be viewed as a sub-subject that will be researched.

Knowing which subjects are much discussed in the literature, it is relatively straightforward to use these subjects to determine the research variables. Although the aggregation level of the subjects varies, they can still be converted into feasible variables.

The following is an example of this conversion. Connor and Muller (2007) speak about the need for tools to enable chargeback calculations (the calculations to assign IT-costs to specific departments). Brooks (2007) complements this argument by stating that ‘going virtual’ requires more attention devoted to figuring out ‘who is going to pay’. These authors imply that by applying the server virtualization concept, the assignment of costs to departments will get blurred. With that, a research variable comes into being: the blurring of IT cost assignments/chargebacks in an organization. This research variable can then be used to collect practical information. Do organizations really notice (more) difficulties in charging departments after implementing the server virtualization solution? Eventually, it should give the opportunity to make a final statement about the influence of server virtualization on cost chargebacks.
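To make the chargeback problem concrete, the sketch below shows one simplified usage-based allocation of a host’s cost over several departments; the cost figure, department names and usage shares are invented for illustration and do not come from the research data:

```python
# After consolidation, one physical host carries VMs of several departments,
# so its cost can no longer be billed to a single owner. A simplified
# approach is to allocate the cost proportionally to resource usage.
HOST_MONTHLY_COST = 1200.0  # hypothetical monthly cost of one host

# Hypothetical relative resource usage (e.g. averaged CPU/memory share)
# of each department's virtual machines on this host.
usage_by_department = {"Finance": 0.5, "HR": 0.2, "Logistics": 0.3}

def chargeback(total_cost, usage_shares):
    """Split the host cost over departments, proportional to usage."""
    total_usage = sum(usage_shares.values())
    return {dept: round(total_cost * share / total_usage, 2)
            for dept, share in usage_shares.items()}

bills = chargeback(HOST_MONTHLY_COST, usage_by_department)
print(bills)  # {'Finance': 600.0, 'HR': 240.0, 'Logistics': 360.0}
```

The blurring described above is visible even in this toy model: the allocation is only as fair as the measured usage shares, which fluctuate once VMs share (and move between) hosts.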


Eventually, this led to a list of 89 variables. The target of carrying out a broad research is more or less fulfilled with this number. However, at the same time, this number requires a method or theory to structure the variables. Structuring means classifying the variables into categories and creating a hierarchy between variables, as some variables are part of other, higher-level variables, hence the previously mentioned aggregation level.

The first step in structuring the variables is grouping them into broad categories, for example Financial, Technical, Personnel, etc. This leads to a more organized list of variables; however, it would still benefit from a formal structure or IT-theory.

4.3 InFraMe & the utility principles

The necessary structure is found in (a part of) the InFraMe methodology (Moerman, 2006). This methodology, developed by the Technology Officer of Sogeti, is focused on guiding IT-infrastructure projects. It has the intention to enable a structured way of working and to achieve predictable outcomes for IT-infrastructure projects. InFraMe is written for those who manage or perform these projects and is widely adopted in the IT-industry.

Without discussing InFraMe completely, a small part of the methodology will be explained here. Within InFraMe, three generic demands for an IT-infrastructure are defined: it needs to be flexible, reliable and cost efficient. Each of these three demands can be translated into two quality attributes. Altogether, this leads to six quality attributes, which are called utility principles. Specifically, these principles are (Moerman, 2006):

• Flexible:

o Scalable: Sufficient flexibility to satisfy varying demand from the organization;

o Adaptable: Sufficient flexibility to be easily adapted in case of changing needs of the organization (internal), or a technology push (external).

• Reliable:

o Secure: Sufficient security to ensure the requirements regarding integrity and confidentiality;

o Available: Sufficient availability to meet the demands of the organization.

• Cost efficient:

o Manageable: Sufficient insight in and control of the infrastructure;

o Accountable: Sufficient possibilities to charge and settle resources and accompanying costs.

Exhibit 4-1: Utility principles (adapted from Moerman, 2006)

4.4 Variables & Utility principles combined

The 89 variables, which are preliminarily grouped into broad categories, can be classified into the six utility principles. Doing so enables performing the research in a structured way.

The value of server virtualization can be determined by researching the effects/results of server virtualization on the six utility principles. To do so, the 89 variables provide specific points to research within each principle. If information about these variables is collected, the net effect of server virtualization can be determined along the six utility principles. Exhibit 4-2 shows this research scheme in graphical form.

[Figure: research scheme in which the development of virtualization is assessed through the six utility principles: Scalable, Adaptable, Secure, Available, Manageable and Accountable.]

Exhibit 4-2: Research Scheme

The result of the classification of the variables into the specific groups can be found in Appendix 1: Research variables.
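As an illustration of this research scheme, the fragment below sketches how variables can be grouped under the six utility principles; the variable names shown are examples made up for this sketch, and the complete list of 89 variables is found in Appendix 1:

```python
# Illustrative fragment: research variables grouped under the six
# utility principles of InFraMe (the variable names are invented examples).
variables_by_principle = {
    "Scalable":    ["number of virtual machines per physical host"],
    "Adaptable":   ["effort needed to reconfigure servers"],
    "Secure":      ["isolation between virtual machines"],
    "Available":   ["fail-over possibilities"],
    "Manageable":  ["effort needed to manage the infrastructure"],
    "Accountable": ["blurring of IT cost chargebacks"],
}

UTILITY_PRINCIPLES = ("Scalable", "Adaptable", "Secure",
                      "Available", "Manageable", "Accountable")

def covers_all_principles(classification):
    """A structured classification should leave no principle empty."""
    return all(classification.get(p) for p in UTILITY_PRINCIPLES)

print(covers_all_principles(variables_by_principle))  # True
```

The check mirrors the purpose of the classification: only when every principle is covered by variables can the net effect of server virtualization be determined along all six utility principles.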


4.5 Questionnaire

The list of classified variables has become a research tool to guide this research project. However, it is not yet a concrete research instrument. The variables need a final preparation to be applicable in the information gathering stage.


CHAPTER 5. EMPIRICAL RESULTS – HISTORY

In order to answer the second and third research question, empirical research has been executed to collect the necessary data. As mentioned in chapter three, the participating organizations in this research are customers and partners. The specific organizations that have cooperated are listed in Appendix 3: Participating Organizations.

In this chapter, the research oriented towards the value achieved so far with server virtualization will be discussed. In effect, this is the second research question and also the first part of the twofold research objective. In the next chapter (chapter 6), the research oriented towards value maximization in the future will be explored. Logically, this is the third research question and also the second part of the main research objective.

5.1 Objectives, IT-Problems & Expectations pre-implementation

Server virtualization can help businesses technically and financially towards a higher level of maturity, solving different problems or bottlenecks in the (existing) IT-infrastructure (Golden, 2008). In the current academic and IT-literature, many reasons have been given why organizations should implement the (product) concept of server virtualization; some of them are mentioned in paragraph 2.3. However, which reasons/objectives do organizations really have for using the concept of server virtualization? This has been tested in the research and ultimately led to primary and secondary objectives.

The number of objectives varies across the organizations. While some organizations have one specific primary objective and a few secondary objectives, others have drawn up significant lists of objectives.

Primary objectives

The number one objective, mentioned in 65% of the cases, is to increase flexibility. In the current world, change prevails more and more, and at an increasing speed. Business processes change rapidly and the supporting IT-processes need to change at the same pace. By implementing server virtualization, organizations aim to realize a higher degree of flexibility in their IT-processes, to act and respond to changing environments.


Cost reduction is an equally mentioned objective. Whereas most participants use this objective to achieve a lower Total Cost of Ownership (TCO), some participants had specific items that they wished to decrease (such as hardware costs or energy costs).

Another primary objective is to create a manageable IT-infrastructure. In the opinion of one of the participants, server virtualization should be the tool to adequately manage an (explosively) growing number of servers.

Server virtualization is also seen as a tool to realize a Real Time Infrastructure (RTI). This infrastructure is shared across (internal) customers, business units or applications, where business policies and service-level agreements drive dynamic and automatic optimization of the IT-infrastructure. This would ultimately lead to reduced costs while at the same time increasing the agility and quality of service (Gartner, 2008).

Secondary objectives

While the participating organizations had clear primary objectives, all of them also had several secondary objectives. In some instances, there was some overlap between these secondary objectives and the previously mentioned primary objectives; these will not be mentioned again.

With the growing number of physical servers, the energy consumption (and its costs) is rising equally. As energy becomes a global concern, server virtualization is used to reduce energy consumption; both to decrease costs and to head towards a ‘green’ IT-environment.

As business processes increasingly rely on supporting IT-systems, it becomes obvious that these systems should be running as much as possible, if not always. As 24-7 situations arise, server virtualization is used to create improved fail-over possibilities.

Another objective was to prevent extensive investments. As one IT-manager faced the limitations of his current cooling capabilities, he would definitely have needed a heavy investment in cooling capacity. Server virtualization made this investment unnecessary.

Standardization of hardware has also been mentioned as an objective, as many IT-infrastructures contain a wide range of different hardware.

One of the participants has the intention to be and stay innovative. As server virtualization is regarded as a hot technology (Network World, 2008; Hayes, 2008), it can be used to support this (business) goal.

Expectations


One of these expectations was to have a more modern IT-environment after the implementation of server virtualization. Another expectation was to take a step towards the situation of ‘delivering capacity’. This participant strives to create a setting in which physical servers are no longer delivered to customers, but server capacity instead (dedicated to a business process and unrelated to any physical server).

Related to the objective of creating a manageable infrastructure is the expectation to lower the amount of effort needed to manage the infrastructure. Thus, server virtualization should not only lead to simpler management tasks; it is also expected to reduce the time needed to complete those tasks.

5.2 Perceived risks pre-implementation

Organizations have been asked which risks they perceived before implementing server virtualization.

Novelty turned out to be one of these risks. As server virtualization is/was a ‘new’ solution, the absence of comparable cases (of organizations that had already virtualized) made some organizations wary of server virtualization. For example: one participant started their implementation in 2003. At that date, there was no organization in the Netherlands (comparable in size) that had implemented the specific server virtualization solution yet. This made the implementation hard to benchmark against references. Details and consequences of implementing server virtualization were only visible from theory, not from practice. Also, flaws and faults were not quite visible yet.

The weight of this risk depends on the time (date) at which server virtualization is implemented. Nowadays, server virtualization has been implemented at many organizations, making the novelty risk somewhat smaller than it was several years ago. However, even though this risk has become smaller, organizations still face the fact that the concept is new for the organization itself.

The risk of hardware failures has been mentioned frequently (by more than half of the participants). Any server hardware in use may show faults or defects, causing the server to work sub-optimally or even to shut down completely. With physical servers, this would affect one server. However, a failure in the hardware hosting virtual servers would affect many more servers at once, making a hardware failure a much more serious incident.


The loss of performance has also been mentioned as a risk. Placing multiple virtual servers on a single piece of hardware means that a malfunctioning virtual server may affect the performance of the other servers on that host.

Concerning risks, the industry in which an organization operates has some influence. Evidently, every organization has critical business processes. However, in some industries, business processes are so critical that organizations need to comply with guidelines or requirements. Examples of such organizations are banks and hospitals. In these situations, risks are perceived more intensely.

5.3 Macro-economic factors

All the participants have been asked if macro-economic factors had played any role in deciding whether ‘to virtualize or not to virtualize’, or in the specific expectations that organizations had pre-implementation. These factors include items such as the oil price (and thus energy price), dollar rates, the influence of other (related) companies, etc.

None of the participants claimed that these factors played any role. The decision to virtualize is not based on any of these factors, but solely on ‘internal’ (IT-related) objectives. Although energy savings were part of the business case in some instances, they did not prompt organizations to virtualize.

However, according to one participant, these factors will play a role in the near future. Since oil prices are rising and forcing organizations to cut costs, the IT-department is also expected to minimize expenses. Therefore, these factors could be industry-related, as this participant depends heavily on oil prices in its primary business processes.

5.4 Flexible – Scalable

An IT-infrastructure needs to be sufficiently flexible to satisfy varying demand from the organization. There are several variables that describe this flexibility, which will be explored hereafter.

Number of servers

As mentioned before in paragraph 2.3, server virtualization makes it possible to ‘consolidate’ servers: placing multiple servers on a single piece of server hardware by creating a virtual machine for every server (Golden, 2008). It has been researched how many virtual machines organizations create on one physical server, thus exploring the ratio of ‘one physical server : # virtual machines’.

It reveals that this ratio varies heavily. In ascending order:

• 1:7;
• 1:9;
• 1:10;
• 1:24;
• 1:40;
• 1:40.

It may be concluded that the ratios differ substantially, as the difference between the highest and lowest ratio is almost a factor of 6. However, it must be noted that the number of virtual machines is affected by a number of factors:

• Used hardware: As there is a wide range of server hardware available, the hardware used influences the number of virtual machines. The number of CPUs and the size of the memory banks affect the ratio.

• Characteristics of the VM: A VM differs from other VMs in certain characteristics, such as:

o the operating system installed in the VM;
o the CPU and memory usage of its application;
o the database intensity of the VM.

• Capacity surplus: Regularly, a host is not constantly fully utilized. As the required capacity of the VMs fluctuates, a certain amount of host capacity functions as a surplus. The size of this surplus fluctuates, but on average it is around 30%.
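The factors above can be combined into a rough sizing sketch. The sketch below is illustrative only: the host and VM figures (a 16-core 2.5 GHz host with 64 GB of memory, and light VMs) are assumptions, not measurements from this research. With the roughly 30% capacity surplus mentioned above, the scarcer resource (CPU or memory) bounds the achievable consolidation ratio.

```python
# Rough sketch of how the factors above bound the consolidation ratio.
# All figures are illustrative assumptions, not data from this study.

def max_vms(host_cpu_ghz, host_mem_gb, vm_cpu_ghz, vm_mem_gb, headroom=0.30):
    """Estimate how many VMs fit on a host while keeping a capacity surplus."""
    usable_cpu = host_cpu_ghz * (1 - headroom)
    usable_mem = host_mem_gb * (1 - headroom)
    # The scarcer resource (CPU or memory) determines the ratio.
    return min(int(usable_cpu // vm_cpu_ghz), int(usable_mem // vm_mem_gb))

# Example: a 16-core 2.5 GHz host with 64 GB RAM, light VMs needing
# 0.5 GHz and 2 GB each, with the ~30% surplus mentioned above.
print(max_vms(16 * 2.5, 64, 0.5, 2.0))  # -> 22
```

In this example memory, not CPU, is the limiting resource, which illustrates why the observed ratios vary so strongly with the hardware used and the VM characteristics.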

As organizations move their IT-infrastructure towards a virtual environment, the physical servers need to be migrated into virtual machines on hosts. As mentioned in paragraph 2.6, this is called Physical to Virtual (P2V) (Virtualization.info, 2008). If physical servers are migrated to virtual ones, it is interesting to investigate how many physical servers are left. In other words: what percentage of the servers is virtualized?

It turns out that this percentage varies widely among the participants. The lowest percentage is 10%, with nine out of ten servers still physical. The highest 'score' is 93%, having almost all servers virtualized. The intermediate values are 23%, 50%, 75%, 88% and 90%, resulting in an average of about 60%. Although the bandwidth of the percentages is fairly wide (making the average less powerful), it could be stated that roughly 60% of the servers are virtualized.
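The average can be verified directly from the listed rates; the minimal check below simply reproduces the percentages quoted above (the exact mean is slightly above 61%, in line with the roughly 60% stated).

```python
# Server-virtualization rates reported by the participants (percent),
# as quoted in the text above.
rates = [10, 23, 50, 75, 88, 90, 93]

average = sum(rates) / len(rates)
print(round(average, 1))  # -> 61.3
```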

One critical aspect is the time spent on migration. If one company started its server virtualization project one year ago and another started three years ago, the figures become less comparable. The results vary, however. One company started one year ago and also has the lowest server virtualization rate, but this pattern does not hold across all participants: one company that started in 2006 has achieved a higher score than a company that started in 2005. It seems that time is not the only factor in server virtualization rates; policies and attitude likely also play a role.
