
Public sector benchmarking

Multiple user groups and their characteristics: performance indicators and information processing

MASTER THESIS MSc BA Organizational & Management Control (27-06-2013)

Kira Beekhuis


Public sector benchmarking

Multiple user groups and their characteristics: performance indicators and information processing

Master Thesis University of Groningen Faculty of Economics and Business

MSc Business Administration

Specialization: MSc BA Organizational & Management Control

Kira Beekhuis

Address: Herman Colleniusstraat 7
Postal code: 9718 KP
City: Groningen
Mobile: 0641587204
Student number: 1681664
Date of submission: 27-06-2013

First supervisor Rijksuniversiteit Groningen: Dr. S. Tillema
Second supervisor Rijksuniversiteit Groningen: M. Paping, MSc

ABSTRACT

Public benchmarking differs from private benchmarking. In public benchmarking, multiple user groups of the benchmarking information can be distinguished: management, statutory bodies, and clients. These user groups are interested in different types of performance indicators. Besides their different preferences regarding performance indicators, the user groups also differ in their capacity and willingness to process the information that comes from the benchmark. These differences are discussed, and subsequently an interpretive case study is performed. We conclude that management's preference for certain types of performance indicators changes when a benchmark is used by multiple user groups. Additionally, if the multiple user groups are really to use the benchmark information to achieve their objectives, it is very important to provide tailor-made information to every user group. When one benchmark serves multiple user groups, serious problems can arise that may lead to a depreciation of the benchmark information that is generated.

Key words: public benchmarking, performance indicators, capacity and willingness to process information.

BY ORDER OF THE RIJKSUNIVERSITEIT GRONINGEN AND SIGNIFICANT

This thesis has been written not only by order of the Rijksuniversiteit Groningen, but also by order of Significant, a research and consultancy organization situated in Barneveld. The model developed here can be used by Significant in the future. Significant is active in public benchmarking, in particular in the following phases: developing performance indicators, measuring performance on those indicators, and reporting the outcomes. Although Significant is the principal, the model is more generally applicable.


Table of contents

1. Introduction
2. Literature and models
2.1 Public sector benchmarking
2.1.1 Development of the concept "benchmarking"
2.1.2 Characteristics of benchmarking in the public sector
2.2 Performance indicators in the public sector
2.2.1 Types of performance indicators in the public sector
2.2.2 Users of different types of performance indicators
2.2.3 Model
2.3 Capacity and willingness to process information
2.3.1 Capacity to process information
2.3.2 Willingness to process information
2.3.3 Capacities and willingness to process information per user group
3. Research methodology
3.1 Research type
3.2 Data collection
3.3 Data analysis
4. Results and Analysis
4.1 Case study 1: Benchmark WMO in municipalities
4.1.1 The benchmark
4.1.2 The performance indicators
4.1.3 Use of the benchmark information by the user groups
4.2 Case study 2: Benchmark Zichtbare Zorg / KiesBeter
4.2.1 The benchmark
4.2.2 The performance indicators
4.3 Case study 3: KNMP
4.3.1 The benchmark
4.3.2 The performance indicators
4.3.3 Use of the benchmark information by the user group
4.4 Case study 4: Benchmark WWB in municipalities
4.4.1 The benchmark
4.4.2 The performance indicators
4.4.3 Use of the benchmark information by the user group
4.5 Case study 5: RBB Groep
4.5.1 The benchmark
4.5.2 The performance indicators
4.5.3 Use of the benchmark information by the user group
4.6 Cross-case analysis
5. Conclusion and Discussion
6. Literature
Appendix A: Interview Case benchmark WMO

1. Introduction

We live in a world with highly competitive markets and a rapidly changing global economy. Because of these circumstances, organizations adopt and implement a wide variety of innovative management philosophies, approaches, and techniques (Dorsch & Yasin, 1998). Public organizations are no exception to this. By public organizations we mean governmental organizations and other not-for-profit organizations like health care organizations, educational establishments, and public transport organizations. Due to these changing circumstances, the emphasis in public organizations is nowadays on customer value, stakeholders' interests, and performance; public organizations also want to deliver 'best value' to their clients (Magd & Curry, 2003). A contract culture has developed; many public services are allocated by tendering processes, and specified performance-based contracts are formulated (Greiling, 2006). This change in culture in the public sector can also be seen in a development called New Public Management (NPM). The philosophy of this movement is to cut budgets while simultaneously improving the efficiency and effectiveness of the public sector (Van Thiel & De Leeuw, 2002:268). According to the NPM philosophy, this can be done by introducing market forces and decentralization. This has led to increased attention to performance measurement in the public sector (Jansen, 2004; Greiling, 2006). Before this new situation, the public sector was focused on inputs instead of results (outputs/outcomes) (Jansen, 2004). The change from an emphasis on inputs towards the new emphasis on the results (outputs/outcomes) of the public sector as a service provider can, for example, be seen in the coalition agreement of the Dutch government: "To judge the effects of Youth Care, performance data will be delivered and published" (Regeerakkoord, 2012:24).

Summing up, it can be stated that attention to performance measurement in the public sector is increasing. Part of this development is the increasing attention to benchmarking in the public sector.

organizations that are recognized as representing best practices for the purpose of organizational improvement." This definition, however, does not cover all purposes of benchmarking. As it focuses on the purpose of organizational improvement, benchmarking is presented as a management tool. But benchmarking can have more purposes (especially in the public sector), for example the demonstration of public accountability (Bowerman & Ball, 2000).

When it comes to benchmarking in the public sector (and, in a broader sense, to performance measurement in the public sector), an important question is: which performance indicators have to be used to compare the performance of organizations? This is particularly complicated in the public sector because performance is more difficult to measure in non-profit organizations than in profit organizations. One cannot easily rely on financial measures (like the ultimate financial measure in for-profit organizations: profit) in not-for-profit organizations. In other words, the public sector is often characterized by hard-to-measure activities, and this leads to difficulties in selecting and interpreting appropriate performance measures (Cavalluzzo & Ittner, 2004:244). Even when it is possible to measure activities, benchmarking in the public sector can still be a problem because sometimes only one party executes a task, which makes it hard to find comparison partners.

Hence, there is no single answer to the question of which types of indicators have to be used to measure performance in public benchmarking. Complicating matters, there are differences between the situations in which benchmarking is carried out. This implies that the information obtained from benchmarking will be used by different actors for different purposes (Tillema, 2010). For example, a benchmark of hospitals may be used by health insurers to choose the hospitals from which they want to buy health care, while it may be used by patients to choose the hospital with the best medical treatment for their specific disease. These different users of benchmarking information could imply that different measures of performance are needed for each user group to generate the required information.

When the benchmarking information has been generated, it has to be presented to the user group. The question that then arises is how the information has to be presented to create the right circumstances for optimal use of it, since the information processing capabilities and the willingness to process the information of the various user groups have to be considered.

This thesis will answer the following research question: Which type of performance-indicators should be selected for various public benchmarking user groups and how should they be used in preparing benchmark reports for these groups? This question is first addressed by means of a literature review. After the literature review, an interpretative case study is performed. This case study (consisting of multiple cases) is based on interviews with employees of various public organizations who make use of benchmarking.

2. Literature and models

This chapter consists of the following sections. In section 2.1 public sector benchmarking will be discussed. Firstly, the development of the concept "benchmarking" is discussed (2.1.1). Secondly, the specific characteristics of public sector benchmarking are discussed in comparison with those of private sector benchmarking (2.1.2). In section 2.2 performance indicators that can be used in benchmarking in the public sector are discussed. First the various types of performance indicators are discussed (2.2.1), and then these types are linked to the various user groups of the benchmarking information (2.2.2). A model is developed (2.2.3). After that, in section 2.3 the capacity (2.3.1) and willingness (2.3.2) of the various user groups (2.3.3) to process information are discussed.

2.1 Public sector benchmarking

2.1.1 Development of the concept “benchmarking”

To understand the role of benchmarking in the public sector, it is important to have some knowledge of the development of benchmarking, which started in the private sector. In 1979 Xerox, an American multinational document management corporation, was the first company to make use of benchmarking. Xerox examined its unit manufacturing costs by comparing the operating capabilities, features, and mechanical parts of its copiers with those of its competitors. Based on the results of the comparison, Xerox scrutinized its products and processes. Benchmarking became a core technique to achieve superior quality in the products and processes of Xerox (Fong et al., 1998:407). One of the founders of the benchmarking literature was Robert Camp. He published the first book on benchmarking in 1989. He defined benchmarking as a search for and an implementation of best practices to achieve performance improvement (Camp, 1989). Camp developed a benchmarking model which consists of the successive steps the organization has to take in the benchmarking process: planning, analysis, integration, and action.

Summing up, although the benchmarking concept has existed for decades, the definition of "benchmarking" and the description of the steps in the benchmarking process have remained stable over time.

2.1.2 Characteristics of benchmarking in the public sector

In the late 1990s, benchmarking was proposed as a management technique from which the public sector could also benefit (Bowerman & Ball, 2000). However, when it comes to public sector benchmarking, the theory of private sector benchmarking cannot simply be adopted, and a distinction between private and public benchmarking has to be made (Bowerman et al., 2002; Tillema, 2007). The four main differences between the sectors are outlined below. The differences are closely related to each other.

The first difference between public and private benchmarking is that private benchmarking is focused on best practices while in public benchmarking organizations can be focused on achieving an acceptable or average level of performance (Bowerman et al., 2002). This difference is due to the near absence of market mechanisms such as competition in the public sector. The presence of market mechanisms implies that consumers have a choice: consumers can compare the characteristics of the various products or services and draw their conclusions on which supplier fits best with their needs. In the public sector this consumer mobility is limited (Van Helden & Tillema, 2005). This difference can also be seen in the distinction between indicator-benchmarking and idea-benchmarking. Indicator-benchmarking (results-level) is directed towards comparing the performance of an organization, while not being concerned with the underlying processes that have led to that performance. Idea-benchmarking (process-level), conversely, is aimed at comprehending the underlying processes (Fedor et al., 1996; Tillema, 2010). In the definitions of benchmarking at the beginning of section 2.1, looking for ‘best practices’, and hence, processes, is an important element. These definitions show resemblance with idea-benchmarking. In practice, however, when it comes to public benchmarking, the focus is on indicator-benchmarking (Ammons et al., 2001). Where the focus in the private sector is mainly on idea-benchmarking, the focus in the public sector is mainly on indicator-benchmarking. This means that public organizations often make use of benchmarking to formulate an answer to the question: ‘How does our performance compare to comparable organizations (indicator-benchmarking)?’ instead of: ‘How do we, and our benchmark partners, come to this performance (idea-benchmarking)?’

performance improvement (which is the only objective in the private sector), there are other objectives (from the view of management): demonstrating accountability towards the public, complying with the external demands for comparative data, and defending or justifying performance (Bowerman & Ball, 2000). These different natures and objectives are not mutually exclusive. The different natures and objectives in public benchmarking also come from the fact that there are, besides management, multiple initiators of benchmark processes with various intentions. These intentions can vary: creating an internal management tool, but also creating economic or institutional pressure (Tillema, 2010). The various initiators can be categorized into the following groups: management, and stakeholders, who are subdivided into statutory bodies and clients (organizations). Note that citizens do not form their own group; an individual citizen is not powerful enough to start a benchmark and use the benchmarking information (Brignall & Modell, 2000). The statutory bodies thereby represent citizens. The various initiator groups will be briefly described.

a) Management

This group consists of the management of the public organization that is performing the benchmark.

b) Stakeholders: Statutory bodies

Statutory bodies have a formal role in the steering and control of public organizations. Statutory bodies can be elected bodies (democratic control) like a local council, supervisory bodies like an inspector, and funders or purchasers.

c) Stakeholders: Client (organizations)

Clients cannot on their own initiate a benchmark and need representatives in the form of client organizations for this.

The initiators of a benchmark do not necessarily have to belong to the same group as the (primary) users. For example, a statutory body can initiate a benchmark which is intended to be used by the user group of clients.

Summing up, the essential differences between private and public benchmarking can be seen in Table 1 below.

Table 1 Differences between private and public benchmarking

Private benchmarking:
• Focuses on best practices
• Emphasis on idea-benchmarking
• Relatively straightforward development of performance indicators
• Voluntary in nature
• Benchmarking has a single objective
• One initiator (management)
• Information is only disclosed inside the organization
• One user group (management)

Public benchmarking:
• Focuses on acceptable performance
• Emphasis on indicator-benchmarking
• Many difficulties with developing performance indicators: a) disagreement on which performance indicators are right; b) politics may change what desired performance is; c) translation of rather vague plans into measurable performance indicators; d) interpretation of the performance indicators
• Can be either voluntary, compulsory, or defensive in nature
• Benchmarking has multiple objectives
• Multiple initiators (management, statutory bodies, clients (organizations))
• Information is also disclosed outside the organization
• Multiple user groups (management, statutory bodies, clients)

2.2 Performance indicators in the public sector

In order to benchmark, one needs to know what to benchmark. On which aspects will the comparison between two or more entities be made? What is meant by the performance of the organization? In this section we review the literature on performance measurement in the public sector.

2.2.1 Types of performance indicators in the public sector

The two most well-known models when it comes to performance measurement in the public sector are the ‘3Es’ and the ‘IOO’ model. The 3Es model (described in Walker et al., 2010:8) distinguishes economy (costs of inputs), efficiency (cost per unit of output), and effectiveness (achievement of the objectives). The IOO model (described in Walker et al., 2010:9) distinguishes inputs-outputs-outcomes. This distinction between outputs and outcomes is important; outcomes are the consequences of the produced outputs. Both models, the 3Es model and the IOO-model, are based on the sequence of steps in a process of service production (Boyne, 2002:17). Jansen (2004) adds an indicator type to the IOO-model. He distinguishes four types: input (resources), throughput (production processes), output (products), and outcome (effects).
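To illustrate the distinction between the three Es, consider a small worked example with purely hypothetical figures (the numbers are illustrative only and are not taken from any of the cases discussed later; effectiveness is expressed here as the share of the objective that is achieved, which is only one possible operationalization). Suppose a public service spends a budget of 500,000 euros to process 10,000 applications, of which 9,000 are handled within the legal term that was set as the objective. In the sense used above:

\[
\text{economy} = \text{cost of inputs} = 500{,}000 \text{ euros}
\]
\[
\text{efficiency} = \frac{\text{cost of inputs}}{\text{units of output}} = \frac{500{,}000 \text{ euros}}{10{,}000 \text{ applications}} = 50 \text{ euros per application}
\]
\[
\text{effectiveness} = \frac{\text{outputs that meet the objective}}{\text{total outputs}} = \frac{9{,}000}{10{,}000} = 90\%
\]

The same process can thus score well on economy and efficiency while scoring poorly on effectiveness (or the other way around), which is why these models treat the steps of the production process, and the corresponding Es, as separate dimensions of performance.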

Besides the abovementioned models, there is an upcoming type of performance indicators; quality is getting more and more important when it comes to performance measurement (Cavalluzzo & Ittner, 2004). This can be seen in the Netherlands as well. Quotes such as "We (…) set higher quality standards for teachers and leaders of schools" (Regeerakkoord, 2012:1), "A strong economy benefits from a high quality of the provision of services by the government" (Regeerakkoord, 2012:2), and "First of all, we want to improve the quality of the delivered care" (Regeerakkoord, 2012:21) are all examples of this (Regeerakkoord VVD & PvdA, 29 oktober 2012). Smith (1990) already stated that there was an urgent need in the public sector for reliable measures of quality. Although quality in the public sector is often defined as achieving a maximum of customer satisfaction (Loffler, 1996:2-3), quality indicators can refer to input, throughput, output, and outcome. The difficulty with performance indicators of quality is that quality is hard to define, and a complex and hard-to-measure concept (Loffler, 1996).

(output/outcome). We made the distinction between the steps in the production/service process (input-throughput-output-outcome). Within every step, one could generate more quantitative performance information by making use of "3Es" indicators (economy-efficiency-effectiveness), and/or one could generate more qualitative information by measuring quality (the quality of input-throughput-output-outcome). Note that with 'qualitative information' we mean 'information concerning the quality of input/throughput/output/outcome'. Qualitative information can, however, be generated by making use of both qualitative and quantitative indicators. As one can see, input is linked to economy, throughput is linked to efficiency (efficiency is most aimed at the production processes, at converting inputs as efficiently as possible into outputs), and output and outcome are linked to effectiveness.

Figure 1 Types of performance indicators in the public sector

Input


Throughput

This category of indicators is about the processes of the organization. One may judge throughput in a quantitative way by measuring efficiency (cost per unit of output; in other words: how efficiently is the organization converting inputs into outputs?). Throughput can also be measured in a qualitative way by judging the quality of the production/service processes, which means that the way output/outcome is generated will be judged. In this situation, the question can be whether employees behave in the desired way to realize certain outputs/outcomes. If the ability to measure output is low, and the knowledge of the desired behavior and actions is high, throughput can be the right dimension for judging performance (Ouchi, 1979; Merchant & Van der Stede, 2007). However, in the public sector, one may not always know what the desired behavior/actions are for a certain output or outcome (Hofstede, 1981). For example, what do police officers have to do to create a sense of safety in society? It may also be difficult to use throughput indicators due to the fact that in public organizations activities are often not of a repetitive nature. For example, what actions are desirable for a unique campaign?

Output

A key element of the reforms in the public sector is the move away from input-oriented performance indicators towards output-oriented performance indicators (Erridge et al., 1998; Greiling, 2005). However, these indicators are difficult to use when objectives are ambiguous or when outputs are non-measurable. In the public sector, objectives are often ambiguous due to conflicts of interests and changing situations in politics; the people who have a say in the activities may not share the same view on the objectives of these activities (Hofstede, 1981). Outputs of public activities are often not measurable, because they are frequently defined in qualitative and rather vague terms (Hofstede, 1981; Jansen, 2004). When objectives are unambiguous and outputs are measurable, it is possible to make use of this category of indicators (Ouchi, 1979; Merchant & Van der Stede, 2007). Output can be measured quantitatively (effectiveness; did the organization generate the (number of) desired outputs?) and qualitatively (are the outputs of the desired quality?).

Outcome

Outcomes are often influenced by external factors that are difficult to control for. Outcome can be measured quantitatively (effectiveness; did the organization generate the desired outcome?) and qualitatively (is the outcome of the desired quality?).

2.2.2 Users of different types of performance indicators

Due to the fact that there are multiple possible initiators of the benchmark, with multiple objectives, the focus can be on various types of performance indicators. Ideally, the initiator of the benchmark will try to collect performance indicators that the user of the information is interested in. As the definition of performance depends on the user of the information, the choice for input/throughput/output/outcome indicators will also depend on the user of the benchmark information (Jansen, 2004). We will now discuss, per user group, the performance indicators that the users are interested in.

Management

As this thesis will argue, management can and does make use of multiple indicator types. The focus, however, will be on input, throughput, and output; the focus of managers is on efficiently producing outputs from inputs (Jansen, 2004:56). Of all the user groups, management is the closest to the process and to the information about the process, and is thereby most of the time able to translate output indicators into throughput and input indicators. In general, we can state that management has the tendency to be more focused on financial information than on non-financial information (Cardinaels & Van Veen-Dirks, 2010). However, due to the difficulties in the public sector with measuring performance in financial terms, this cannot be said as easily of managers of public organizations. Nevertheless, it is valuable to keep in mind that in general management is inclined to make use of quantitative information.

• Management will make use of input indicators in a quantitative sense because they are commonly used, readily available, easy to measure, and easy to interpret. Besides that, the current focus on lowering budgets creates a focus on these indicators. Input indicators that generate qualitative information will also be used due to the increased focus on quality in the public sector, which starts with the inputs;

• Management will also make use of throughput indicators; this can, for example, be seen in the nowadays widely used Total Quality Management approach, which tries to ensure that with every step in the production or service delivery process quality/value is added for the customer (Massey, 1999);

• Management will make use of output indicators in a quantitative sense (when outputs are unambiguous and measurable) and in a qualitative sense (public organizations want to deliver best value to their clients) to measure the performance of the organization;

• Outcome indicators can be used by management, but are not commonly used due to the fact that management has less control over the outcomes of the organization than over its outputs, and also possesses less information on its outcomes than on its outputs (Jansen, 2004).

Statutory bodies

• The user group of statutory bodies is a rather broad user group. Some statutory bodies will be interested in input indicators, for example supervisory bodies. These supervisory bodies are interested in the way the budgets of the public organization are spent (quantitative). Other statutory bodies will not make use of input indicators because they do not have the required data to extract useful information from them, and it would be too detailed for their interests (Jansen, 2004);

• Some statutory bodies will make use of quantitative throughput indicators (efficiency indicators) because they are interested in the way the budgets of the public organization are put into the production processes. Statutory bodies will not make use of qualitative throughput information because they do not easily receive the required data, most of the time do not have the required knowledge to understand the process and determine the desired actions/behavior, and/or it would be too detailed for their interests (Jansen, 2004);

• Statutory bodies will make use of output indicators because this is performance that really affects them. For example, politicians are the ones who select the outputs to materialize the outcomes; based on the desired outcome, the desired outputs are determined (Jansen, 2004). They want to know whether the right outputs are produced (in a quantitative and qualitative sense), not exactly how these outputs are realized;


Clients

• Clients will not make use of input indicators because they do not have the required data to extract useful information from them, and it would be too detailed for their interests (Jansen, 2004);

• Clients will not make use of throughput indicators because they do not have the required data, do not have the required knowledge to understand the process and determine the desired actions/behavior, and/or it would be too detailed for their interests (Jansen, 2004);

• Clients will make use of output indicators, especially qualitative ones. This means they focus on indicators that reflect their expectations of what high-quality output of a public organization looks like (Loffler, 1996);

• Clients will make use of outcome indicators (especially qualitative ones), because it is the outcome that affects them. This means they focus on indicators that reflect their expectations of what a high-quality outcome of a public organization looks like (Loffler, 1996).
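As a rough summary of the preferences discussed in this section, the relation between user groups and indicator types can be sketched as a simple lookup structure. This is only an illustrative reading of the argument above; the labels, the shorthand descriptions, and the Python representation are my own and are not part of any benchmark design discussed in this thesis.

```python
# Illustrative summary of section 2.2.2: which indicator types each user group
# is expected to be interested in, and in which sense. The wording is shorthand
# for the discussion above, not an exhaustive or authoritative classification.
INDICATOR_PREFERENCES = {
    "management": {
        "input": "quantitative and qualitative",
        "throughput": "quantitative and qualitative",
        "output": "quantitative and qualitative",
        "outcome": "possible, but not commonly used",
    },
    "statutory bodies": {
        "input": "quantitative, for some bodies (e.g. supervisory bodies)",
        "throughput": "quantitative (efficiency) only",
        "output": "quantitative and qualitative",
        # (outcome preferences for statutory bodies are not summarized here)
    },
    "clients": {
        "input": "not used",
        "throughput": "not used",
        "output": "mainly qualitative",
        "outcome": "mainly qualitative",
    },
}


def preferred_indicators(user_group: str) -> dict[str, str]:
    """Return the sketched indicator-type preferences for one user group."""
    return INDICATOR_PREFERENCES[user_group]


if __name__ == "__main__":
    # Print the mapping per user group.
    for group, preferences in INDICATOR_PREFERENCES.items():
        print(group)
        for indicator_type, use in preferences.items():
            print(f"  {indicator_type}: {use}")
```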

2.2.3 Model

Figure 2 User groups and indicators in public benchmarking processes

2.3 Capacity and willingness to process information

We have discussed the specific characteristics of public benchmarking and developed a model with the performance indicator types that the various user groups in public benchmarking are interested in. When a benchmark has to be performed, we know which performance indicator types to use for certain user groups. After the benchmark is performed, the results on the performance indicators have to be presented to the specific user group. If we lived in a perfectly rational world, section 2.2 would be sufficient. However, when we describe public organizations and their management, statutory bodies, and clients, we describe people. Human beings, unfortunately, have to deal with some limitations. These limitations influence the way people deal with (benchmark) information: their capacity and willingness to process information.

2.3.1 Capacity to process information

The capacity to process information depends on the information processing capacities of the individual and/or the organization. Information processing means "the way in which information is retrieved, transformed, reduced, elaborated, stored, recovered, and retrieved" (Blackwell et al., 2006).

When it comes to individuals and their information processing capacities, it is important to notice that individuals have to deal with bounded rationality (Simon, 1979). This means that people are restricted by a variety of constraints when processing information and making decisions (Buelens et al., 2006). These constraints can be personal or environmental characteristics that prevent rational decision-making. Examples of these constraints are: complexity of the problem, uncertainty of the problem, limited capacity of the human mind, amount of information, timeliness of the information, importance of the decision, and time demands (Simon, 1979:510). In other words, the information processing capacities of an individual are limited. We will discuss the limitations by discussing the parts of information processing theory that are especially important when it comes to processing the information that comes from the benchmarking report: the amount of information, the indicators that are used, the communication of the information, and the presentation of the information.

Amount of information

There is a trade-off between, on the one hand, diminishing the number of indicators for reasons of convenience and time constraints, while on the other hand there are so many ways to measure and it is always possible to know more (Van Thiel & Leeuw, 2002). The limitations on the human capacity to process information come from the fact that the short-term memory has a low capacity and information is only slowly stored in the long-term memory (Thompson & Cats-Baril, 2003). Most people are only able to consider up to seven performance measures (Atkinson & McCrindell, 1997). This number comes from a basic theory about memory (Miller, 1956), which states that the short-term memory can hold 5-9 chunks of information (a phenomenon called 'the magical number seven, plus or minus two'), where a chunk can be any meaningful unit. Besides other causes, too much information can lead to information overload. Information overload occurs when "the information we have to work with exceeds our processing capacity" (Buelens et al., 2006:315). With too much information, decision effectiveness can diminish (Keller & Staelin, 1987). To prevent information overload, summaries with the most important information can be used (Buelens et al., 2006).

Indicators

Due to the limitations of the human mind, it is important to develop indicators only for the important aspects of performance (Van Thiel & Leeuw, 2002). Besides that, indicators have to be SMART: Specific, Measurable, Achievable, Relevant, and Time-related (Loffler, 1996:8). Another author states that information has to be accurate (free from errors), complete (covering all important indicators), current (reflecting existing circumstances), timely (available in sufficient time to process it), and relevant (Beynon-Davies, 2009).

Communication of information

The usability of an information system is determined by its learnability, memorability, efficiency in use, reliability, and the user satisfaction it generates (Beynon-Davies, 2009).

Presentation of information

The way in which information is presented is also very important for processing the information in the right way. It is closely related to the communication of information, but due to its importance we treat it as a separate field of interest. The fact that presentation influences information processing indicates that the user has cognitive limitations (Van Veen-Dirks, 2012:18). Some general advice for the presentation of information is that a standardized, comparable layout is preferred, that visualization of the content helps to interpret the information, and that comments about the performance have to be included (Brun & Siegel, 2006:491). Comments on the performance are important for the interpretation of the information; for example, comments explaining what the indicators and their scores mean, or summaries (Brun & Siegel, 2006). Although visualization and comparability are important, one has to deal with them carefully. In benchmarking, for example, league tables (another term is ranking charts) are a commonly used form to present the results of a benchmark, although there is a lot of criticism of this way of presenting information (Tillema, 2010). This criticism exists because league tables try to report the performance of an organization in one single number, which leads to doubtful reliability. Another aspect which has to be used carefully is the use of color as an attention mechanism; people respond to colors in different ways (Mandel, 1997). In other words, presentation issues are the clarity of the information (it has to be understandable for its intended purpose), the detail of the information (which should be sufficient for its intended use), and the medium that is used (which should be appropriate for its use, visualized in the right way) (Beynon-Davies, 2009).

2.3.2 Willingness to process information

Amount of information

The information processor has to be willing to search for and go through the information. This is simply a cost/benefit perspective; the time and effort that must be spent have to be smaller than the perceived benefit of searching for and going through the information (Blackwell et al., 2006). This means that the amount of information is important; more information means more time and effort, which means that the benefits of more information have to balance this extra time and effort. An information overload leads to unwillingness to process the information (Eppler & Mengis, 2004).

Indicators

Again, it is important that indicators are SMART. If, for example, a person believes the indicators are not relevant, he or she is less willing to process the information that comes from the indicators. After all, non-relevant information is not worth the time and effort to process and make use of. The indicators used have to be perceived as being of good quality, and the information that comes from them has to be seen as usable.

Communication of information

The communication of the benchmarking information is very important to create the willingness to process the information. Again, the personal characteristics of the sender and the receiver are very important. For example, it is possible that the receiver of the information is very dogmatic and is not willing to consider information that differs from a preconceived view (Thompson & Cats-Baril, 2003). To create willingness when communicating the information, it is useful to make use of information systems. Information systems are able to address individual preferences and differences (Thompson & Cats-Baril, 2003).

Presentation of information


2.3.3 Capacities and willingness to process information per user group

Every user group has its own characteristics. This also becomes visible in the capacity and willingness to process the benchmark information; differences can be noticed between the various user groups. We will describe these characteristics per user group.

Management

When it comes to the amount of information, management is the most capable of the user groups of dealing with large amounts of information. On the other hand, benchmarking reports have to be manageable; management often has tasks that are brief, fragmented, and varied (Thompson & Cats-Baril, 2003). Bulky reports end up under a big pile of paper; in that case management is not only unable but also unwilling to process the information. In general, we could say that performance reports should be no longer than ten pages (Brun & Siegel, 2006).

Of all the user groups, management is closest to the production or service delivery process. This means that this user group is most able to deal with imperfections of the indicators and the generated information, because management is able to explain certain results and knows where the information comes from. It is, however, important for management to believe that the indicators are formulated in a SMART way; otherwise it can be unwilling to process the information.

The communication of information inside an organization is nowadays a hot issue. Knowledge management and creating a learning organization are concepts that illustrate this. Knowledge management is "the management of information, knowledge and experience available to an organization, its creation, capture, storage, availability and utilization, in order that organizational activities build on what is already known and extend it further" (Buelens et al., 2006:650). A learning organization is an organization that "proactively creates, acquires, and transfers knowledge and that changes its behavior on the basis of new knowledge and insights" (Buelens et al., 2006:650). Knowledge management and organizational learning require the right culture: a learning culture (Buelens et al., 2006). In public organizations the right culture also means that management has to be intrinsically motivated; the market offers little extrinsic motivation (Osterloh, 2000). Besides an optimal culture for information processing, an optimal organizational design is also very important for management to be capable of processing the information (Eppler & Mengis, 2004). For example, changes in the organizational design can lead to more difficulties in communicating information and coordinating.

The presentation of the information has to be aimed at the specific user group of management. This means, for example, that the terms that are used have to be familiar to management (Beynon-Davies, 2009). For management this is not difficult; jargon can be used. It is important to realize that presentation can have an influence on the focus of management; by presenting the information in certain ways (for example, adding + and – signs to the information), the chance increases that non-financial information is also used in the judgment of management (Cardinaels & Van Veen-Dirks, 2010).

Statutory bodies

Statutory bodies can handle relatively large amounts of information. In a statutory body there are generally several people who can process the information that comes from the benchmarking report. Statutory bodies will make their own cost/benefit assessment. If the costs are too high (due to too much information) in comparison with the benefits, statutory bodies will be unwilling to process the information.

For statutory bodies it is more difficult than it is for management to use the results on the performance indicators to arrive at useful information. This is because a lot of information is context specific, which means the information can only be interpreted correctly when the evaluator has enough information on the context of the situation (Van Veen-Dirks, 2012). For outsiders (statutory bodies) this is more difficult than for insiders (management). It is therefore even more important that indicators are formulated in a SMART way. Statutory bodies themselves also have to believe that the indicators are SMART; otherwise they will be unwilling to process the information.

The presentation of the information has to be aimed at the specific user group of statutory bodies. This means, for example, that the terms that are used have to be familiar to statutory bodies (Beynon-Davies, 2009). The information has to be clear to the statutory body, the detail should fit with the interest of the statutory body, and the medium that is used has to fit with the preference of the statutory body.

Clients

The capacities and willingness of clients to process information differ from client to client; the user group is enormously varied due to individual differences (personality/knowledge etcetera), environmental influences (culture/family/social class), and psychological processes (learning/attitudes) (Blackwell et al., 2006).

The amount of information a client can handle is the most limited of all the user groups. The difference with the other user groups is that in this group the information is assessed purely by an individual. As stated earlier, an individual can only assess around seven chunks of information. An information overload will lead to unwillingness of the client to process the information. For clients it is more difficult than it is for management to use the results on the performance indicators to arrive at useful information. This is because a lot of information is context specific, which means the information can only be interpreted correctly when the evaluator has enough information on the context of the situation (Van Veen-Dirks, 2012). For outsiders (clients) this is more difficult than for insiders (management). When it comes to the indicators, the user group of clients is a very difficult one. Every client might need a different set of indicators to assess the performance of the organization; what is relevant to one client might be irrelevant to another client. The client has to believe the indicators are SMART for his or her specific situation; otherwise the client will be unwilling to process the information.

The communication of the information to clients is very important. The communication channel has to deal with enormous numbers of users who all have different information processing capabilities and a different willingness to process the information. For example, when a website with the benchmarking information is used, this will in general suit younger people better than older people. When the channel is inappropriate for the user, the client will be unwilling to process the information.

The presentation of the information has to be aimed at the specific user group of clients. This means, for example, that the terms that are used have to be familiar to clients (Beynon-Davies, 2009) and that, where possible, the client can choose the indicators that he or she is interested in. This will create willingness to process the information. The information has to be clear to the client, the detail should fit with the interest of the client, and the medium that is used has to fit with the preference of the client.

3. Research methodology

This section elaborates on the research methods used to answer the research question: "Which type of performance-indicators should be selected for various public benchmarking user groups and how should they be used in preparing benchmark reports for these groups?" Firstly, the research type is discussed, secondly the data collection methods, and lastly the data analysis methods.

3.1 Research type

The literature review resulted in the conclusion that the performance indicators that have to be used are dependent on the user group of the information. The capacity and willingness to process information are also dependent on the specific user group. However, it remains to be seen whether the literature matches reality. Can we distinguish various initiators and user groups in public benchmarking situations in practice? Do these user groups have various interests, which lead them to the use of various performance indicators? Do these user groups differ in their capacity and willingness to process the information? Empirical research was needed. What follows can be characterized as an interpretive case study (Andrade, 2009; Yin, 2003). Yin (2003) defines a case study as follows: "An empirical inquiry that investigates a contemporary phenomenon within its real-life context". The study consists of explanatory analyses of a limited number of benchmarks in various organizations. A case study can be seen as an intensive study of a unit (in this case a unit was a specific benchmark) for the purpose of understanding a larger class of units. In this thesis multiple case studies were performed to assess benchmarks of public organizations in general. We chose a case study instead of more statistical research such as surveys because a case study offers the possibility of in-depth research. The comparison between literature and practice, and the way they complement each other, can best be investigated by revealing the opinions of practitioners in in-depth conversations. We wanted to hear from the practitioners themselves how they experience the benchmarks and their reports, what they experience as problems, and what they see as solutions to these problems. The advantage of an interpretative case study is that it offers the researcher the opportunity to gain a holistic understanding of the subjects under investigation (Andrade, 2009). The interpretive approach contains the opinion that "reality is socially constructed and the researcher becomes the vehicle by which this reality is revealed" (Andrade, 2009). By conducting case study research we would like to compare the literature with reality, draw conclusions based on this comparison, and make some recommendations to improve the benchmarking practices in the public sector of the Netherlands. The choice of the cases shown in the table below is based on the fact that we wanted to include cases from governmental organizations and other not-for-profit organizations, cases with multiple and with single user groups, and cases with benchmarks that are experienced as problematic or as successful with respect to the use of the benchmark information by the user groups.

3.2 Data collection

The data in this thesis were collected through semi-structured interviews. The choice for interviews was made because interviews offer the opportunity to gain more in-depth knowledge of a certain subject than, for example, questionnaires. Before collecting the data, multiple conversations were held with employees of Significant who are working in various areas of the public sector. These conversations were enlightening as to what sorts of benchmarks are performed in the public sector and which people would be interesting to interview. The people who would be interesting for this research were contacted, after which e-mails were sent to the interviewees with information concerning the research question of this thesis and the topics of the interview. During the interviews an interview questionnaire was used, but this questionnaire was not strictly followed. There was enough room to discuss the specific situation the interviewee was specialized in or had most knowledge of. Some interviews were one-to-one; other interviews were one-to-two. Notes were made during the interviews and the interviews were also recorded. The main topics of the interviews were: the benchmark in general, the purpose of the benchmark, the development of performance indicators, the types of indicators, the users of the benchmark information, and the capacity and willingness of all the users to use the information. The opinion of the interviewee on these topics was extensively discussed. For an example of an interview questionnaire, see Appendix A (a questionnaire for the case on the benchmark WMO).

Some respondents were approached by mail or telephone. The interviews can be grouped into various case studies; every benchmark is treated as a separate unit/case, and every case has its own specific characteristics and user groups, which are intensively described and analyzed. This can be seen in Table 2 below. The abbreviations in the table, the characteristics of the benchmarks, the precise content of the benchmarks, the specific user groups that use the information from the benchmarks, and so forth, are discussed in the Results and Analysis section. Case 1 can be seen as the main case, because in this case all user groups were represented and we interviewed all of them. We also conducted multiple interviews for a specific user group, thereby generating the most complete view on the benchmark. The other cases are analyzed to complement the information we generated with case 1.

Table 2 Outline of the cases and corresponding interviewees

1. Benchmark WMO in municipalities

Interviews
- User group management: Director WMO and social services (municipality of Heerenveen)
- User group management: Two policy advisers WMO (municipality of Leiden)
- User group management: Coordinating project leader WMO (municipality of Skarsterlân)
- User group management: Policy adviser WMO (municipality of Leusden)
- Initiator, developer, and performer of the benchmark: Researcher benchmark WMO (SGBO)

Mail/telephone survey
- User group clients: Member of WMO advisory body Leiden
- User group statutory bodies: Two members of the local council Leusden

The positions of the interviewees in the municipalities were named differently. To prevent confusion, we will from now on call all interviewees of the user group management in case 1 'local government official'.

2. Benchmark Zichtbare Zorg / KiesBeter

Interviews
- Developer of the benchmark: Researcher methodology (Zichtbare Zorg)
- Developer and performer of the benchmark: Product manager KiesBeter (KiesBeter)

Mail/telephone survey

3. Benchmark KNMP

Interviews
- Initiator, developer, and performer of the benchmark: Researcher (KNMP)
- User group management: Pharmacist (KNMP)

4. Benchmark WWB in municipalities

Interview
- Initiator, developer, and performer of the benchmark: Senior researcher benchmark WWB (SGBO)

5. Benchmark RBB Groep

Interview
- Initiator, developer, and performer of the benchmark: Program manager (RBB Groep)
- User group management: Program manager (RBB Groep), who is working at one of the participating organizations

3.3 Data analysis

4. Results and Analysis

The results of the various case studies will be discussed and analyzed in the following sequence. Firstly, the characteristics of the specific benchmark (background, goal, initiator, user groups) are discussed. Secondly, the design of the benchmark is discussed: the performance indicators that are used. Thirdly, the capacity and willingness of the user groups to process the information is discussed per user group, in combination with their need for certain types of indicators. For the sake of convenience and readability the case descriptions are quite short. More information on the cases (like backgrounds, additional insights, quotes from the specific interviewees, and explanations of the statements in the case descriptions) can be found in Appendix B (Case reports). After discussing the cases separately, a cross-case analysis will be performed, in which the current system of benchmarking in the public sector is compared to the literature.

4.1 Case study 1: Benchmark WMO in municipalities

4.1.1 The benchmark

The benchmark WMO is a benchmark on the performance of municipalities on a specific Dutch law: the Wet Maatschappelijke Ondersteuning (Law for Social Support). This law arranges support for people who are disabled in any way and is also concerned with broader social issues. The WMO consists of nine performance domains, like improving the livability in districts, encouraging participation in society, and taking care of addicts. The municipalities are obliged to deliver figures on their performance on the WMO to the national government. Here, municipalities can call in the help of SGBO. SGBO is a research organization that is specialized in benchmarking. SGBO offers the following service: SGBO takes care of the delivery of the demanded figures to the national government and at the same time performs a benchmark among the participating municipalities. In 2012, 124 municipalities joined the benchmark WMO performed by SGBO. The initiative to start a benchmark WMO comes from SGBO. The initiative to join this benchmark comes from management itself: management can decide whether or not to join the benchmark WMO at SGBO. SGBO develops and performs the benchmark.

Citizens are represented in statutory bodies like the local council. Although SGBO does not mention it in their benchmark reports, there is a possible fourth objective for management to join the benchmark: vertical accountability towards the national government, the ministry of Health, Welfare and Sports (in Dutch: Ministerie van Volksgezondheid, Welzijn en Sport (VWS)). It pays off for municipalities to join the benchmark WMO of SGBO, because SGBO will take care of the delivery of the performance information to the ministry. The ministry (national government), however, cannot be seen as a user group because the ministry does not make use of the specific benchmark information. The local government officials all stated that they, as management, decided to join the benchmark to serve all objectives, and thereby to serve all user groups (management, statutory bodies, and clients) with benchmark information.

4.1.2 The performance indicators

According to the SGBO researcher, the performance indicators are aimed at all the performance indicator types we defined as input, throughput, output, and outcome: "We ask for a lot of things: policy efforts, the design of their processes, their results and effects." However, if we take a look at the benchmark report, we can see that some performance indicator types are asked for much more than others. The performance indicators are formulated per performance domain of the WMO. For example, for performance domain five (Living, Care, and Accessibility) nine indicators are formulated. Eight indicators are aimed at input and throughput, and one is aimed at output. This division of performance indicators is representative of the rest of the report: many indicators on input (the presence of a desk) and throughput (waiting periods for certain institutions), fewer indicators on output (experience of clients with, for example, the accessibility of public buildings) and outcome (social quality of the environment). The indicators are mainly quantitative or dichotomous (yes/no questions) in nature.

4.1.3 Use of the benchmark information by the user groups

Management

Management is seen as the primary user group of the benchmark information. According to an official (municipality Leiden): "The benchmark is an interesting comparison instrument. The information from the benchmark is mainly used internally." Another official (municipality Leusden): "The essence of the benchmark is the comparison with other municipalities with the ultimate goal of improving ourselves."

However, the official from Leiden also put the use of the benchmark into perspective. Asked where changes in the municipality's WMO policy come from, the official stated that "the benchmark is on the last place. Changes come from what the cabinet wants to change and from what aldermen want. It sometimes becomes an obligatory comparison; we do not do a lot with it when it comes to our policy." One of the reasons for not making use of the information is already contained in this statement of the official from Leiden: politics is whimsical and most of the time gets priority.

The reasons for not using the information, from the viewpoint of management itself, are diverse. The main reason for not making use of the information is that management doubts the indicators that are used. Two out of four municipalities stated that they were interested in input indicators. In spite of that, every local government official stated that management is most interested in outcome indicators, the effects of their policies. The current indicators are not seen as SMART: all officials stated that a lot of the indicators were not relevant (too much focus on efforts instead of effects), they doubted the comparability between municipalities, and one municipality mentioned the timeliness of the information as a problem. Another reason for not making use of the information is that the amount of information is seen as enormous; according to all the local government officials there are far too many indicators. Every performance domain consists of approximately ten indicators, and every indicator consists of multiple (often dozens of) lower-level indicators. The quality of the communication and presentation of the information is also questioned by management. SGBO tries to communicate the most important information by offering the benchmark reports, organizing meetings with multiple municipalities, and organizing personal conversations. The municipalities make use of these possibilities (some more than others) but have their doubts concerning their value. Besides that, it emerged from the interviews that the receiver of the communication (management) has to believe that the information represents reality truthfully (and this is often not the case); otherwise management is unwilling to process the information. The information is presented by making use of thermometers. Unfortunately, it is not always clear to management what the information means for them.

Statutory bodies


The local council members are most interested in input and outcome indicators. The council members question the value of the information because the indicators that are used do not convince them: one member stated that it is unclear whether the indicators represent the true performance of the municipality, and another member questioned the comparability of the information between municipalities. When it comes to the communication and presentation of the information, the local council members do not know exactly in which way the information is communicated and presented.

Clients

The user group of clients is represented in the WMO participation council. The local government officials differ in their views: some state that these councils do not make use of the information, while others state that they do. The participation council member states that they do make use of the benchmark as a comparison tool.

The participation council member stated that they were interested in outcome indicators, but that they experienced difficulties with the interpretation of the indicators, doubted whether the indicators represent reality truthfully, and doubted the comparability of the indicators. The participation council member was positive about the communication of the information and the meeting with the municipality that followed, but did experience some difficulties with the presentation of the information, especially concerning the terminology and the understanding of the meaning of the results.

4.2 Case study 2: Benchmark Zichtbare Zorg / KiesBeter

4.2.1 The benchmark


The user group at which KiesBeter is aimed is the user group of clients. In the past, Zichtbare Zorg was aimed at more user groups (the care inspection (IGZ), health insurers, care professionals, and the ministry of Health, Welfare and Sports). In the meantime, Zichtbare Zorg has been merged into the new quality institute, where the focus is also on the specific user group of patients (clients). However, in the interview the researcher of Zichtbare Zorg stated that IGZ also makes use of the information from the benchmark (to search for organizations that perform at an unacceptable level). Although IGZ makes use of this information, it uses the information that is part of the benchmark, not the benchmark itself. In other words, the client user group is definitely the primary user. The researcher from KiesBeter stated that the objective of the benchmark is to show the information that a patient needs when choosing a specific care organization.

4.2.2 The performance indicators

Input indicators are used; Zichtbare Zorg and KiesBeter call these structure indicators. These structure indicators can be either on the care-content (zorginhoudelijk) level (what does the care look like, what is the quality of the care delivered?) or on the so-called "etalage plus" level (how well is the care organized?). Throughput indicators are also used; Zichtbare Zorg and KiesBeter call these process indicators. These process indicators can be either on the care-content (zorginhoudelijk) level or on the client-experience level (how does the client experience the delivered care?). The emphasis is on the process indicators. Output and outcome indicators are also used; Zichtbare Zorg and KiesBeter call these results indicators. These results indicators can also be either on the care-content (zorginhoudelijk) level or on the client-experience level. According to the product manager of KiesBeter, the emphasis is shifting more and more towards the experiences of clients.

4.2.3 Use of the benchmark information by the user group

Clients


The reasons for the clients' negative attitude were as follows. First of all, for some care products (maternity care) there were far too many indicators, so the client had difficulties filtering the information down to the indicators they were interested in. For another care product (general practitioners), none of the indicators used were of interest to the client; the information that was delivered was seen as useless. For hospitals, however, the information was seen as interesting and useful by the client, although he again had difficulties filtering the enormous amount of indicators. Besides the amount of information and the indicators that are used, the communication and presentation of the information also caused some problems. The website was unfamiliar to the clients, and when they eventually saw the website they found it inconvenient and difficult to work with. They were, however, enthusiastic about the use of stars to present the performance of a care institution on a certain indicator.

4.3 Case study 3: KNMP

4.3.1 The benchmark

The KNMP benchmark is a benchmark of the performance information of pharmacists in the Netherlands. KNMP, the professional association of pharmacists, performs the benchmark.

The user group in the KNMP benchmark is changing. Until now, the primary user group has been management, because the results of the benchmark were only disclosed to the pharmacists themselves; the benchmark was used as an internal management tool to compare with others and improve by learning from them. The inspection also makes use of the performance information, but it does not make use of the benchmark itself. From now on, however, Zichtbare Zorg wants to publish the indicators on the KiesBeter website. This means that two other user groups will be added to this benchmark: statutory bodies and clients. These added user groups are discussed in appendix B (Case reports), because this concerns what is expected for the future by the researcher of KNMP and the pharmacist.

4.3.2 The performance indicators


indicators belong to the output and outcome types. Most results indicators are aimed at output rather than outcome; the focus is on the number of users of a certain medicine instead of the effects of the medicine on the well-being of the patient.

4.3.3 Use of the benchmark information by the user group

Management

The KNMP researcher stated that management (the pharmacists) made use of the benchmark information; she knew this because she had close contact with the field, although exact information on the use of the information did not exist. The pharmacist confirmed this, but he had some doubts about the benchmark and its usefulness. According to the two interviewees, management was interested in input, throughput, and output indicators. The KNMP researcher and the pharmacist were convinced that the set of indicators was far too large; a smaller set is needed. At the moment there are 54 indicators (and 40 of them are invalid). The content of the indicators is also questioned by the pharmacist. He stated that many pharmacists, including himself, are of the opinion that the indicators are rather meaningless due to the dichotomous nature of many of them. The communication and presentation of the information is experienced as very useful by management: the digital platform is accessible and the information is delivered in a timely manner.

4.4 Case study 4: Benchmark WWB in municipalities

4.4.1 The benchmark


4.4.2 The performance indicators

The SGBO researcher states: "The benchmark WWB is a quite quantitative benchmark, it is about the figures." There is a switch from policy effort indicators to policy effect indicators: "We do measure a lot of results and effects. They are relatively easy to measure." The benchmark consists of 26 questions (one question can consist of multiple indicators). The indicators are subdivided into four themes: clients, finances, enforcement, and operational management. The benchmark report confirms the quantitative character: almost all information is numeric. All types of indicators are used (input, throughput, output, outcome).

4.4.3 Use of the benchmark information by the user group

Management

SGBO does not have concrete information on the use of the benchmark information by management of the municipalities, but due to its close interaction with the participating municipalities it has some knowledge of it, and it states that management does make use of the information to compare with others and thereby improve. Of course, the extent to which the information is used differs per municipality.

According to the SGBO researcher, the amount of information is manageable for the municipalities. The indicators are developed in consultation with the municipalities, and the principle is: "One indicator added is one indicator scratched." The difficulty concerning the content of the indicators is that management is still focused on the input and throughput indicators (efforts) rather than the output and outcome indicators (effects). SGBO tries to point out the growing importance of measuring effects. The WWB is less complex than the WMO, which means the indicators are easier to develop and to measure. Despite that, comparability is sometimes seen as problematic due to differences in definitions. To communicate the information, SGBO organizes learning cycles. Whether municipalities actively join the learning cycles, and thereby make use of the information, differs per municipality. According to the SGBO researcher, it is important that the higher management of the municipality is involved: "Higher people from the organization have to create the consciousness that the benchmark information is important." As the SGBO researcher stated,
