
Master Thesis

Causes and Effects of Perception Gaps

A research on causes of perception gaps on performances

in a dyadic service exchange between physicians and patients

and the effects of these perception gaps on satisfaction


Preface

“Sin en wille kinne folle tille!”. Freely translating this Frisian expression into English: “enthusiasm and fun can get things done!”. Three years of studying at university is not easy. It takes a lot of time, and one has to shift priorities from meeting friends and family to spending time in the study room, especially when this has to be done in parallel with a full-time job. But as far as I am concerned, these inconveniences were fully rewarded. Studying rewarded me with meeting new people, experiencing the humour of students again during lectures (even though these students had already matured a bit), involvement in passionate discussions with lecturers, experiencing a different culture during a trip to China, but above all… gaining knowledge and obtaining academic competences for a future career. These are examples of the enthusiasm and pleasure that surpassed the “burden”. This thesis is the final piece of this study. I searched for a subject that could serve two purposes. First, it should be of practical use for the hospital. Second, it would allow me to experience a full research trail that (hopefully) results in a publication. This subject serves both purposes and concerns the two disciplines I liked most: operations management and marketing.


Summary

Keeping patients satisfied is essential for hospitals to inspire patients to come back and to attract new patients. Dissatisfaction arises when experienced performance does not conform to the patient’s requirements. In a dyadic environment this performance gap is not always visible to physicians, due to different perceptions of performance. Research on perception differences has been conducted in order to explore the impact on patient satisfaction. This research attempts to contribute to scientific knowledge by: (1) addressing validity problems experienced in earlier research, by making the trade-off on the importance of performance categories explicit and by collecting generalized perceptions of physicians at the patient-group level, (2) introducing inter patient/physician gaps on relative importance as a new variable on which correlations can be researched, and (3) exploring (speculated) causes of inter patient/physician perception gaps.

Propositions have been formulated in order to (a) explore the relation between perception gaps in required, experienced and relative importance of performance and the patient’s evaluation of a service exchange in an outpatient setting, and (b) explore causes of perception gaps between patients and physicians. The causes researched were absolute age, gender and education differences and the frequency of earlier appointments concerning the health care demand at hand.

The research was conducted in a Dutch hospital in Drachten from 16 April until 18 May 2012. Perceptions of 15 physicians and 959 patients within the specialties Urology, Cardiology, Oral Surgery, Paediatrics and Orthopaedics were measured. This research differed from other gap research in that gaps were defined from an operations strategy perspective. Gaps were measured for tangibles, assurance, empathy, speed, dependability and flexibility performance items. To make the gap analysis useful for building an operations strategy, not only Likert scales but also time-based performance scales were used.


Contents

Preface
Summary
Introduction
Literature review
Satisfaction and service quality
Conclusions on satisfaction and service quality
Perception gaps
Gap models
Causes of perception gaps between physicians and patients
Conclusions on perception gaps
Evaluating Hospital Operations
Performance dimensions
Quality
Speed
Dependability
Flexibility
Cost
Measuring experienced, required performance and relative importance
Conclusions on evaluating hospital operations


Results
Missing Value Analysis
Mean Gap Sizes
Correlations between Frequency of Encounters and closing Inter Patient/Physician Gaps
Correlations between Socio Demographic Differences and Inter Patient/Physician Gaps
Correlations between Performance Gaps and Satisfaction
Multivariate Regression explaining Satisfaction by Intra Patient Requirement/Experience Gaps
Summary of the Results
Discussion
Reflections on Results
Limitations
Suggestions for further Research
Conclusions


Appendix I Research instrument
Appendix II Kolmogorov-Smirnov Test for normality
K-S Test Results for Intra Patient Gaps
K-S Test Results for Inter Patient/Physician Gaps: Requirements
K-S Test Results for Inter Patient/Physician Gaps: Fixed Sum
K-S Test Results for Inter Patient/Physician Gaps: Experiences
Appendix III Gap Sizes
Intra Patient Gap Sizes
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Inter Patient/Physician Gap Sizes on Importance
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Inter Patient/Physician Gap Sizes on Requirements
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology


Inter Patient/Physician Gap Sizes on Experiences
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Appendix IV Mean Requirement Scores Patients
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Appendix V Mean Importance Scores Patients
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology


Appendix VI Correlation Tests
Spearman’s Rho between Frequency and Inter Patient/Physician Importance Gaps
Spearman’s Rho between Frequency and Inter Patient/Physician Requirement Gaps
Spearman’s Correlations between Socio Demographics and Inter Patient/Physician Requirement Gaps
Spearman’s Rho between Socio Demographic Differences and Inter Patient/Physician Importance Gaps
Spearman’s Rho between Socio Demographic Differences and Inter Patient/Physician Requirement Gaps
Pearson’s Correlations between Inter Patient/Physician Importance Gaps and Satisfaction
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Pearson’s Correlations between Inter Patient/Physician Requirement Gaps and Satisfaction
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Pearson’s Correlations between Inter Patient/Physician Experience Gaps and Satisfaction
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics


Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Pearson’s Correlations between Intra Patient Gaps and Satisfaction
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology
Appendix VII Mann-Whitney Tests for Gender Gap Sizes
Mann-Whitney Test on Gender Gaps and Absolute Importance Gaps
Mann-Whitney Test on Gender Gaps and Absolute Required Performance Gaps
Appendix VIII Regression Results explaining Satisfaction by Intra Patient Requirement/Experience Gaps
Specialty = Overall
Specialty = Cardiology
Specialty = Paediatrics
Specialty = Oral Surgery
Specialty = Orthopaedics
Specialty = Urology


Introduction

Today Dutch hospitals operate in an environment where it is of strategic importance to inspire patients to come back and to attract new patients. Not attracting sufficient numbers of patients puts pressure on the survival of hospitals. A recent example is the Dutch hospital in Dokkum that struggled for survival. This hospital could not meet the volumes required for efficient use of hospital resources, like a 24-hour Emergency Department or Maternity Care, making these services too expensive. It also could not meet the required treatment volumes for patients with breast cancer. As a result, insurance companies were reluctant to contract this hospital to provide this treatment to their policyholders. These volume requirements were introduced to guarantee the quality of skills needed for complex and risky treatments that do not occur regularly.

As Bearden and Teel (1983) found, keeping current patients (consumer loyalty) is a direct result of consumer satisfaction. Satisfaction is, inter alia, determined by a disconfirmation between expectations and outcome (Oliver, 1979; Lewis and Booms, 1983; Smith and Houston, 1982; Gronroos, 1982). A proper operations strategy aims at meeting these expectations by reconciling market needs with operational capabilities (Slack and Lewis, 2008: 7-17). An operations strategy attempts to influence the way it satisfies market requirements by setting appropriate performance objectives. Decisions in operations strategy areas - such as capacity, supply networks, process technology, development and organisation - should be congruent with these required performance objectives. Slack and Lewis call this a close “fit” (2008: 228). To realize this “fit”, the hospital must have a clear picture of the required performance objectives and should make operations decisions in such a way that these objectives can be met.

An operations strategy based on wrong perceptions of patients’ performance requirements could lead to faulty decisions in operations decision areas. This could result in meeting the wrong performance requirements and therefore lead to dissatisfied patients. Differences in the perception of realized performance could also make a physician think that he is still meeting the performance requirements of the patient, while the patient evaluates the performance negatively.


This research is an attempt to contribute to scientific knowledge by: (1) addressing validity problems experienced in earlier research, (2) introducing inter patient/physician gaps on relative importance as a new variable on which correlations can be researched and (3) exploring the relation of (speculated) causes for inter patient/physician perception gaps.

First, this research is an attempt to solve validity problems. Researchers experienced validity problems caused by the research instrument. Respondents scored high on almost every expected performance level, making researchers think that patients have difficulty making trade-offs between expected performance levels. This influences content validity and even impedes testing hypotheses about expected performance levels (Brown and Swartz, 1989; Vandamme and Leunis, 1992: 45). Validity problems could also be caused by the method of data collection. Much research was conducted at the specialty level and not at the level of care demand. One can expect that within a specialty, different patient groups exist with homogeneous care demands for different types of service encounters (for instance diagnosis, treatment or follow-up consultations). Patients can also differ in time-based performance requirements due to urgency. These patient groups can be seen as different customer groups with different performance requirements and therefore with different priorities for performance (Slack and Lewis, 2008: 44). Without taking this into account, heterogeneity in performance requirements can influence results when research is conducted at the specialty level only.

Second, not much research can be found addressing gaps between physicians and patients regarding the relative importance of performance dimensions. The relative importance of performance dimensions is at least as important to understand as individual patient expectation levels when evaluating a service encounter (Carman, 1990: 49; Dean, 1999; Slack and Lewis, 2008: 42-44).

Third, as far as we know, very few studies have been conducted aiming to find factors that can predict the probability of (inter patient/physician) gaps in required, experienced and relative importance of performance in a hospital environment.

Three propositions are formulated:

1. Intra patient gap sizes on required and experienced performance are related to patient satisfaction.

2. Inter patient/physician gap sizes on required, experienced and relative importance of performance are related to patient satisfaction.

3. Inter patient/physician gap sizes on required, experienced and relative importance of performance are related to absolute age, gender and education differences and the frequency of earlier encounters.
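As a hypothetical sketch (not the study’s actual analysis) of how such propositions could be tested, the following snippet computes Spearman’s rho between illustrative gap sizes and satisfaction scores; the helper function and the data are assumptions for exposition only.

```python
# Hypothetical sketch: correlate intra patient gap sizes with satisfaction
# scores using Spearman's rank correlation. Data and names are illustrative.

def spearman_rho(xs, ys):
    """Spearman's rank correlation for samples without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Illustrative data: larger requirement/experience gaps, lower satisfaction.
gap_sizes = [0.5, 1.0, 2.0, 0.0, 1.5]
satisfaction = [8, 7, 4, 9, 5]
rho = spearman_rho(gap_sizes, satisfaction)  # -1.0: perfectly monotone decreasing
```

A strongly negative rho would be consistent with the propositions: the larger the gap, the lower the satisfaction.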


Literature review

Satisfaction and service quality

In marketing literature, satisfaction and service quality are the constructs most often used for evaluating service performance. There are several definitions of satisfaction. Howard and Sheth (1969: 145) define satisfaction as “the buyer’s cognitive state of being adequately or inadequately rewarded for the sacrifice he has undergone”. Smith and Houston (1982) defined satisfaction with service delivery as the extent to which expectations were confirmed. Satisfaction and dissatisfaction can be seen as two sides of a continuum. The position on this continuum is determined by the comparison between expectations and outcome (Oliver, 1979; Lewis and Booms, 1983; Gronroos, 1982). Exceeding expectations will lead to higher satisfaction; dissatisfaction occurs when a negative discrepancy exists between the customer’s expected outcome and the actual outcome.

Teas (1993: 19) mentions that two different definitions of expectations are used in researching the disconfirmation between expectations and experiences. In consumer satisfaction research, expectations are defined as predictions of what a service provider would offer. In service quality research, expectations are defined as normative expectations of what a service should offer (see Parasuraman, Zeithaml and Berry, 1991: 422). In most of the service quality literature the normative expectations are used (see Brown and Swartz, 1989; Carman, 1990). Therefore a similarity exists between “normative expectations” and “performance requirements” as used in operations management literature (see Slack and Lewis, 2008).

Previous experiences of service delivery can influence the evaluation of a service. This evaluation can be seen as a function of expectations based on previous experiences, benchmarked against actual performance (Woodruff, Cadotte and Jenkins, 1983; John, 1991; Carman, 1990).

Parasuraman, Zeithaml and Berry (1985: 44) add “word of mouth communication” as another variable determining expectations, next to “personal needs” and “past experience”. Their hypothesis is based on Lehtinen and Lehtinen (1982), who introduced “interactive quality”, derived from the interaction between contact personnel and customers as well as interaction between customers and other customers (word of mouth). Brown (1989: 92) also found that expectations may be based on information about someone else’s previous experiences.


important than the art on the hospital room wall” (Carman, 1990). Relative importance of performance dimensions gives weight to expectations and therefore has a moderating effect on the evaluation of the service encounter.

The evaluation of a service encounter (E_i) can be summarized in adapted formulae based on Brown (1989: 93):

E_i = Σ w_i (P_i − X_i)   and   X_i = f(P_(i−1), W_(i−1), N_i)

where:

E_i = evaluation outcome for encounter i
w_i = importance of expectation for encounter i
X_i = expectations for encounter i
P_i = experiences for encounter i
P_(i−1) = experiences prior to encounter i
W_(i−1) = word of mouth communications prior to encounter i
N_i = personal needs for encounter i
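A minimal numeric sketch of the adapted Brown (1989) evaluation formula, assuming the evaluation outcome is the importance-weighted sum of disconfirmations (experience minus expectation) over the measured performance items; the item names, weights and scores below are illustrative assumptions, not data from the study.

```python
# Illustrative sketch of an importance-weighted disconfirmation score.
# Items, weights and scores are assumptions for exposition.

def evaluation_outcome(items):
    """items: iterable of (importance_weight, expectation, experience)."""
    return sum(w * (p - x) for w, x, p in items)

encounter = [
    # (weight, expected score, experienced score) on a 5-point scale
    (0.5, 4, 5),  # empathy: experience exceeds expectation
    (0.3, 5, 3),  # speed: experience falls short
    (0.2, 3, 3),  # tangibles: expectation exactly met
]
e_i = evaluation_outcome(encounter)  # 0.5*1 + 0.3*(-2) + 0.2*0 = -0.1
```

A negative outcome signals that, weighted by importance, experiences fell short of expectations for this encounter.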

The evaluation of a service is not restricted to the outcome of the service only. Gronroos (1982) argues that expectations and experiences can be compared on (a) the outcome of a service or (b) the process of a service delivery. Gronroos calls this, respectively, technical and functional quality.


Leunis, 1992: 33; Brown and Swartz, 1989: 93; Oliver, 1981: 42). Parasuraman et al. (1985) state that service quality as perceived by a customer is an overall evaluation, similar to an attitude. This can be illustrated by Lewis and Booms (1983), who state that “delivering quality service means conforming to customer expectations on a consistent basis”. In other words: to deliver service quality, the customer’s expectations have to be met during all service encounters (see also: Smith and Houston, 1982: 59-62). This means that the evaluation of service quality is based on consistently high satisfaction levels during several service encounters. The notion that service quality concerns an overall judgement can also be illustrated by Gronroos (1982), who assumes that corporate image is an aspect of the service quality construct. Image is based on experiences during several service encounters. Image can have a mediating role on satisfaction if expectations are not met during a particular encounter.

Vandamme and Leunis (1992: 34) mention that assessment of the outcome or the delivery process of a service is typically more transaction-oriented in health care services than in other services. They summarized some reasons for this transaction-specific orientation:

1. The experience a patient has with a certain hospital is often limited to a single visit or stay.

2. Patients differ in physical condition in addition to different demographic, socio-economic and psychological backgrounds, making comparisons across time and patients almost impossible.


Conclusions on satisfaction and service quality


Perception gaps

Gap models

There are several models indicating a relationship between perception gaps and the evaluation of a service. Most gap models are rooted in the Service Quality Model of Parasuraman, Zeithaml and Berry (1985: 44). Parasuraman et al. identify five gaps:

Gap 1: the consumer expectation / management perception gap – service firm executives may not always understand in advance which features connote high quality to consumers, which features a service must have in order to meet consumer needs, and what levels of performance on those features are needed to deliver high-quality service,

Gap 2: the management perception / service quality specification gap – a variety of factors like resource constraints, market conditions, and/or management indifference may result in a discrepancy between management perceptions of customer expectations and the actual specifications established for a service,

Gap 3: the service quality specification / service delivery gap – even when guidelines exist for performing services well and treating consumers correctly, high-quality service performance may not be a certainty,

Gap 4: service delivery / external communications gap – discrepancies between service delivery and external communication (in the form of exaggerated promises and/or the absence of information about service delivery aspects intended to serve consumers well) can affect consumer perceptions of service quality,

Gap 5: expected service / perceived service gap - judgements of high and low service quality depend on how consumers perceive the actual service performance in the context of what they expected.



Figure 1 Service Quality Model (Parasuraman, Zeithaml and Berry, 1985: 44)


Mismatch 1: the discrepancy between the supplier’s perception of requirements and the customer’s perception of requirements, addressed by Parasuraman et al. (1985) as gap 1

Mismatch 2: the discrepancy between the supplier’s perception of performance and the customer’s perception of performance, not directly addressed as a gap in the model of Parasuraman et al. (1985), but partly explained by gap 4 from the Service Quality Model

Mismatch 3: the discrepancy between the customer’s perception of requirements and the customer’s perception of performance, addressed by Parasuraman et al. (1985) as gap 5

Mismatch 4: the discrepancy between the supplier’s perception of requirements and the supplier’s perception of performance, addressed by Parasuraman et al. (1985) as gaps 2 and 3


Figure 2 Mismatch tool (Harland, 1996: 72)

The model of Harland encompasses most of the Service Quality Model in a comprehensive way, clearly showing (a) an intra-customer gap, (b) an inter customer-supplier gap on the perception of performance requirements as well as experienced performance and (c) an intra-supplier gap.


A supplier may, for instance, think he is meeting the customer’s requirements, while the customer thinks he is not. In this example the supplier will not recognize a dissatisfied customer and therefore an improvement in operations strategy will not be triggered.

Mismatches 1 and 2 can not only impede a necessary change in operations, they can also provoke an unnecessary change in operations strategy. For example: a supplier and a customer both have the same requirement for a lead time, say 2 weeks. The supplier thinks he is dissatisfying the customer, because his perception of the delivered performance is 4 weeks. The customer perceives it as 2 weeks. Mismatch 2 occurs. The supplier will be triggered to initiate unnecessary operations improvements, even though the customer is not dissatisfied.

One can conclude that an operations improvement does not always have a direct effect on patient satisfaction, and vice versa. The mismatch triggering an operations improvement (mismatch 4) therefore has an indirect effect on customer satisfaction, because it is mediated by possible inter customer-supplier gaps.

Brown and Swartz (1989: 93) performed a gap analysis for their research in primary health care. They defined three gaps derived from the Service Quality Model of Parasuraman et al.:

Gap 1 = client expectations – client experiences (Harland’s mismatch 3)

Gap 2 = client expectations – professional perceptions of client expectations (Harland’s mismatch 1)

Gap 3 = client experiences – professional perceptions of client experiences (Harland’s mismatch 2)
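The three Brown and Swartz gaps can be made concrete in a short sketch; the function name and the Likert scores below are hypothetical illustrations, not the study’s instrument.

```python
# Illustrative computation of the three Brown and Swartz (1989) gap scores
# for a single performance item. Names and scores are assumptions.

def gap_scores(client_expect, client_experience,
               prof_perceived_expect, prof_perceived_experience):
    gap1 = client_expect - client_experience              # Harland's mismatch 3
    gap2 = client_expect - prof_perceived_expect          # Harland's mismatch 1
    gap3 = client_experience - prof_perceived_experience  # Harland's mismatch 2
    return gap1, gap2, gap3

# Patient expected 5 and experienced 3; the physician thought the patient
# expected 4 and believed the patient experienced 5.
g1, g2, g3 = gap_scores(5, 3, 4, 5)  # (2, 1, -2)
```

In this illustration the physician both underestimates the patient’s expectation (gap 2 positive) and overestimates the patient’s experience (gap 3 negative), so the patient’s dissatisfaction (gap 1 positive) stays invisible to the physician.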


Silvestro (2005) also conducted a gap analysis in a health care environment (a Breast Screening Unit) with a small sample size of 32 patients. Silvestro researched not only the gap between patients and physicians, but also the gaps between patients and nursing staff and between patients and management. Just as in the results of Brown and Swartz, the expectation/priority factors were scored very high by the respondents.

Causes of perception gaps between physicians and patients

Not much literature can be found in which causes of inter patient/physician gaps are researched. O’Connor, Shewchuk and Carney (1994: 37) give some indications why patient/physician gaps concerning expectations can exist. In general, expectations about tangibles (like physical facilities, equipment and appearance of personnel) are overestimated by health care suppliers, while assurance, responsiveness, reliability and empathy are underestimated. They found that physicians have more difficulty estimating patients’ expectations on the dimensions empathy and responsiveness than health care administrators or nursing staff. On tangibles, assurance and reliability no differences exist.

O’Connor et al. (1994: 35) argue that these difficulties arise because doctors usually are concerned more with the technical than with the functional aspects of quality, are often time-constrained and harried, are occasionally arrogant, and tend to differ from the general population in terms of income, education and other demographics. Research by Siminoff, Graham and Gordon (2006) shows that education level seems to influence the communication between physician and patient. Patients that are more educated experience fewer problems in the communication with the physician. Therefore one can speculate that similar education levels reduce the chance of an inter patient/physician gap due to communication problems. On the other hand, Hall and Dornan (1990) show that patients that are less educated are generally more satisfied with received care, indicating more tolerance when an inter patient/physician gap exists. This suggests that lower education has a moderating effect on the relation between the inter patient/physician gap and satisfaction.


The frequency of earlier encounters could also influence the physician’s perception of the patient’s expectations. Gronroos (1990: 55) argues that direct interaction with consumers allows the provider to “see and feel the signals”, contributing to a deeper understanding of consumer expectations than could be obtained in the absence of that interaction. Therefore O’Connor, Trinh and Shewchuk (2000) speculate that a lack of service encounter experience inhibits building a clear understanding of what patients expect. These evolving expectations on the side of the patient and the physician could lead to closing the gap on expectations.

Conclusions on perception gaps

It can be concluded that, from an operations strategy perspective, there have been attempts to conduct gap analyses in a health care environment. They mostly focus on the customer’s side of a dyadic service exchange, addressing mismatches 2, 3 and 4 from the Harland model. The validity of these studies is sometimes questioned due to weak internal consistency in expectation measures.


Evaluating Hospital Operations

Performance dimensions

As stated earlier, an operations strategy attempts to influence the way it satisfies market requirements by setting appropriate performance objectives (Slack and Lewis, 2008). Therefore one has to determine which measures are appropriate for measuring required and delivered performance. Slack and Lewis (2008: 36) provide a list of five generic performance objectives: (1) quality – doing things right, (2) speed – doing things fast, (3) dependability – doing things on time, (4) flexibility – the ability to change the operation, (5) cost – doing things cheaply.

Slack and Lewis (2008: 42) stress the importance of examining each performance objective in terms of how it affects market position outside the operation and resources inside the operation. Consequences of excellent performance outside the operation are specific and directly visible to the customer. Performance objectives inside the operation are more interdependent and not directly visible to the customer.

These five performance objectives can have slightly different meanings depending on how they are interpreted in different operations (Slack and Lewis, 2008: 37; Slack, Chambers and Johnston, 2004: 45). Therefore care should be taken when evaluating hospital service operations on these dimensions. The special characteristics of service operations - like hospital operations - compared to producing goods should be taken into account (Parasuraman et al., 1985: 42; Zeithaml, 1981; Gronroos, 1982; Vandamme and Leunis, 1991: 31). Fitzsimmons and Fitzsimmons (2008: 18-21) provide a description of typical service characteristics in general: (1 – co-creation) the customer can play an active part in the process, (2 – simultaneity) services are created and consumed simultaneously and thus cannot be stored, (3 – perishability) a service is lost forever when not used, because it cannot be stored, (4 – intangibility) services are ideas and concepts and therefore intangible, (5 – heterogeneity) the intangible nature of services and the customer as a participant in the service delivery result in variation of the service from customer to customer.


Three “qualities” can influence the objective evaluation of a service: (1) search qualities – which can be easily evaluated before the purchase of a service, (2) experience qualities – which can only be inferred during or after the consumption process and (3) credence qualities – which can never be evaluated, even after consumption has taken place (Nelson, 1970). Vandamme and Leunis (1992) state that credence qualities are high in health care services as opposed to many other services. In such a situation, affective judgement will dominate the evaluation process instead of cognitive judgement.

Even at the lowest aggregation level of services within a health care environment, quality dimensions can differ between an inpatient and an outpatient setting (Vandamme and Leunis, 1992), but also within outpatient settings (Dean, 1999). For example: life-threatening circumstances can cause a preference for short access times, while more elective demand for health care could invoke a preference for flexible appointment possibilities that suit the patient. As O’Connor, Trinh and Shewchuk (2000) state: “It would be beneficial to evaluate the degree to which various provider groups understand patient expectations on quality dimensions that are highly specific to various health services situations. For example, some may contend that when people have serious health conditions, they are more likely to be concerned with the technical/clinical aspects of care and less concerned with the way that technical care is delivered.” Konner (1987) states that “for the treatment of most illnesses, for which technical knowledge and prowess make a difference, we seem to prefer a cold or even disturbed physician with full command of current medical science, to the most sensitive and compassionate bumbler”.

Quality

Where in marketing literature “(service) quality” is used as a measure for overall service performance, Slack and Lewis (2008: 37) define quality as an appropriate specification of a service. Therefore - from an operations strategy perspective - the definition of “quality” is more specific than in marketing literature. It deals with the specification level of a service operation as well as the conformance to that specification (“fit for purpose”). Slack and Lewis state that the specification of the quality dimension is a multi-dimensional issue, consisting of hard and soft quality aspects. Hard quality aspects are concerned with evident and largely objective aspects of the product or service. Soft quality aspects are associated with personal interaction between customers and the product or service.


For measuring perceived quality, Parasuraman et al. (1988: 23; 1991) developed the SERVQUAL instrument for measuring a service in general, wherein five dimensions are measured on 22-item scales. These dimensions are: (1) tangibles – physical facilities, equipment and appearance of personnel, (2) reliability – the ability to perform the promised service dependably and accurately, (3) responsiveness – willingness to help customers and provide prompt service, (4) assurance – knowledge and courtesy of employees and their ability to inspire trust and confidence, (5) empathy – caring, individualized attention the firm provides its customers. Within SERVQUAL, questions are formulated as statements about the service, on which the level of agreement is measured. Therefore SERVQUAL is useful for measuring performance items that are difficult to quantify, such as quality performances.

It is important to notice the differences between the performance dimensions of Slack and Lewis and SERVQUAL. In a SERVQUAL environment, “service quality” encompasses the overall performance of a service operation, while Slack and Lewis (2004, 2008) see “quality” as one of five performance dimensions at a much lower aggregation level. Slack and Lewis regard reliability (from the SERVQUAL instrument) as a dependability dimension. Likewise, Slack et al. (2004, 2008) regard time convenience items – placed on the responsiveness dimension in SERVQUAL – as items of the speed dimension. Tangibles from SERVQUAL can be seen as “hard” quality aspects, and assurance and empathy as “soft” quality aspects within the quality dimension of Slack and Lewis.

It can be concluded that some aspects of SERVQUAL are useful for measuring the quality dimension of Slack and Lewis. SERVQUAL offers a measurement procedure for quality items that are difficult to quantify, and it covers dimensions that Slack and Lewis consider soft aspects (assurance and empathy) and hard aspects (tangibles) of quality.

Speed


and run times (like consultation time) are typically experienced by the patient and can be easily judged for external benefit.

Research shows that waiting times are important factors in evaluating a service encounter (Varkevisser, 2009; De Bas, Van der Lijn and Meijer, 2003; Sivey, 2011). Sivey (2011) states that two types of waiting time affect patients. The outpatient waiting time is the wait between the referral from the GP and the outpatient appointment with the specialist. In a Dutch hospital environment this waiting time is also called “access time” (NZA, 2010). The inpatient waiting time is the wait between the outpatient appointment (and the decision to admit the patient for clinical treatment or care) and the actual date of the inpatient admission. The precision of measuring waiting times can differ. Most of the time it is measured in weeks/days when it concerns a scheduled appointment for consultations. In cases of walk-in appointments - wherein the order acceptance / patient admittance takes place just before the start of the service delivery process - waiting time is mostly measured in hours/minutes (for an example see: Nederlandse Vereniging Spoedeisende Hulp Verpleegkundigen, 2005).

Three conclusions can be drawn about the speed dimension. First, waiting times and the duration of a consultation or treatment can be experienced by patients; the speed of these process steps can be easily judged by them. Second, in the Netherlands the wording used for queuing depends on an inpatient or outpatient setting. Queue time for scheduled outpatient appointments is called “access time”; waiting for walk-in appointments or inpatient appointments is called “waiting time”. Third, the precision of the measurement scales for (a) “access time” and (b) “waiting time” in an inpatient context is much lower than for waiting time for walk-in appointments.

Dependability

Slack and Lewis (2008: 38) define dependability as keeping delivery promises, according to the following formula:

dependability = due delivery time – actual delivery time


delayed appointment (dependability issue) and time waiting for service admittance in a walk-in situation (speed issue) are both called “waiting time” (for examples see: Pinkster and Wagter, 2007 and Gooren and Lemstra, 2005).

We conclude that care has to be taken with the level of precision when measuring dependability. Widely used terms like “waiting time” in a waiting room have to be seen within the specific context of a planned or walk-in appointment. To make sure that speed or dependability performance is measured, the generally used definition of “waiting time” should be refined and divided into “waiting time” for walk-in appointments (= queue time, as a measure of speed) and “delay” for planned appointments (= as a measure of dependability).
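The refined terminology can be sketched as a small classification rule, assuming a hypothetical appointment record (the function and field names are illustrative and not part of the thesis instrument):

```python
from datetime import datetime

def waiting_room_measure(appointment_type, reference_time, actual_start):
    """Label time spent in the waiting room as a speed or dependability issue.

    For walk-in appointments the wait is queue time (a speed measure); for
    planned appointments it is a delay against the planned starting time
    (a dependability measure). Illustrative sketch only.
    """
    minutes = (actual_start - reference_time).total_seconds() / 60
    if appointment_type == "walk-in":
        return "waiting time (speed)", minutes
    return "delay (dependability)", minutes

label, minutes = waiting_room_measure(
    "planned",
    datetime(2012, 5, 1, 10, 0),   # planned start
    datetime(2012, 5, 1, 10, 20),  # actual start
)
# label -> "delay (dependability)", minutes -> 20.0
```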

Flexibility

Slack and Lewis (2008: 40) define flexibility as the ability to change the operation. They make a distinction between (1) range flexibility – how much the operation can be changed and (2) response flexibility – how fast the operation can be changed. Total operations flexibility can consist of product/service flexibility, mix flexibility, volume flexibility and delivery flexibility. Slack et al. (2004: 52) give some examples of the flexibility dimension in a hospital environment. Definitions within the flexibility dimension are summarized in table 1.

Product/service flexibility (development and introduction of new types of treatments)

· Range flexibility: the range of products and services which the company has the design, purchasing and operations capability to produce.

· Response flexibility: the time necessary to develop or modify the products or services and the processes which produce them, to the point where regular production can start.

Mix flexibility (range and time needed to adjust available treatments)

· Range flexibility: the range of products and services which the company produces within a given time period.

· Response flexibility: the time necessary to adjust the mix of products and services being produced.

Volume flexibility (ability and time needed to adjust the number of patients treated)

· Range flexibility: the absolute level of aggregated output which the company can achieve for a given product or service mix.

· Response flexibility: the time taken to change the aggregated level of output.

Delivery flexibility (ability and time needed to reschedule appointments)

· Range flexibility: the extent to which delivery dates can be brought forward.

· Response flexibility: the time taken to reorganise the operation so as to replan for the new delivery date.

Table 1 Definitions within the flexibility dimension


Cost

Slack and Lewis (2008: 41) define cost as the financial input to the operation that enables it to produce its products and services. The external benefit of cost performance is a low price. Varkevisser (2009: 71) states that in the Netherlands only 5% of the patient population is price sensitive, because the costs are not clearly visible or relevant for the patient. Most of the health care demand is insured.

Measuring experienced, required performance and relative importance

As stated earlier, the service evaluation or satisfaction is determined by the comparison between expectations and experiences and by the relative importance of expectations.

Literature shows that there are different ways in which expectations and experiences can be measured. For performance items that are difficult to quantify, Likert-scales have been used for measuring experienced performance (for instance: Vandamme and Leunis, 1992; Brown and Swartz, 1989; Parasuraman et al., 1988; Carman, 1990; Babakus and Mangold, 1992; Silvestro, 2005). Questions are formulated as statements about the service, on which the level of agreement is measured. The range of the Likert-scales varies from 5 points to 7 points (1 = strongly disagree – 5/7 = strongly agree). For items that are easier to quantify (like time convenience items or costs), absolute time or value measures are used as well as Likert-scales and statements.
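A gap-based evaluation of this kind can be sketched as a weighted sum of experience-minus-expectation differences. The following is a minimal illustration with made-up scores; the symbols and weights are hypothetical and do not reproduce the thesis instrument:

```python
def weighted_gap_score(experiences, expectations, weights):
    """Weighted sum of experience-minus-expectation gaps per dimension.

    A common SERVQUAL-style formulation; variable names are illustrative.
    """
    assert len(experiences) == len(expectations) == len(weights)
    return sum(w * (p - e) for p, e, w in zip(experiences, expectations, weights))

# Likert scores (1-5) per dimension; weights from a 100-point fixed sum
P = [4, 3, 5]          # experienced performance
E = [5, 4, 4]          # required (expected) performance
W = [0.5, 0.3, 0.2]    # relative importance, summing to 1
score = weighted_gap_score(P, E, W)
# 0.5*(-1) + 0.3*(-1) + 0.2*(+1), i.e. approximately -0.6
```

A negative score indicates that, on balance, experiences fell short of expectations on the dimensions the respondent weighted most heavily.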


second by scoring priorities on a Likert-scale (Silvestro, 2005) and third by using a fixed sum score to indicate relative performance differences (Parasuraman et al., 1991).

Parasuraman et al. (1988: 31) conducted a regression analysis based on scores of absolute performance levels at a bank, a credit-card company, a firm offering appliance repair and maintenance services and a long-distance telephone company. They found that reliability was consistently the most critical dimension, assurance the second most and empathy the least important dimension in all four cases. In contrast with other business environments, empathy seems to play an important role in performance evaluation in a health care environment. Research of Miaoulis, Gutman and Snow (2009) shows that the vast majority of patients first seek to have their negative emotional state addressed and then to receive medical care. This illustrates again that dimensions used for measuring service quality should be tested in their own business environment.

Brown and Swartz (1989) developed a measurement scale based on past research in the medical area and observations of medical professionals. They used 65 statements regarding expectations and experiences, which were individually scored on a 5-point Likert-scale (1 = strongly disagree – 5 = strongly agree). They used regression for revealing the impact of gaps on the service evaluation in a dyadic service exchange, instead of just revealing the impact of an attribute on the service evaluation. The regression analysis showed that physician interaction (measured as an experience gap) is by far the most important factor determining the overall evaluation of the service. In general, this regression analysis also shows that experience gaps have a far bigger impact on the service evaluation than expectation gaps (Brown and Swartz, 1989: 96).

Vandamme and Leunis (1992: 44) related the relative importance of the dimensions to service quality (as an overall judgement) and satisfaction (as a judgement of the specific service encounter). It showed that in inpatient settings tangibles, assurance and nursing staff were the most important dimensions explaining service quality and satisfaction. Medical responsiveness and personal beliefs and values explain only a minor part, partly due to the fact that patients experience difficulties in evaluating these two service dimensions.


Also Parasuraman et al. (1991: 424) argue that, next to deriving the relative importance indirectly, “… direct measures of the importance of various service attributes are also useful, particularly for combining individual attribute ratings to obtain a composite, weighted estimate of overall service quality”. Therefore Parasuraman et al. use a fixed sum score wherein the relative importance of the five dimensions was measured. They asked respondents to allocate a total of 100 points across the dimensions according to how important they considered each to be. The fixed sum scores were measured on the dimensional level instead of the attribute level.
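The fixed-sum procedure can be sketched as a validation of the 100-point allocation plus a conversion to relative weights. The dimension names below are illustrative, not the questionnaire wording:

```python
def fixed_sum_to_weights(points, total=100):
    """Validate a fixed-sum allocation and convert it to relative weights.

    Illustrative helper: raises if the respondent's points do not sum to
    the required total (100 by default).
    """
    if sum(points.values()) != total:
        raise ValueError(f"allocation must sum to {total}")
    return {dim: pts / total for dim, pts in points.items()}

# Hypothetical allocation over four performance dimensions
allocation = {"quality": 40, "speed": 25, "dependability": 20, "flexibility": 15}
weights = fixed_sum_to_weights(allocation)
# weights["quality"] -> 0.4; the weights sum to 1.0
```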

Conclusions on evaluating hospital operations

Slack and Lewis (2008: 36) provided a list of five generic performance objectives: (1) quality – doing things right, (2) speed – doing things fast, (3) dependability – doing things on time, (4) flexibility – ability to change the operation, (5) cost – doing things cheaply.

Slack and Lewis (2008: 42) stress the importance of examining each performance objective in terms of how they affect market position outside the operation and operations resources inside the operation.

The five mentioned performance objectives can have slightly different meanings depending on how they are interpreted in different operations (Slack and Lewis 2008: 37; Slack, Chambers and Johnston, 2004: 45). Therefore care should be taken when evaluating hospital service operations on these dimensions.

SERVQUAL is a widespread measuring instrument for judging an overall service performance. It is different from the generic performance objectives in its aggregation level of (a) the “quality” construct and (b) the dimensionality of performance objectives.


Conceptual model

The following propositions were tested:

1. Intra patient gap sizes on required and experienced performance are related to patient satisfaction.

2. Inter patient/physician gap sizes on required, experienced and relative importance of performance are related to patient satisfaction.

3. Socio-demographic differences between patients and physicians and the frequency of service encounters correlate with the inter patient/physician gap size.

This implies that the dependent variable deals with service evaluations. In literature we found differences between (a) the evaluation of the total service and (b) satisfaction. Satisfaction concerns a transaction, based on the evaluation of the service encounter. As Vandamme and Leunis (1992) stated, patients have difficulties in evaluating the outcome of the service, caused by insufficient expert knowledge. Therefore this research concerns the evaluation of the delivery process of the service encounter, instead of the medical outcome.

Service evaluations in health care environments are more transaction specific than in other service environments. As Vandamme and Leunis (1992) argued, patients rely more on provided performances during the service delivery process. These experiences are limited to a particular situation. One can question whether this proposition is valid within chronic care, wherein patients experience several service encounters; within chronic care a total service could also be evaluated besides satisfaction. In an attempt to reduce the chance of measurement errors due to confusion between the two evaluation types, this research targets the satisfaction of a single service encounter.

Parasuraman et al. (1985: 44) provided the most extensive overview of gaps that influence service evaluations. The normative “expectations” as used by Parasuraman et al. (1985, 1988, 1991) can be seen as a synonym for “requirements” for the service encounter. We conduct this research from an operations strategy point of view and have less interest in communicational issues regarding realized performances. Therefore gap 4 (the service delivery / external communications gap) is not included in our conceptual model.


service exchange, there is no need to make gap 2 (the management perception / service quality specification gap) and gap 3 (the service quality specification / service delivery gap) from the Service Quality Model explicit. Even though gap 2 and 3 are useful for diagnosing operations improvement possibilities, these intra supplier gaps are not directly visible to the customer. As found in the studied literature of Slack and Lewis (2008), closing gap 2 and 3 should be seen as a performance inside the operation, with an indirect effect on the customer’s judgement of (external) benefit. As elaborately explained in the conclusions based on Harland’s mismatches (1996), perception differences can cause unnecessary or faulty operations improvements. Gap 2 and 3 thus have an indirect effect on patient satisfaction, because this effect is mediated by possible inter customer/supplier gaps.

As stated earlier, patient priorities on performance dimensions are at least as important to understand as individual patient requirements when evaluating a service encounter (Carman, 1990: 49; Dean, 1999; Slack and Lewis, 2008: 42-44). Slack and Lewis (2008: 44) argue that if an operation produces services for more than one customer group, it will need to determine a separate set of competitive factors, and therefore different priorities for the performance objectives for each group.

Taking these arguments into consideration, four gap variables were included in the Conceptual Model as presented in figure 3.

(1) Intra Patient Requirement/Experience Gap: patient’s requirements – patient’s experiences about the delivery process of the consultation

(2) Inter Patient/Physician Requirement Gap: patient’s requirements – physician’s perceptions of patient’s requirements about the delivery process of the consultation

(3) Inter Patient/Physician Importance Gap: patient’s priorities on performance dimensions – physician’s perceptions of patient’s priorities on performance dimensions

(4) Inter Patient/Physician Experience Gap: patient’s experiences – physician’s perceptions of patient’s experiences about the delivery process of the consultation
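Under these definitions the four gap variables reduce to simple differences. A minimal sketch with invented scores follows; all names and values are illustrative, not the thesis data:

```python
# Invented example scores: a patient's own answers versus the physician's
# perception of that patient (group); all names are illustrative.
patient   = {"required": 5, "experienced": 4, "importance": 0.4}
physician = {"required": 4, "experienced": 4, "importance": 0.3}

intra_req_exp    = patient["required"]    - patient["experienced"]     # gap (1)
inter_req        = patient["required"]    - physician["required"]      # gap (2)
inter_importance = patient["importance"]  - physician["importance"]    # gap (3)
inter_exp        = patient["experienced"] - physician["experienced"]   # gap (4)
# (1) = 1, (2) = 1, (3) approximately 0.1, (4) = 0
```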


patient (Woodruff, Cadotte and Jenkins, 1983; John, 1991; Carman; 1990) and could lead to closing the gap on requirements.

O’Connor (1994) argues that differences in physician age, gender and specialty may serve to influence the findings, and suggests that such socio-demographic variables should be examined and controlled for both physicians and patients. Therefore these socio-demographic differences are included in our conceptual model.

[Figure 3 – Conceptual model. Antecedents: the frequency of service encounters of the patient with the same care demand and the same physician (H1, H2) and the socio-demographic differences between physician and patient on education, age and gender (H3, H4) relate to the inter patient/physician importance gap and the inter patient/physician requirement gap. Gap variables relating to patient satisfaction on the delivery process of consultations: the inter patient/physician importance gap (H5), the inter patient/physician requirement gap (H6), the inter patient/physician experience gap (H7) and the intra patient requirement/experience gap (H8).]


Based on this conceptual model, eight hypotheses were formulated.

H1: The frequency of service encounters of the patient within the same care demand is inversely related to the inter patient/physician importance gap.

H2: The frequency of service encounters of the patient within the same care demand is inversely related to the inter patient/physician requirement gap.

H3: Socio-demographic differences on (a) education, (b) age and (c) gender between the specialist and the patient are positively related to the inter patient/physician importance gap.

H4: Socio-demographic differences on (a) education, (b) age and (c) gender between the specialist and the patient are positively related to the inter patient/physician requirement gap.

H5: The level of patient satisfaction on the delivery process of consultations is inversely related to the gap size of the inter patient/physician importance gap.

H6: The level of patient satisfaction on the delivery process of consultations is inversely related to the gap size of the inter patient/physician requirement gap.

H7: The level of patient satisfaction on the delivery process of consultations is positively related to the gap size of the inter patient/physician experience gap.

H8: The level of patient satisfaction on the delivery process of consultations is inversely related to the gap size of the intra patient requirement / experience gap.
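Hypotheses of this form can be examined by correlating gap sizes with satisfaction scores. The following is a minimal sketch with fabricated data, using a plain Pearson coefficient rather than the thesis's actual analysis:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (illustrative helper)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated data: larger importance gaps paired with lower satisfaction
gap_sizes    = [0, 1, 2, 3, 4]
satisfaction = [5, 4, 4, 3, 2]
r = pearson(gap_sizes, satisfaction)
# r is strongly negative here, consistent with an inverse relation (cf. H5)
```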


Methodology

Sample

The research was conducted in a Dutch hospital in Drachten from the 16th of April until the 18th of May 2012. It started by asking physicians specialized in the disciplines Urology (two physicians), Cardiology (five physicians), Oral Surgery (two physicians), Paediatrics (four physicians) and Orthopaedics (two physicians) to participate. This resulted in 15 physicians who took part in the research. They represented the supplier side of the dyadic service exchange. This sample was chosen to ensure a diverse representation of:

· surgical and non-surgical patients

o surgical: Urology, Oral Surgery, Orthopaedics

o non-surgical: Cardiology, Paediatrics

· high and low incidence of urgent patients

o high incidence: Urology (oncology), Cardiology (active or undiagnosed cardiac problems)

o mediocre incidence: Paediatrics

o low incidence: Oral Surgery, Orthopaedics

· patients for treatment and consultations

o treatment: Orthopaedics (injections), Urology (vasectomy) and Oral Surgery (policlinical treatments)

o consultations: Cardiology, Paediatrics, Urology, Orthopaedics.


competences. Keep in mind that the evaluation of the service encounter can also be influenced by management decisions (like the choice of furniture in waiting rooms) or nursing staff (like mentioning waiting times while welcoming the patient for the service encounter). During the measurement, respondents were explicitly asked to score variables measuring the specific service offered by the physician, to reduce measurement errors and to ensure measurement of a single service encounter.

Patients with a scheduled appointment were included and patients with a walk-in appointment were excluded. Outpatients with a planned appointment represented the customer side of the dyadic service exchange. 1101 questionnaires were received from patients. 72 respondents were excluded because they left more than eight questions (15% of the questions) unanswered. Missing scores on the fixed sum question were not counted as missing, as they could be regarded as a “0” value: when replacing these missing values by “0”, the fixed sum still remained “100”. In addition, 70 respondents were excluded that could not be linked to a patient category. A summary of the patient sample size is given in table 2.

                 Frequency   Percent   Cumulative Percent
Cardiology             144      15,0                 15,0
Paediatrics            122      12,7                 27,7
Oral Surgery           125      13,0                 40,8
Orthopaedics           423      44,1                 84,9
Urology                145      15,1                100,0
Total                  959     100,0

Table 2 Patient Sample Size
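The exclusion and imputation rules described above can be sketched as follows; the question counts and field names are illustrative, not the actual questionnaire layout:

```python
def clean_fixed_sum(fixed_sum):
    """Replace missing fixed-sum entries by 0; the 100-point total is kept."""
    return {dim: (0 if pts is None else pts) for dim, pts in fixed_sum.items()}

def keep_respondent(answers, max_missing=8):
    """Exclude a respondent who left more than max_missing questions blank."""
    return sum(1 for a in answers if a is None) <= max_missing

fixed_sum = clean_fixed_sum({"quality": 60, "speed": 40,
                             "dependability": None, "flexibility": None})
# sum(fixed_sum.values()) -> 100

ok = keep_respondent([4, 5, None, 3] + [None] * 8)  # nine blanks -> excluded
# ok -> False
```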

Data Collection

Outpatients received a questionnaire from the nursing staff before the service encounter. They were asked to score their required performances and the importance of the performance objectives before the service encounter, to prevent poor service delivery from causing a change in requirements.


It would increase the duration between measuring requirements/importance and measuring experiences, exceeding the research period.

Because it took approximately eight minutes to score the questions about patient characteristics, requirements and importance of performance levels, nursing staff was instructed to hand over questionnaires only when (1) the patient arrived eight minutes before the planned time of the appointment or (2) the appointment was delayed by at least eight minutes. This sampling method could cause slightly biased results, because not all patients were able to fill in a questionnaire. This latent sampling error was considered less biasing than handing over questionnaires to every patient: the possibility that planned appointments could be delayed by respondents who could not finish the questionnaire before the appointment was considered more biasing, as it could influence the delay for subsequent respondents, caused not by the physician’s service delivery but by the questionnaire.

Respondents were asked to score the experienced performance of the physician directly after the service encounter, because high response rates were expected when questionnaires are filled in directly after the appointment (Dean, 1999: 6). This could influence the satisfaction score: Das and Sohnensen (2006) showed in their research that satisfaction is scored significantly lower by patients interviewed at home compared to those interviewed at the clinic. As a direct registration of the DBC by nursing staff was needed, allowing patients to send questionnaires from their homes implied an impediment to their anonymity; in that case, the identity of patients would be needed for finding the accompanying DBC afterwards.

To increase response and to compensate for time spent on filling in the questionnaire, respondents were offered free exit from the car park of the hospital. Respondents were asked to return the survey to the nursing staff, enabling the staff to register the diagnosis and/or treatment part of the DBC-code on the survey.

For measuring inter patient/physician gaps, the physician filled in a questionnaire containing the measured items of the patient questionnaire. In line with earlier gap research (Brown and Swartz, 1989; Silvestro, 2005), the physician’s generalized perception on performance within patient groups was compared with individual perceptions on performance of the patient to determine the inter patient/physician gap size. Measurement on each service encounter would consume too much consultation time and therefore could bias the physician’s performance during the measurement period.


physicians group patients according to similar performance requirements and importance from a patient’s perspective. Grouping patients was made possible on the basis of (or a combination of):

1. consultation type

2. diagnose (from a DBC-registration)

3. patient characteristics (age, education level and gender)

4. four types of urgency as perceived by patients

The physician could group patients based on appointments for (1) first consultations in a diagnostic phase, (2) subsequent consultations in a diagnostic phase, (3) treatment, (4) follow-up consultations during and after a diagnosis or treatment and (5) other consultations. The four urgency items that could be used for grouping patients were based on the Dutch Triage System, wherein urgency is defined as a formative construct from a patient’s perspective (Coltman, Devinney, Midgley and Venaik, 2008) determined by (1) the possible chance of dying or irreversible damage when treatment takes too long, (2) the intensity of pain, (3) the intensity of anxiety and (4) the extent of inconvenience in daily routines (RIVM, 2012). Keep in mind that these urgency constructs are formulated from a patient’s perspective and could differ from the physician’s judgement. Still, these items were measured in case the physician is (partly) basing his/her performance delivery on the patient’s perspective on urgency.


During data collection it became clear that physicians experienced difficulties. First, physicians stated that they were not used to grouping patients based on homogeneous performance requirements/importance. Second, patients in a diagnostic phase could not always be grouped based on a DBC, because within some specialties the DBC represented a heterogeneous group with a diversity of initial health care demands and associated performance requirements. Only after diagnosis does the DBC represent a homogeneous group.

Every physician made a distinction between follow-up consultations and other appointments. Even within the follow-up consultations, different categories were made, primarily depending on (1) chronicity and the associated frequency of follow-up appointments / duration of the follow-up period and (2) still active health risks and the associated anxiety. Eight physicians made no distinction between first and subsequent appointments in a diagnostic phase. Distinctions within a diagnostic phase were mostly based on latent health risks and associated anxiety or pain. One physician grouped patients on age.

Research instrument

In line with the conclusions of O’Connor (1994), the questionnaire for patients started with socio-demographic questions. The Dutch SOI-scale (standaard onderwijsindeling) 2006, as published by the Dutch statistical institute CBS, was used for measuring education level. Also the number of previous experiences of the patient with the physician was measured.

Questions relating to the four formative indicators of the urgency construct were included. Medical urgency, anxiety and inconvenience in daily routine were measured by a 5-point Likert-scale. Pain was measured by the widely validated Numeric Rating Scale (NRS score) for pain perception (Aicher, Peil and Diner, 2012: 186).

Measurement of performance requirements was based on the generic performance dimensions of Slack and Lewis (2008). The cost dimension was excluded. As mentioned earlier, Varkevisser (2009: 71) stated that in the Netherlands costs are not clearly visible or relevant for most patients. Therefore it was expected that respondents would experience difficulties in answering these questions. Most of the health care demand is insured and therefore paid indirectly by patients through insurance premiums.


make answering more (time) convenient for respondents. Statements concerning tangibles, assurance, empathy and flexibility were phrased in “should”-wording and could be scored on Likert-scales. Using “should”-wording for measuring expectations in the questionnaire could lead to unrealistically high expectation scores (Parasuraman et al., 1991). Therefore Parasuraman et al. (1991) suggest choosing wording focussing on what customers would expect from companies delivering excellent services. Vandamme and Leunis (1992: 34) mentioned that assessment of the outcome or the delivery process of a service is typically more transaction orientated in health care services than in other services. In line with Vandamme and Leunis, it was expected that patients could experience difficulties in envisioning an excellent physician due to a lack of experience with other (excellent) physicians within the same circumstances. Also the abstraction level needed to formulate expectations of excellent services could lead to problems when the patient has cognitive problems (due to ailment or disease or low intelligence levels). Therefore “should”-wording was chosen for the questionnaire.

As concluded in the literature about the speed dimension, run times (consultation duration) and queue times (access times / waiting times) are directly visible to the patient. Therefore access times for the planned appointments were measured in days. Because durations of consultations are not mentioned when booking an appointment, it was expected that respondents would experience difficulties in answering questions about the duration of the appointment. Also the lack of medical knowledge could cause difficulties in formulating expectations about the duration of a consultation or treatment. Instead, a more qualitative question was included asking whether the patient experienced enough attention during the consultation/treatment. Because this measurement is not time-based, this variable was included in the quality dimension.

For the measurement of dependability, a time-based question was introduced concerning rescheduling tolerance. Respondents were asked within which time span a rescheduled appointment date is acceptable and how many weeks before the rescheduled appointment the patient should be informed. Also the delay of appointments was measured, as the difference between the planned and actual starting time of the service encounter. Wording was carefully chosen to phrase that this time difference concerns a dependability issue instead of a speed issue.

Only flexibility in delivery was measured. The aim was to measure single service encounter evaluations. Therefore flexibility in (1) development of treatments, (2) adjusting the mix of treatments and (3) flexibility in dealing with volume fluctuations were considered less relevant for this research.


questionnaire. These respondents visited the paediatric and orthopaedic physicians. Also 10 specialists were asked about dimensions for segmenting patient groups. This was done to make sure that patients’ responses could be properly segmented afterwards, to compare them with a generalized perception of the physician about a patient group.

After pretesting, eight adaptations were applied in our questionnaire.

1. Respondents were not precise while scoring on a 7-point Likert-scale, so there was no confidence that the proportions represented the opinion in a linear way. A more robust 5-point Likert-scale was used, impeding precision but strengthening validity.

2. Some respondents experienced problems with the fixed-sum score to measure relative importance. Therefore guidance was provided in the procedure for using the fixed sum: an example of the fixed sum was shown without revealing the performance dimensions that are measured.

3. Some statements were rephrased from “my appointment” to “this appointment”, to make sure that the measurement was dealing with the service encounter at hand.

4. A question was included about specific conditions of patients that could impede scoring the questionnaire properly (for instance respondents with aphasia or partly illiterate respondents) and reasons for not finishing the questionnaire (for instance “bad news” consultations).

5. For speed dimensions, absolute scores were used instead of categorized scores. It became clear that some physicians used a non-linear categorized scale for urgency, so more precision was in line with general urgency classifications used by physicians. As an example, some physicians seemed to categorize the required access time as (1) within 48 hours, (2) within one week and (3) elective.

6. A questionnaire version for parents and supervisors of children/pupils was constructed, because these patients were not always capable of judging the service encounter. In this version the perceptions of the parents and supervisors were measured, instead of those of the patient. Therefore some questions that deal only with the patient (like pain scores) were rephrased to make clear that they concern the patient and not the parent or supervisor. Also some quality statements were rephrased to include the parent/supervisor as well as the patient as the grammatical direct object.

7. An additional version of the questionnaire was made for physicians, with statements rephrased to start with “The patient thinks….”, to make sure that an estimation of patient perceptions is scored instead of the physician’s own perceptions on performance levels/importance.

8. The overall evaluation of the service encounter was not always scored. This question was visualised better by marking the location for the score with a box.


Data preparation

After data collection the data matrix was prepared as described below.

1. When a subsequent or treatment consultation of a urologist took place together with a follow-up consultation, and the DBC code started with the number 21 (follow-up DBC), the consultation was regarded as one with the expectations of a follow-up consultation. This procedure was agreed with the urologists' secretarial staff, who coded the questionnaires. An extra variable was constructed recording the DBC type (11 = initial DBC / 21 = follow-up DBC).

2. When a first consultation and a subsequent consultation of an orthopaedic surgeon took place together with a follow-up consultation, the respondent was not placed in a patient category (missing score). When a treatment and a follow-up consultation took place at the same time, this was regarded as a consultation with the expectations of a follow-up consultation. When a treatment and a first or subsequent consultation took place at the same time, this was regarded as a consultation with the expectations of a first or subsequent consultation. This procedure was agreed with the orthopaedics nurse practitioner, who stated that the treatment concerned a quick injection that is (financially) recorded as a treatment consultation.

3. When two DBCs were scored in one consultation and the orthopaedic surgeon grouped patients based on a DBC, the respondent was not placed in a patient category (missing score), because it was unclear which problem was the main reason for the consultation.

4. When the total of the fixed sum score was between 80 and 120, the scores on the categories were recalculated with the formula:
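The formula referred to in rule 4 is not reproduced here. Assuming the intent is a standard proportional rescaling, in which each category score is multiplied by 100 and divided by the respondent's raw total so that the rescaled scores sum to exactly 100 points, a minimal sketch could look as follows (the function name and the treatment of out-of-tolerance totals are illustrative assumptions, not taken from the thesis):

```python
def rescale_fixed_sum(scores, target=100.0, lower=80, upper=120):
    """Proportionally rescale fixed-sum category scores to the target total.

    Respondents whose raw total falls outside [lower, upper] are treated as
    missing, mirroring the 80-120 tolerance described in rule 4.
    """
    total = sum(scores)
    if not lower <= total <= upper:
        return None  # outside tolerance: exclude as a missing score
    return [s * target / total for s in scores]
```

For example, a respondent who distributed 30, 30 and 50 points (raw total 110) would be rescaled to roughly 27.3, 27.3 and 45.5, summing to exactly 100.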
