Numbers telling the tale?: On the validity of patient experience surveys and the usability of their results

Tilburg University

Numbers telling the tale?

Krol, M.W.

Publication date: 2015

Document version: Publisher's PDF, also known as Version of Record

Link to publication in Tilburg University Research Portal

Citation for published version (APA):

Krol, M. W. (2015). Numbers telling the tale? On the validity of patient experience surveys and the usability of their results. CPI Koninklijke Wöhrmann.

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.

Take down policy

If you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim.


ISBN 978-94-6122-305-0
© 2015 Maarten Krol

Cover design: Frank Roose
Word processing/layout: Christel van Well / Doortje Saya, Utrecht
Printing: CPI – Koninklijke Wöhrmann, Zutphen

Numbers telling the tale?
On the validity of patient experience surveys and the usability of their results

Is meten weten?
Over de validiteit van patiëntervaringsvragenlijsten en de bruikbaarheid van hun resultaten

PROEFSCHRIFT

Thesis for obtaining the degree of doctor at Tilburg University, on the authority of the rector magnificus, prof. dr. E.H.L. Aarts, to be defended in public before a committee appointed by the doctorate board, in the aula of the university, on Friday 12 June 2015 at 14.15 hrs

by

Maarten Watse Krol


Doctoral committee

Supervisor: Prof. dr. D.M.J. Delnoij
Co-supervisors: Dr. D. de Boer, Dr. J.J.D.J.M. Rademakers
Other members: Prof. dr. R. Huijsman, Prof. dr. R.T.J.M. Janssen, Prof. dr. J. Kievit, Prof. dr. J.J. Polder, Prof. dr. C. Wagner

The research presented in this thesis was conducted at NIVEL, Netherlands Institute for Health Services Research, Utrecht, The Netherlands. NIVEL participates in the Netherlands School of Primary Care Research (CaRe), acknowledged by the Royal Dutch Academy of Science (KNAW).


“The search for easy ways to measure a highly complex

phenomenon such as medical care may be pursuing

a will-o’-the-wisp.”

Avedis Donabedian

“As no better man advances to take this matter in hand,

I hereupon offer my own poor endeavours.”

Herman Melville, Moby-Dick


Contents

1. General introduction

2. Exploring young patients’ perspectives on rehabilitation care: methods and challenges of organizing focus groups for children and adolescents

3. Consumer Quality Index Chronic Skin Diseases (CQI-CSD): a new instrument to measure quality of care from patients' perspective

4. Complementary or confusing: comparing patient experiences, patient-reported outcomes and clinical indicators of hip and knee surgery

5. Overall scores as an alternative to global ratings in patient experience surveys: a comparison of four methods

6. The Net Promoter Score – an asset to patient experience surveys?

7. Patient experiences of inpatient hospital care: a department matter and a hospital matter

8. Discussion

References

Summary

Samenvatting (summary in Dutch)

Dankwoord (acknowledgements in Dutch)

About the author

Chapter 1: General introduction

Healthcare policy and quality information

In 2006, both the Healthcare (Market Regulation) Act (WMG) and the Health Insurance Act (ZVW) came into force in the Netherlands (Staten-Generaal, 2005; NZa, 2006). The purpose was to introduce a system of regulated competition between three main stakeholders: healthcare providers, patients and health insurers (Enthoven and Van de Ven, 2007). The system of competition involves three ‘markets’; in each of these markets, two of the three main stakeholders interact with each other. As a fourth stakeholder, Dutch governmental bodies (e.g. the healthcare inspectorate and the Dutch healthcare authority) act as regulators of the healthcare system, continuously monitoring the healthcare market and intervening when needed. The figure below, taken from the 2014 Zorgbalans (national report on the performance of the Dutch healthcare system), shows how the stakeholders are related to each other in this system (Van den Berg et al., 2014a).

Figure 1.1 Regulated healthcare competition in the Netherlands

Source: Van den Berg et al., 2014a

The central idea is that these markets should lead to a more efficient and sustainable healthcare system: quality improvement and lower costs of care. Valid, reliable and usable information about the quality of care is deemed central to the system, as a lack of such information is thought to result in competition based on prices, at the expense of quality of care. However, each stakeholder has different information needs, according to their role within the healthcare system and the markets shown above (Van den Berg et al., 2014b).

Patients. One mechanism that is central to the policy of the WMG is the supposed role of the patient as an active consumer in healthcare: as they would when purchasing a particular commercial service or product, patients



are expected to actually choose the best care for themselves (healthcare market). The same goes for choosing a health insurer; every individual is required to have basic health insurance, but is free to choose their own health insurer and switch at the end of each year (health insurance market) (Enthoven and Van de Ven, 2007). Patient choice is supposed to be one of the drivers of healthcare quality. Patients therefore need information about the quality of the care delivered. Similarly, they need information to let them choose a health insurer that fits their needs and interests.

Healthcare providers. Performance information can show healthcare providers how they are performing compared to their fellow providers (and competitors), and therefore also which elements of care need improvement (Porter, 2010). A cycle of monitoring care, interpreting performance information and adapting the care process accordingly can be used by healthcare providers to improve quality of care (healthcare market) (Berwick et al., 2003; Zuidgeest, 2011). Also, healthcare providers are accountable for the quality of care they deliver. They have to publish annual quality reports on several quality indicators to inform the healthcare inspectorate and the public about their performance.

Health insurers. Health insurance companies are private corporations, operating under an extensive regulatory system; most have commercial interests in healthcare. It is in their interest to purchase the most efficient care: the best possible care at the lowest possible price. To this end, health insurers negotiate prices with healthcare providers to purchase healthcare for their clients. In these negotiations, they are expected to not only weigh up the costs, but also quality of healthcare (Grol, 2006). The providers that perform best may receive better contracts, or health insurers may choose to selectively contract specific healthcare providers (healthcare purchasing market). It is also interesting for insurers to acquire a large market share; this strengthens their purchasing position and their influence. To recruit clients, insurance companies try to present themselves as attractively as possible. This may be done for example by offering an attractive premium or by showing that they have contracted the best healthcare providers (health insurance market) (Enthoven and Van de Ven, 2007).

Government. In addition to these three central stakeholders, governmental bodies such as the healthcare inspectorate and the Dutch healthcare authority may use information to assess the quality and safety of healthcare.


Table 1.1 Functions of care quality information for the various healthcare stakeholders

Patients/healthcare users: choosing a healthcare provider; choosing a health insurer
Healthcare providers: internal quality improvement; accountability towards health insurers, government and society
Health insurers: healthcare purchasing; advertising the quality of the healthcare contracted
Government: monitoring quality and safety; encouraging quality, affordability and equity of care

Source: Van den Berg et al., 2014b

In short, information about the quality of care is seen as a vital component in the Dutch healthcare system. There are three main sources for obtaining this information: healthcare institutions, individual healthcare providers, and patients (healthcare users). Healthcare institutions can collect information about how they have organized their care, such as accessibility, facilities, processes and outcomes of care. This may include information collected and recorded by healthcare providers themselves or by independent observers, e.g. by recording treatments, complications, and the prescription and provision of medications (Luce et al., 1994).

Patients can also be a source of information. For example, patients with chronic conditions such as diabetes may be deeply involved in their own treatment and report their own observations on their condition. In this respect, patient self-care may provide healthcare providers with useful information for tailoring treatments (Toobert et al., 2000; Swan, 2009).

Above all, patients are a crucial source of quality information because they are the only ones who can report on aspects of care such as patient-centeredness. In addition, patient experiences do not necessarily concur with those of healthcare providers (Sitzia and Wood, 1997; Burney et al., 2002; Zuidgeest et al., 2011).

Before we move on to how to obtain, measure and analyse patient experiences, a few other issues will be discussed first. For instance, what constitutes ‘quality of care’? How can it be operationalized? And which parameters or variables should be taken into account?

Measuring quality of care from the patients’ perspective

Defining quality of care indicators

The World Health Organization (WHO) states that quality of healthcare can be divided into six domains (WHO, 2006). According to the WHO, healthcare should be:

- effective (evidence-based, resulting in improved health and based on need);
- efficient (maximizing the use of resources and avoiding waste);
- accessible (timely, geographically reasonable, at an appropriate location);
- acceptable/patient-centred (taking into account preferences, aspirations and cultures of healthcare users);
- equitable (not varying because of personal characteristics of healthcare users);
- safe (minimizing risks and harm to healthcare users).

The fourth domain makes it clear that the WHO considers the patients’ perspective highly important. This underlines the relevance of studying patients’ perspectives on the quality of care. Evaluations and measurements are needed in order to examine whether healthcare is in accordance with these six domains. To this end, aspects of quality of healthcare can be translated into measurable units, commonly referred to as quality of care indicators.

Structure indicators concern the organization of care and the preconditions for safe and effective care; they typically record whether something is in place (it is organized, or it is not) and are often considered prerequisites for good care. The data needed for determining the structural indicator are often collected by the healthcare providers themselves or by an (independent) observer, but are sometimes reported by patients (for example, the accessibility of a healthcare facility).

Process indicators concern the actions of healthcare professionals and the care process. Examples are how often a professional complies with certain guidelines, how easily patients can get in touch with a care professional, whether patients were provided with clear and accurate information on possible treatments, etc. Process indicators may show how the professionals interact and communicate with their patients (and possibly with other healthcare professionals). This can be evaluated either by healthcare providers, by patients, or both.

Outcome indicators, finally, report on the outcome or result of the provided care. This may for example be the success of an operation, the status of the health problem of the patient at the end of the treatment, or even the overall quality of life. Outcome measurements may focus not only on technical, biological and physiological outcomes, but also on psychosocial outcomes for the patient, such as quality of life (Wilson and Cleary, 1995). Patient surveys may also include satisfaction measures such as a global rating of the quality of care that patients received, or whether they would recommend their healthcare provider to other patients.

Clinical outcomes are often reported by healthcare providers themselves, e.g. mortality rates and complications in treatment (McIntyre et al., 2001). Where outcome indicators measure adverse outcomes that could point to safety issues, the Healthcare Inspectorate (IGZ) requires healthcare providers to record the information. Increasingly however, the various stakeholders are looking for ways in which the patient can also indicate whether a treatment has improved their functional status.


Table 1.2 Structure, process and outcome indicators from healthcare providers’ and patients’ perspectives

Healthcare providers

What do they measure?
- Structure indicators: organization of care, preconditions for safe and effective care
- Process indicators: practising of healthcare professionals and the care process
- Outcome indicators: outcome or result of provided care

Examples
- Structure indicators: quality management systems, organization of patients’ privacy, staff training levels
- Process indicators: guideline adherence
- Outcome indicators: mortality rates, complications

Patients

Specific term for measurement from the patient’s perspective
- Structure indicators: Patient-Reported Experience Measure (PREM)
- Process indicators: Patient-Reported Experience Measure (PREM)
- Outcome indicators: Patient-Reported Outcome Measure (PROM)

Examples of indicators
- Structure indicators: accessibility of the healthcare facility
- Process indicators: communication with professionals, providing clear and understandable information, shared decision-making
- Outcome indicators: success of operation, status of the health problem at the end of the treatment, quality of life, appraisal of outcome (satisfaction)

Measuring quality of care indicators from the patients’ perspective

The CQ-index


Box 1.1 The Consumer Quality Index

What is the Consumer Quality Index (CQ-index or CQI)?
- National standard for measuring healthcare quality from the perspective of healthcare users.
- Based on the American CAHPS (Consumer Assessment of Healthcare Providers and Systems) and Dutch QUOTE (QUality Of care Through the patient’s Eyes) instruments.
- Collection of instruments (surveys or interview protocols).
- Collection of protocols and guidelines for sampling, data collection, analysis, and reporting formats.

What is measured by the CQ-index?
- What healthcare users find important in healthcare.
- What their actual experiences are.
- How they rate the overall quality of care.

What types of questions are included in the CQ-index?
- Frequency with which quality criteria are met: never, sometimes, usually, and always.
- Importance of quality criteria: not important, fairly important, important, and extremely important.
- Access to care and the degree to which lack of access is perceived as a problem: a big problem, a small problem, not a problem.
- General rating of the quality of care: scale from 0 (worst possible) to 10 (best possible), or likelihood of recommending: scale from 0 (not at all likely) to 10 (extremely likely).
- Effects of care and adherence to professional guidelines.
- Background characteristics: age, gender, ethnicity, education, and general health status.

Source: Sixma et al., 2008
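The 0–10 ‘likelihood of recommending’ item in Box 1.1 is the basis of summary scores such as the Net Promoter Score examined in Chapter 6. As a minimal illustration: the scoring convention below (promoters score 9–10, detractors 0–6, passives 7–8) is the standard NPS definition, not a prescription from the CQ-index guidelines, and the function name is ours.

```python
def net_promoter_score(ratings):
    """Net Promoter Score on a 0-10 'would you recommend' item:
    percentage of promoters (9-10) minus percentage of detractors (0-6);
    ratings of 7-8 count as passives and only enlarge the denominator."""
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Two promoters, two passives, two detractors cancel out:
print(net_promoter_score([10, 9, 8, 7, 5, 2]))  # 0.0
```

Note that the resulting score ranges from −100 to +100, which already hints at the interpretation questions Chapter 6 raises for this measure.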

In developing a CQ-index, qualitative research is used to examine whether the most relevant questions have been included, whether participants understand what is meant by each survey item, and whether the response categories reflect their actual experiences. Next, a quantitative study is carried out using the survey among larger samples of patients to establish the psychometric properties of the questionnaire.

The survey data are used for comparing the performance of care providers, using multilevel analyses controlled for case mix (Zaslavsky et al., 2001). This not only takes account of the clustering of experiences within each healthcare provider, but also of differences in the numbers of respondents per provider (Damman et al., 2009a).
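A key effect of such multilevel analyses is that each provider’s raw mean is pulled toward the overall mean, most strongly for providers with few respondents. The sketch below isolates that shrinkage step in empirical-Bayes (random-intercept) style; it is a simplification, not the full CQ-index procedure: the within- and between-provider variance components are assumed to be known, the case-mix adjustment is omitted, and the function name is ours.

```python
from statistics import mean

def shrunken_provider_scores(scores_by_provider, var_within, var_between):
    """Pull each provider's raw mean toward the grand mean.

    The weight n / (n + var_within / var_between) is the classic
    random-intercept reliability: providers with many respondents keep
    most of their raw mean, small providers are shrunk toward the
    overall mean, so chance differences are damped."""
    grand = mean(s for scores in scores_by_provider.values() for s in scores)
    k = var_within / var_between  # shrinkage constant
    result = {}
    for provider, scores in scores_by_provider.items():
        n = len(scores)
        w = n / (n + k)  # reliability of this provider's raw mean
        result[provider] = w * mean(scores) + (1 - w) * grand
    return result
```

With var_within = 1.0 and var_between = 0.5, a provider with only two respondents gets weight 2 / (2 + 2) = 0.5, so its score moves halfway to the grand mean, while a provider with twenty respondents keeps over 90% of its raw mean.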

The results are then presented to all parties involved, focusing on the information needs of the various stakeholders and their respective questions:
- Are there significant differences between providers? (De Boer et al., 2011)
- Which providers perform best and which are underperforming?
- What are the main areas for quality improvement?

To this end, the overall results are publicly reported and healthcare providers may receive individual performance reports. Depending on the owner of the data, these reports may be made available on request.

PROMs

Outcome indicators from the patient's perspective (PROMs) have been around for several years, but have increasingly sparked interest among various stakeholders over the last decade, particularly when measuring the quality of care is involved.

In the Netherlands, including PROMs in patient experience surveys is also advocated, and they have been implemented in surveys on elective surgery (e.g. the CQI Hip/Knee Replacement and CQI Varicose Veins (Miletus, 2014)). The use of PROMs in combination with PREMs may provide more information about the quality of healthcare: not only about the care process and communication with professionals, for instance, but also about the outcomes of care.

Current issues in patient experience research

Eight years after the implementation of the 2006 Healthcare (Market Regulation) Act in the Netherlands, the first evaluations have been made of its policy assumptions and of whether quality of care information has lived up to its expected value in the relationships between the main stakeholders. One of these evaluations focused specifically on the CQ-index. An overview of the studies in its first five years of existence gave a rather positive picture of Dutch CQ-index patient experience surveys, but also identified some points of concern and opportunities for improvement (Hopman et al., 2011). For instance, in some cases there were conflicts of interest between stakeholders regarding the contents of the survey and the purpose of the research. The results were also not always understandable for all stakeholders, which impeded their use of the data.

In this respect, some issues arise regarding two major criteria in survey methodology: the validity of patient experience surveys and the usability of their results. The validity of the surveys hinges on the extent to which the surveys include relevant aspects of quality of care (face and content validity) and whether the survey results are in concordance with corresponding measures (convergent validity) (Streiner and Norman, 1999a; Gravetter and Forzano, 2012). The usability of patient experience survey results is about whether stakeholders are able to use these results. For example, are they able to interpret the results that are presented? Can they act upon these results to choose a healthcare provider (patients), to improve care (healthcare providers) or to purchase good quality care (health insurers)? In this section, we will briefly consider these issues for the current Dutch healthcare system.

Validity of patient experience research

In recent years, patient surveys have increasingly been tailored to specific patient groups (e.g. people with dyslexia or aphasia, or paediatric oncology) (Tates et al., 2009; Ruijter et al., 2014). This development emerged partly from the criticism that patient surveys were not specific enough for patients to fully report their experiences, or for healthcare providers to identify possibilities for quality improvement. It was also suggested that there could be differences in the perspectives of patients from differing cultural backgrounds and, in turn, in the aspects of care that they think are most relevant (Asmoredjo et al., 2013). This implies that a generic survey may not cover all the aspects of care relevant to subgroups in the population. The insights gained by developing these ‘specialized’ surveys may be used as examples for future research, as they often involve innovative methods for engaging patients in research.

Over recent years, a number of Dutch patient surveys used in nationwide studies have been shortened considerably (Triemstra et al., 2008). A fear often expressed in discussions about the CQ-index is that the length of the surveys has made them too demanding for patients; moreover, a shorter survey may be sufficient to get a rough indication of the level of quality within a certain healthcare sector, and shorter questionnaires are cheaper to send and to analyse. For health insurers, an important reason for shortening surveys is to primarily include items that are able to show statistical differences in performance between healthcare providers. However, this raises issues regarding content validity: it remains important to include the quality of care aspects most important and relevant to the specific patient group.

The addition of PROMs to CQI questionnaires is partly driven by the use of patient experience in healthcare purchasing by health insurers. Since 2012, a number of nationwide studies using CQI surveys on elective care (Hip/Knee arthroplasty, Varicose vein treatment and Cataract treatment) have included PROMs, both generic (EQ-5D) and treatment-specific (Kind et al., 2005; Miletus, 2014). On average, process indicators from CQI surveys have shown limited differences in healthcare provider performance (Hopman et al., 2011). In this respect, health insurers in particular hope that PROMs will yield more differences between healthcare providers.

Use and usability of patient experience research

Healthcare market

Victoor et al. studied how patients choose a healthcare provider and the role that quality information plays in that choice (Victoor et al., 2012a). Their research showed that a great deal of effort and funding by the government was directed at presenting information about quality of care, including patient experiences. Damman et al. investigated how patients handle and interpret such quality information, leading to clear recommendations on how to present this information to the public (Damman et al., 2009b; 2012). Nonetheless, presenting results in a clear and understandable way to patients, using suitable media, remains a challenge (Zwijnenberg et al., 2012).

In order to determine whether these efforts are worthwhile, it is important to know more about the willingness of patients to actually use this information (Victoor et al., 2012a). Research shows that patients use quality information very sparingly and selectively when choosing a caregiver (Faber et al., 2009). And when they do, it is not necessarily decisive; patients seem to rely especially on advice from their general practitioner and from friends or family for the actual choice of a healthcare provider (Bes and Van den Berg, 2013). The proximity of the healthcare provider is also important in their choice (Victoor et al., 2012b; 2014a). The fact that many patients do not use quality information does not necessarily have anything to do with reluctance or disinterest. In fact, not all patients are able to interpret or use the information effectively (Rademakers et al., 2014). A recent study shows distinct differences in the characteristics of patients who sought and/or actively used information in choosing a provider, and those who did not (Victoor et al., submitted). As research so far suggests, not all patients are willing or able to act as informed consumers who make active choices. There are claims that the amount of information presented to the public and the multitude of data sources make it difficult for people to effectively search and interpret information. More than half (55%) of the Dutch population have difficulty finding comparative information about the quality of hospitals on the Internet (Nijman et al., 2014). Summarizing information and guiding people to appropriate websites might help improve this situation (Damman et al., 2012).

Internal improvement and accountability

Healthcare providers do not always find it easy to use patient experience survey results for internal quality improvement. The results are often considered too abstract (e.g. limited survey topics) to identify possibilities for improvement, or not detailed enough (e.g. only available at institutional level) to subsequently tailor interventions. Although most healthcare providers are highly motivated to deliver high-quality care, they may need guidance to interpret and use information from quality of care research (ActiZ et al., 2011). Consequently, targeted action aimed at quality improvement is often difficult to implement in healthcare organizations and requires additional effort (Bosch et al., 2007; Winters et al., 2014).

Healthcare purchasing market

One of the aspects of regulated competition is that health insurers may choose to selectively contract healthcare providers when purchasing care. Selective purchasing of care is based on the insurer’s own criteria, in which quality indicators can play a role. Information about the quality of care is also used to discuss potential quality improvements with healthcare providers. These improvement plans and their results are used to differentiate fees for healthcare providers. Even though selective purchasing has increased in the past two years (NZa, 2014), it has been established that the use of quality of care information is still limited compared to other factors. Negotiations are primarily about costs, but also to some extent about healthcare volume (number of treatments) as a proxy for quality (Westert et al., 2010; NZa, 2014; Van Kleef et al., 2014). As mentioned earlier, one of the reasons for the limited use of quality of care information is that the results of many patient experience surveys show little difference in the quality of care between providers (Hopman et al., 2011). Consequently, it is difficult for health insurers to use this information for selective purchasing purposes. Also, rewarding the providers who perform best is irrelevant if all scores are similar.

For purchasing purposes, health insurers therefore need survey items that can distinguish the best-performing providers. The same goes for items used to get an overall view of the patient’s experience of quality of care, for instance a global rating. By carefully analysing and presenting such data, researchers may help the work of health planners in purchasing the best care (Zema and Rogers, 2001). Additional incentives might potentially improve the case for quality of care in healthcare purchasing (Custers et al., 2007).

Health Insurance Market

Some health insurance companies advertise their ability to support patients in choosing healthcare providers, or to enable patients to receive the best possible care. In order to provide sound advice, many insurers use information about the quality of care. However, this service provided by health insurers does not seem to be a major determining factor for Dutch people when choosing a health insurer. Even though the 2006 WMG made it possible for people to change their insurer every year (sparking competition between health insurers), the annual percentage of people changing insurer since 2006 has been between 4 and 10% (Romp and Merkx, 2013; Reitsma-Van Rooijen and De Jong, 2014). Freedom to choose a healthcare provider is an important issue for many people when selecting a health insurer (Reitsma-Van Rooijen et al., 2011). But more importantly for the system, the reason most widely stated for switching during recent years was the premium (approx. 40%), with less than 1% of people changing insurers because of the quality of contracted care (Brabers et al., 2012). Six out of ten Dutch people are hardly aware of the differences between health insurers, and 43% are unable to find comparative quality information on insurers on the Internet (Nijman et al., 2014).

Therefore, in the competition between health insurers to enrol people into their schemes, quality of care does not (yet) seem to play a major role. As with the healthcare market, there are initiatives to facilitate comparisons of health insurance policies for the general public, by gathering and summarizing quality information (Consumentenbond, 2014; Independer, 2014). These mostly involve (commercial) websites through which members of the public can enrol in a particular health insurance scheme. Such endeavours to help people select an appropriate insurance policy may not be in vain. The Consumentenbond (Dutch organisation for the protection of consumers) reported that Dutch people had over 1,300 different health insurance policies to choose from at the end of 2014 (VARA, 2014).

In short, both the validity of patient experience surveys and the usability of their results remain points of concern for the various stakeholders. This acknowledges the relevance of further research to better understand and then improve the validity of patient experience surveys and the usability of their results.

This thesis

The studies in this thesis are intended to contribute to the understanding or even the improvement of the validity of patient surveys, the usability of their results, or both. This thesis seeks to answer two general questions:

1. How can the validity of patient experience surveys be improved?

2. How can the usability of patient survey results be improved for stakeholders?

This thesis includes six studies on these subjects, as shown in Table 1.3. Validity being a broad concept in itself, it may be useful to define the dimensions of validity considered in this thesis: face validity, content validity and construct validity.

Face validity is the degree to which a survey item at first glance seems to cover the concept it aims to measure (Streiner and Norman, 1999b; Mokkink et al., 2012). This may involve a subjective judgement, which can vary according to the specific individual. Content validity concerns the relevance of survey items regarding the concept being measured, and whether the survey as a whole covers this concept (Streiner and Norman, 1999c; Mokkink et al., 2012). As an element of construct validity, convergent validity can be assessed by examining how closely an item or measure is related to other measures to which it could theoretically be related (Streiner and Norman, 1999a).
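Convergent validity, as defined above, is commonly quantified as the correlation between an item and the related measure it should theoretically track. A small, self-contained sketch of Pearson’s r (in practice a statistics package and significance testing would be used; the function name is ours):

```python
def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores:
    covariance divided by the product of the standard deviations."""
    if len(x) != len(y) or len(x) < 2:
        raise ValueError("need two lists of equal length >= 2")
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A global rating that rises together with a related 'recommend' item
# (r close to 1) would support convergent validity:
print(pearson_r([6, 7, 8, 9], [5, 7, 7, 10]))
```

A high positive r between, say, a global rating and a recommendation item would support convergent validity; an r near zero would cast doubt on it.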

Usability concerns the extent to which the results of patient survey research are comprehensible and interpretable for stakeholders, enabling them to use the results.


Table 1.3 Relating the study aims of the thesis studies to the two general research questions

Chapter 2 (rehabilitation care): gathering survey content through focus group meetings and online focus groups
- Validity: organizing focus groups to obtain relevant information for modifying a survey for adult patients, to create valid measurements for children and adolescents.
- Usability: –

Chapter 3 (chronic skin disease care): developing, evaluating and optimizing a patient experience survey
- Validity: underpinning the content validity of a patient experience survey during development, psychometric testing and optimization.
- Usability: –

Chapter 4 (hip/knee arthroplasty): PREMs, PROMs, and clinical indicators
- Validity: inclusion of PROMs in a patient experience survey and assessment of their relationship with PREMs (construct validity).
- Usability: determining associations between PREMs, PROMs and clinical indicators; linking patient experiences, effectiveness and safety of care.

Chapter 5 (nursing home care): constructing overall scores from patient experiences
- Validity: constructing an overall score to summarize patient survey results; does this provide a valid representation of patient experiences?
- Usability: do multiple methods of summarizing patient experiences lead to different results, and which are most usable?

Chapter 6 (inpatient and outpatient hospital care): including the NPS in a patient experience survey
- Validity: construct and convergent validity of an alternative summary score (the NPS) included in a patient experience survey.
- Usability: examining response patterns of ‘summarizing’ measures, in search of improved differentiation.

Chapter 7 (inpatient hospital care): specificity of data and data analysis from patient experience surveys
- Validity: the level of detail in measuring experiences; the appropriate level for obtaining valid measurements and analyses (department vs. hospital).
- Usability: determining the specificity of data/results; the use of multilevel structures in analyses to identify the appropriate level of influence.


Many studies have been published in the past on the development of patient surveys, including CQI questionnaires. However, it is not always evident how the contents of these questionnaires were determined; many studies do not describe the qualitative data collection used to gather the aspects included in the questionnaire. Also, as mentioned earlier, there is increasing interest in tailoring surveys to specific patient groups, of which Chapter 2 is an example. Chapter 3 describes the entire process of development and psychometric testing of a patient experience survey, in this case the CQI on Chronic Skin Diseases. For instance, how are the contents of the survey obtained and which tests are used to investigate its validity and reliability?

Traditionally, patient surveys consisted mainly of patient evaluations of the care process (PREMs), but there is increasing interest in the evaluation of treatment outcomes by patients themselves. In this respect, it is interesting to look at the addition of outcome indicators (PROMs) to patient surveys. It is important to note that a PROM score may depend not only on various aspects of care, such as the organization of the care process, but also on patient characteristics. Therefore, in order to obtain a more comprehensive view of quality of care, a combination of structure, process and outcome indicators (i.e. PREMs and PROMs) may be useful. So what do PROMs specifically add to the questionnaires and how do PREMs and PROMs relate to each other? By examining this, we can get an idea of their construct validity. Chapter 4 provides an example for a patient survey on Hip/Knee arthroplasty. Moreover, this study also assesses the associations between PREMs, PROMs and clinical indicators reported by healthcare providers themselves, thus attempting to link patient experiences, effectiveness and safety of care.
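Associations like these between PREMs and PROMs are typically quantified with correlation coefficients. As a minimal, hedged sketch (not necessarily the analysis used in Chapter 4; all paired scores below are invented for illustration), a Pearson correlation in plain Python:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equally long lists of scores,
    e.g. a PREM scale score and a PROM change score per respondent."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired scores: PREM scale score (1-4) vs. PROM improvement (0-100)
prem = [3.1, 3.6, 2.4, 3.9, 2.8]
prom = [55, 60, 30, 80, 50]
print(round(pearson(prem, prom), 2))  # 0.96
```

In practice, rank-based (Spearman) correlations may be preferred for ordinal survey scores, and adjustment for patient characteristics, noted above as a driver of PROM scores, would come on top of such a bivariate check.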

Patient surveys generally produce numerous results: multiple performance scores – both detailed and aggregated – figures, tables, and so on. Is it perhaps possible to summarize these results by calculating an overall score? If so, this information could be used to get a quick view of healthcare quality, without having to go into too much detail. If such overall scores are used as summary measures for patient experiences, it is an important condition that these scores should be representative of actual patient experiences, thus relating to their construct validity. In Chapter 5, the construct validity and usability of a number of potential overall scores are investigated, using data from the CQI on Nursing Home Care.
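One straightforward way to construct such an overall score is a mean of the per-dimension experience scores, optionally weighted by patient-reported importance. A minimal sketch; the dimension names, scores and weights below are purely hypothetical:

```python
def overall_score(dimension_scores, weights=None):
    """Summarize per-dimension patient experience scores into one overall
    score: a plain mean, or an importance-weighted mean when weights
    (e.g. mean importance ratings) are supplied."""
    dims = list(dimension_scores)
    if weights is None:
        weights = {d: 1.0 for d in dims}
    total_weight = sum(weights[d] for d in dims)
    return sum(weights[d] * dimension_scores[d] for d in dims) / total_weight

# Hypothetical dimension scores on a 1-4 experience scale:
scores = {"communication": 3.6, "information": 3.1, "accessibility": 2.8}
importance = {"communication": 3.5, "information": 3.0, "accessibility": 2.0}
print(round(overall_score(scores), 2))              # 3.17 (unweighted)
print(round(overall_score(scores, importance), 2))  # 3.24 (importance-weighted)
```

Different summarizing choices (unweighted versus weighted) can yield different overall scores, which is one reason the construct validity of such summary measures needs checking.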


Another summarizing measure is the Net Promoter Score (NPS), which originates from marketing research and has also been proposed for measuring patient experiences with healthcare. Because the NPS is considered to replace some existing questions in patient surveys as an outcome indicator, their respective relationships with actual patient experiences are investigated. Potentially, the NPS allows for more differentiation in measuring the willingness to recommend a healthcare provider. Also, its methodology includes the calculation of a single score, which is supposed to represent the loyalty of clients or patients. Chapter 6 contains this study on the construct validity of the NPS, using data from the CQI Inpatient Hospital Care and CQI Outpatient Hospital Care.
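For reference, the conventional NPS computation uses the standard cut-offs popularized by Reichheld; the ratings below are invented:

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 'would you recommend' ratings.
    Promoters score 9-10, detractors 0-6, passives (7-8) are ignored;
    the NPS is the percentage of promoters minus that of detractors."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical ratings: 4 promoters, 3 passives, 3 detractors
ratings = [10, 9, 9, 10, 8, 7, 7, 6, 5, 3]
print(net_promoter_score(ratings))  # 10.0
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters), so it compresses the full response distribution into a single number.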

Another aspect of the level of detail of survey results is the level at which patient experiences are measured, analysed and presented. The validity of results depends in part on whether the right unit of observation is being described. In the case of hospital care, it may be important to know the quality of care in different departments, in addition to aggregated information at the hospital level. It has already been mentioned that the specificity of the data is very important for its usability. In this case, this was examined for data from the CQI Inpatient Hospital Care, as presented in Chapter 7. This included a measurement issue concerning multilevel structures: is it relevant to include the department level in analyses, in addition to the hospital level? And are there perhaps structural differences between types of departments? If so, this might provide more specific and therefore more useful information.
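How much the department level matters can be screened by estimating the share of score variance that lies between departments, i.e. an intraclass correlation (ICC). A minimal one-way ANOVA sketch on invented, equally sized groups (the studies described here use full multilevel analyses, not this shortcut):

```python
from statistics import mean

def icc_oneway(groups):
    """One-way ANOVA estimate of the intraclass correlation (ICC) for
    scores nested in units (e.g. patients within departments), assuming
    equal group sizes: ICC = (MSB - MSW) / (MSB + (n - 1) * MSW)."""
    k = len(groups)
    n = len(groups[0])
    grand = mean(x for g in groups for x in g)
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Invented experience scores (1-4 scale) for three departments:
departments = [
    [3.2, 3.4, 3.3, 3.5],  # department A
    [2.1, 2.3, 2.2, 2.0],  # department B
    [3.8, 3.9, 3.7, 3.6],  # department C
]
print(round(icc_oneway(departments), 2))  # 0.98: most variance lies between departments
```

A substantial ICC at the department level would suggest that reporting hospital-level averages alone hides relevant differences between departments.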


Chapter 2: Organizing focus groups for young patients on rehabilitation care 27

2

Exploring young patients’ perspectives on rehabilitation care

Methods and challenges of organizing focus groups for children and adolescents

This article was published as:



Introduction

The importance of the patients’ perspective in quality of care research is widely recognized (Fung et al., 2008; Delnoij, 2009). Patients can provide information that cannot be collected otherwise, for example about how they experience the care process, or the perceived effectiveness of care.

Although many studies have focussed on the preferences and experiences of patients in healthcare, not every patient group is heard. Children, for instance, are often not included in patient surveys. Even though many studies claim to focus on the experiences of children in healthcare, they often rely on information reported by parents as proxies for their children (House et al., 2009; Lindeke et al., 2009). Children themselves, however, have their own healthcare preferences and experiences, which do not necessarily concur with those of their parents (Van Beek, 2007; Knopf et al., 2008).

Fortunately, the importance of involving children in quality of care research is more and more recognized (Lightfoot et al., 1999; Siebes et al., 2007; Watson et al., 2007; Lindeke et al., 2009; Pelander et al., 2009). Children and adolescents are perfectly able to give their own opinions about their healthcare, if they are given the right opportunity to do so. Children as young as 8 years old are capable of participating individually in (online) surveys (Borgers et al., 2000; Lindeke et al., 2009). Before this age, however, the cognitive skills necessary for self-reflection and the ‘question answering process’ are usually not yet developed (Piaget and Inhelder, 1969; Borgers et al., 2004).

Our research aimed to give young patients an opportunity to speak up for themselves in the development process of two new patient experience surveys on rehabilitation care [new additions to the Consumer Quality Index (CQI), a family of surveys measuring patient experiences in Dutch healthcare (Delnoij et al., 2006)]. Rehabilitation care covers a variety of specialized care (such as physical, occupational and speech therapy), aimed at enhancing patients’ functional abilities. The causes of these functional disabilities can also be diverse, such as congenital disorders, (traffic) accidents, sports injuries or cognitive disorders. In the case of children and adolescents, the first two causes are most common in rehabilitation care. In the Netherlands, each year about 18,000 young patients (<18 years old) receive rehabilitation care, constituting about 23% of the total number of rehabilitation patients (Revalidatie Nederland, 2011).


The research focused on two age groups: children (aged 8–11 years old) and (pre)adolescents (aged 12–15 years old). The division of these age groups was based on developmental differences: in adolescence, young people become more autonomous and may hold different views than they did in their childhood (Kyngäs, 2004; Livingston et al., 2007). Because of these developmental differences, Dutch law makes a distinction between the responsibilities of healthcare providers regarding these age groups; adolescents should be more actively involved by healthcare providers in their care than children (Law on Medical Treatment Agreement, 1994).

To explore the preferences and experiences relevant to patients, focus group research can be used (Sofaer, 2002). Focus group meetings have proved to be a useful and suitable method for involving a specific group of people, such as patients who visit the same healthcare provider or suffer from the same specific illness (Krueger and Casey, 2000). Focus groups aim to provide an encouraging and safe situation for participants to freely discuss their experiences and opinions, for instance regarding healthcare. This is also true, with some modifications, for children and adolescents (Peterson-Sweeney, 2005).

More recently, online focus groups have also proved to be a popular and accessible way of involving patients in quality of care research (Moloney et al., 2003; Tates et al., 2009). An online forum might prove a useful alternative to a focus group meeting; participating in an online forum can be done from the comfort of one’s own home (or any other place with an Internet connection), regardless of the time of day. It also provides more anonymity. Adolescents are usually very familiar with online forums through social media. Nowadays, the same applies more and more to children as well (Kennedy et al., 2003; Kenny, 2005). There have been encouraging experiences in using online focus groups for obtaining children’s and adolescents’ views on healthcare (Zwaanswijk et al., 2007; Tates et al., 2009).

In this article, we will present the organization and design of our focus group meetings and online focus groups. In doing so, we will discuss the usefulness and challenges of both types of focus groups in our research and aim to answer the following research question:



Methods

Recruitment of participants

The research took place during summer 2011. Young patients from two Dutch rehabilitation centres were recruited to participate. They could choose to either participate in a focus group meeting or in an online focus group. Patients were eligible for selection based on their age (either 8–11 for the children’s groups or 12–15 years for the adolescents’ groups). Also, they should have had at least one appointment at the rehabilitation centre in the past 12 months. As the research involved minors, the invitation was addressed to the children and their parents. The invitation included an outline of the research for the parents and a specific letter for the young patients, which stressed the importance of the research and that participants would receive a gift certificate.

The meetings were situated at the rehabilitation centres. The aim was to organize two meetings per age group in both centres. Also, online focus groups were constructed, one for each age group. The online focus groups consisted of a 1-week online forum.

According to the Dutch Medical Research Involving Human Subjects Act, this study did not require ethics approval. An explanatory statement of the governmental agency overseeing compliance with the Act defines medical research as (in short): research aimed at acquiring generalizable results about diseases and health (aetiology, pathogenesis, symptoms, treatment regimes etc.). Also, research that does not subject people to certain procedures or require them to act in a certain way is not medical research in the sense of the Act. Nevertheless, informed consent was obtained from the participants’ parents.

Tailoring focus groups to children and adolescents

Although there are similarities, a focus group involving children or adolescents demands a slightly different approach than a standard (adult) focus group (Peterson-Sweeney, 2005).


To counter the potentially intimidating setting of a focus group meeting, we decided to form smaller groups, aiming to recruit six participants for each children’s group and eight for each adolescents’ group. Also, the moderator and the assistant were alert to any stress, fear or agitation in the participants (Heary and Hennessy, 2002). In order to put the children and adolescents at ease, refreshments, cookies and crisps were available. The atmosphere of the meetings was kept as informal as possible (Peterson-Sweeney, 2005). Also, the participants were shown beforehand in which room their parent(s) would be during the meeting.

For adolescents, separate meetings were organized for boys and girls. Puberty marks a period in which adolescents become very self-conscious about themselves and their bodies. In order not to deter participants from discussing personal subjects such as relationships and sexuality, it was decided to organize same-sex focus groups (Heary and Hennessy, 2002; Wiegerink et al., 2006).

Design of focus group meetings

The meetings were designed by the WESP foundation. To start off, the moderator gave the participants a short outline of the meeting and stressed the confidentiality of what was said during the meeting (Horner, 2000; Heary and Hennessy, 2002). She also emphasized that the participants were ‘experts by experience’; the meeting was exclusively about their perceptions and there were no such things as ‘good’ or ‘bad’ answers. Participants should feel that their opinions matter and that their input is taken seriously.


After the interviewing sessions, there was a joint discussion about which aspects of rehabilitation care are important to children or adolescents. For example, the moderator asked the participants to imagine their rehabilitation centre wanted to know how well it performed according to their young patients. Which questions should the centre then ask the children or adolescents who are being treated there? The answers from the participants were written down by the moderator on a flipchart. To conclude the meeting, participants received a gift certificate, and an evaluation form about the meeting, which they could return by mail anonymously.

Design of online focus groups

For each of the age groups an online focus group (i.e. online forum) was organized, in the same manner as Tates and colleagues (2009). Applicants received the URL of the online forum and a personal username and password. Considerable attention was paid to making the texts on the website clear and comprehensible. Also, some rules of conduct were published on the site, for instance about language (e.g. no profanities) and anonymity. The forums were accessible for a week. Applicants received a reminder a few days before the research began and also on the starting day. On the first 5 days, a question was posted each day by the researchers. These were questions also included in the focus group meetings, as presented in the appendix (Wednesday’s topic being the concluding question of the focus group meetings). Participants were invited to answer the questions and to comment on both the questions and each other’s answers. The researchers monitored the discussion and asked additional questions if necessary.

Results

Response and participation


Feasibility and applicability of the focus groups

The duration of the meetings proved to be adequate. After 90 minutes, no new information was obtained from either the children or the adolescents. A shorter duration, however, would not have been sufficient to complete the programme.

Most children and adolescents were fully motivated to participate, although some needed extra encouragement.

With regard to the children’s focus groups, a few children needed reassurance from the researchers, but most of them gradually showed more enthusiasm. The mutual interviewing strategy suited most participants. However, some participants resorted to literally repeating the answers their interviewing partner had just given, probably because of insecurity. But for the most part, participants went ahead enthusiastically and seemed to enjoy the experience. This was also reflected by the evaluation forms that were returned afterwards; nine children returned the form and seven of them stated they had liked participating (two said they ‘don’t know’).

With regard to the adolescents’ focus groups, the results were more extensive and detailed than those of the children. Within the group meetings, however, both boys’ groups proved to be far less informative than the girls’ focus groups. This was observable during the meetings, but was also reflected by the results; the girls’ lists of answers were far more extensive than those of the boys. Also, in one of the boys’ focus groups, participants sought to outdo each other by bragging, resulting in limited responses. Nonetheless, of the 10 evaluation forms that were returned, eight (including four boys) were positive about participating, the other two being neutral. Participants were also asked on this form what they thought of the same-sex composition of the focus groups. Eight of them (five girls, three boys) stated they had no preference whether it was a same-sex group or a combined group. Only one girl preferred a same-sex group.



Table 2.1 Response to focus group meetings and online forums per centre and age group

Focus group meetings
                           Invitations  Applications  Participants  Mean age (range)
Children, centre 1         229a         5             4             10.3 (8-12)
  (second meeting)                      5             4             9.5 (8-11)
Children, centre 2         130a         7             5             9.5 (9-11)
  (second meeting)                      2             (cancelled)
Adolescents (f), centre 1  124          5             4             14.5 (14-15)
Adolescents (f), centre 2  77           6             5             14.4 (14-15)
Adolescents (m), centre 1  131          5             5             13.5 (13-15)
Adolescents (m), centre 2  63           3             3             12.0 (12-12)

Online focus groups (> 1 post)
Children, centre 1         229          4             3             ?
Children, centre 2         130          3             2             ?
Adolescents, centre 1      255          11            5             ?
Adolescents, centre 2      140          2             1             ?

a Applicants could choose from two dates.

Discussion

A limited number of children and adolescents participated in the focus groups or the online forums. It proved very difficult to recruit young patients, despite sending the invitations and reminders through the rehabilitation centres and mentioning the gift certificate (which some participants did mention as an extra reason to participate). It should be noted that our issues regarding response rates seem not uncommon for qualitative studies involving children (Goodenough et al., 2003; Kendall et al., 2003; Siebes et al., 2007).


First, … although this would include only a part of the target population, i.e. the most severely disabled children and adolescents.

Second, the number of treatments was very limited in the summer period; it was not possible to recruit participants through the employees of the rehabilitation centres. Being asked to participate by their own physician or physiotherapist might have improved the participation rate of young patients.

There are also some possible reasons at a personal level. First of all, the letters were sent to the parents, and some of them may have objected to their child participating. Also, it is possible that a number of young patients did not feel the research was relevant to them, for instance if their rehabilitation treatment had ended, or if they did not regularly visit the centre. On the other hand, it may also be that rehabilitation treatment is highly relevant to patients, but that they are reluctant to reflect on it, as was also suggested by Siebes and colleagues (2007). This may be because of the profound impact of their health problem on their daily life. Especially for children, it may be quite a big step to visit the centre on a day off to talk to strangers about their rehabilitation.

We expected the online forums to be an effective way to involve adolescents in particular, as was found in previous research (Zwaanswijk et al., 2007; Tates et al., 2009). Tates et al. obtained a 23% response rate from paediatric cancer patients (8–17 years old). Despite following the exact same research methodology, the response to our online forums was much lower (2%); notwithstanding their practical advantages, the forums actually attracted fewer participants than the focus group meetings. A possible explanation is that participants in the Tates et al. study were more involved in their healthcare, cancer being such a serious and life-threatening illness. In rehabilitation care, patients are treated for a wide variety of illnesses and health problems, ranging from minor defects to complex trauma. Of course, in case of the latter, rehabilitation care also has tremendous consequences for the life of a child, so this explanation does not apply to all patients.


The setting of a focus group meeting suited most of the participants, but seemed a bit awkward for some of the children. However, the WESP strategy of letting participants interview each other did lead to active participation in the meetings. Letting children talk about their ideas and experiences during activities seems an appropriate and useful strategy. For more complete data collection, it could be considered to audiotape (or even videotape) the meetings, in addition to the written answers of the participants. These recordings could help to identify subjects mentioned by the participants that were not written down, but also more spontaneous remarks made by the children.

Also, it may have been useful to organize multiple sessions for each group. In this way, participants get to know each other and the researchers better, which will probably let them open up more.

Another suggestion is to perform individual interviews with the youngest patients (aged 11 years and younger). This would avoid the excitement of a group meeting and may make the children feel more at ease in discussing all subjects important to them. Also, this would give the researcher the opportunity to explore answers in depth. During the current meetings, this was not possible. Furthermore, it may be considered to interview children at a location of their own choice, for instance at home. The rehabilitation centre was a familiar location for most participants, but it may have raised negative associations for some of them.

With respect to the same-sex focus groups for adolescents, it is difficult to judge whether the usability of focus group results would benefit from mixed focus groups. On the downside, mixing might dampen the enthusiasm shown by the girls or increase the bragging by the boys. It should be noted, though, that the latter concerned an exceptionally young adolescents’ focus group (i.e. all three participants were 12 years old). On the upside, mixing might lead to more balanced results, and most participants stated they had no preference regarding the gender composition of the group.

Another reason for the same-sex groups was to make the participants feel comfortable enough to maybe even discuss the way their rehabilitation care affected relationships and sexuality (Wiegerink et al., 2006; 2011). This subject was not mentioned by any of the participants, however. The relevance of sensitive subjects, such as sexuality and social exclusion, could be investigated more thoroughly, perhaps by using a different strategy such as interviews (De Graaf and Rademakers, 2011).

Practical implications


The experiences from this study may benefit future focus group research among young patients, in rehabilitation care as well as in other healthcare disciplines. First, sufficient attention should be paid to maximizing participation rates, for instance regarding planning, location and arousing the interest of young patients. Second, a single meeting is probably too short to create a sufficiently safe environment for young patients to voice their opinions, especially for children. Repeated meetings may provide them with more confidence and enable the researchers to explore topics more thoroughly. Also, individual interviews could be considered, in addition to focus groups.

In the current patient groups, the use of online focus groups did not prove its value.

Conclusion


Appendix: Questions for interviews in focus group meetings and online focus groups

Focus group meetings

Exploration

What is a rehabilitation centre? What people are there?

What are you doing in a rehabilitation centre? Of what use is a rehabilitation centre to a child?

At the beginning, what did you expect from visiting the rehabilitation centre? Did that come true? What did, what did not? Why was that so?

How could children be helped if there was no rehabilitation centre?

Feeling

What does a child feel like, visiting the rehabilitation centre for the first time? What did you feel like, then?

And how do you feel at the moment?

What helps if you feel bad when you are at the centre? What makes you feel comfortable when you are at the centre?

Opinion

What do you like about the rehabilitation centre? What do you like best?

What do you like least?

What would you want to take home with you? What would you remove?

What do you think of the building?

What do you think of the people who work there? What do you think of rehabilitating itself?

What do you think of the contact with other children?

Advice

What would the rehabilitation centre look like, if you were the boss?

How can the people at the centre make sure that children/adolescents are as comfortable as possible?

How can your parents make sure that you are as comfortable as possible? What can the other children/kids do?

What would you change about the building?


Online focus groups

Tuesday: What do you think about the rehabilitation centre? What do you like best and what do you like least?

Wednesday: Imagine you get to choose a rehabilitation centre to visit for your treatment. What would you want to know of each centre before you made your choice?

Thursday: What helps if you feel bad when you are at the centre?

Friday: How can the people at the centre make sure that children/adolescents are as comfortable as possible?

Saturday: What would you change about the rehabilitation centre, if you were the boss? (This may be anything, for instance something about the building, about the people who work there, your own treatment…)

Sunday: No new statement/question


Chapter 3: Measuring quality of dermatological care from patients’ perspective 41

3

Consumer Quality Index Chronic Skin Diseases (CQI-CSD)

A new instrument to measure quality of care from patients' perspective

This article was submitted as:



Introduction

Chronic skin diseases, such as psoriasis, atopic dermatitis, and hidradenitis suppurativa, have a relatively strong, negative impact on patients’ physical, psychological and social functioning, and well-being (Rapp et al., 1999; Ongaene et al., 2006; Wolkenstein et al., 2007; Hong et al., 2008), i.e. patients’ health-related quality of life (HRQoL) (WHOQOL Group, 1993). Dermatological treatment may result in temporary suppression or remission of symptoms, but chronic skin diseases cannot be cured. Therefore, patients with a chronic skin disease require prolonged use of dermatological care. Needless to say, high quality of dermatological care is of paramount importance (Kirsner and Federman, 1997). To achieve a high standard of quality of care, patient-centred care is increasingly advocated (Groene, 2011). In addition to indicators based on expert consensus and clinical measures (Renzi et al., 2001; Augustin et al., 2008; 2011), patient satisfaction is considered to be a relevant indicator to measure quality of care from patients' perspective (Williams, 1994; Van Campen et al., 1995; Kirsner and Federman, 1997; Leung et al., 2009). Concerning psoriasis, patient surveys in the U.S.A. and in Europe (Krueger et al., 2001; Stern et al., 2004; Nijsten et al., 2005; Christophers et al., 2006; Dubertret et al., 2006; Wu et al., 2007; Augustin et al., 2008; Ragnarson et al., 2012; Van Cranenburgh et al., 2013a) have suggested that patients are dissatisfied with the management of their psoriasis, despite national and international treatment guidelines (Nast et al., 2007; Pathirana et al., 2009; Zweegers et al., 2011). Dissatisfaction can lead to poor adherence and consequently suboptimal health outcomes (Finlay and Ortonne, 2004; Renzi et al., 2011; Barbosa et al., 2012), whereas higher satisfaction is found to improve HRQoL (Renzi et al., 2005).


In the Netherlands, a widely used family of instruments for measuring patient experiences in healthcare is the Consumer Quality Index (CQ-index or CQI) (Koopman et al., 2011). A CQ-index may concern a general level (e.g. CQI Healthcare and Insurances), a sector in healthcare (e.g. CQI Physiotherapy), a specific disease (e.g. CQI Diabetes), or a specific treatment (e.g. CQI Hip and Knee Replacement). A CQ-index consists of two questionnaires: one to assess patient experiences with respect to relevant quality aspects (CQI Experience) and one to measure the importance patients attach to these aspects (CQI Importance). We developed an Experience and an Importance questionnaire regarding chronic skin disease care: the CQ-index Chronic Skin Disease (CQI-CSD). This new instrument is intended to provide reliable information about patient experiences with dermatological care and to reveal differences between hospitals based on patient experiences.

The aims of this cross-sectional study were:

1. to evaluate the dimensional structure of the CQI-CSD;

2. to assess its ability to distinguish between hospitals according to patients’ experiences with quality of care;

3. to explore patient experiences with dermatological care and priorities for quality improvement according to patients; and

4. to optimize the questionnaire based on psychometric results and input of stakeholders.

Materials and Methods

Measurements

Questionnaire development

In concordance with CQI protocols (Koopman et al., 2011), the CQI-CSD was constructed in cooperation with various stakeholders: dermatologists, nurses, skin therapists, psychologists specialised in dermatology, representatives of patient organizations and representatives of health insurance companies. Based on the literature and two focus group discussions with 13 patients, we constructed a pilot version of the CQI-CSD: the CQI-CSD Experience and the CQI-CSD Importance. The development of this pilot version is described in more detail elsewhere (Van Cranenburgh et al., 2013b).

CQI-CSD Experience


Most items of the CQI-CSD Experience were formulated as an ‘experience’ item (answer categories ‘Never/Sometimes/Usually/Always’), 2 as a ‘problem’ item (‘Not a problem/A small problem/A big problem’) and 5 as a ‘global rating’ item (‘0-10’ or ‘Definitely not/Probably not/Probably/Definitely’). Examples of items are included in Table 3.2.

The remaining 21 items were five skip-items to screen respondents’ eligibility to answer specific items, 15 items on patients’ background characteristics, and one item on questionnaire improvement. The questionnaire comprised the following sections: Healthcare provided by general practitioner, Accessibility of hospital, Waiting times, Hospital facilities, Information about care process, Healthcare provided by physician, Healthcare provided by nurses, Cooperation of healthcare providers, Information provision by healthcare providers, Patient participation, Safety, Global rating of hospital, Skin complaints, About the respondent.

CQI-CSD Importance

For each experience/problem item in the CQI-CSD Experience, a corresponding importance item was formulated. Quality aspects represented more than once, such as the conduct of the dermatologist and of the nurse, were converted into a single item, e.g. ‘How important is it to you that healthcare providers treat you with respect?’ (1=‘Not important at all’ to 4=‘Extremely important’). The CQI-CSD Importance consisted of 48 items.

Subjects and data collection

Three health insurance companies randomly selected 5,647 patients in 20 hospitals for whom costs of dermatological care had been claimed between September 2011 and September 2012, based on previously identified declaration codes. These codes differentiate between diagnostic groups but cannot distinguish between chronic and acute skin diseases. Inclusion criteria were: 1) one or more (self-reported) diagnoses of a chronic skin disease, 2) healthcare received for this diagnosis during the past 12 months, and 3) age 18 years or older. We purposely selected the 20 hospitals with the highest volumes of patients meeting our inclusion criteria, covering both academic and peripheral hospitals in various regions of the Netherlands. We aimed to invite approximately 300 patients per hospital, based on the recommendation to invite at least 200 patients per hospital (Koopman et al., 2011) and our expectation that, owing to our sampling strategy, a proportion of patients would not meet our inclusion criteria (i.e. would have no chronic skin disease).
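The per-hospital sampling step described above can be sketched as follows. This is an illustration only: the hospital names and candidate-list sizes are invented, and the study's actual selection was performed by the health insurance companies.

```python
import random

random.seed(42)

# Hypothetical candidate lists of patient IDs per hospital, built from claims data
candidates = {f"hospital_{i}": list(range(1000 + 50 * i)) for i in range(20)}

# Draw roughly 300 patients per hospital, without replacement
sample = {
    name: random.sample(patients, min(300, len(patients)))
    for name, patients in candidates.items()
}

total_invited = sum(len(s) for s in sample.values())
print(total_invited)
```

Sampling without replacement per hospital keeps the invited group fixed at the target size while preventing duplicate invitations within a hospital.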

Reminders were sent to non-respondents. The second reminder included a paper version of the questionnaire and a prepaid return envelope.

We randomly invited a subset of patients (one out of four) to complete the CQI-CSD Importance online immediately after they completed the CQI-CSD Experience online. We aimed to attain at least 150 completed CQI-CSD Importance questionnaires, as this number was assumed to provide sufficient information on importance at an aggregated level.

The study was conducted according to the Declaration of Helsinki Principles of 1983. The study was exempt from ethical approval, as research by means of once-only surveys that are not intrusive for patients is not subject to the Dutch Medical Research Involving Human Subjects Act.

Statistical analyses

Analyses were performed in SPSS 19.0 and MLwiN 2.02, at a significance level of 0.05. Each analysis was restricted to patients with complete data on the variables involved. First, Chi-square tests were performed to examine whether respondents differed from non-respondents in gender, age or diagnosis.
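A Chi-square test of independence of the kind used here (the study itself used SPSS) can be sketched with `scipy`. The respondent/non-respondent counts below are invented for illustration, not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = respondents / non-respondents,
# columns = male / female
table = [[450, 620],
         [380, 410]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value below 0.05 would indicate that the gender distribution
# differs between respondents and non-respondents.
```

`chi2_contingency` computes the expected cell counts from the marginal totals and, for 2x2 tables, applies Yates' continuity correction by default.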

CQI-CSD Experience: dimensional structure

We performed Principal Component Analyses with oblique rotation, given the expected correlations between factors, after checking that the following criteria were met: 1) a Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) >0.60, and 2) a significant Bartlett's test of sphericity. These criteria were not met when analysing all items simultaneously; therefore, we performed analyses for each questionnaire section separately. The number of factors was determined by Kaiser's criterion (eigenvalue >1) (Kaiser, 1960) and scree plots. Items had to have a factor loading of 0.3 or higher to be assigned to a factor (Floyd and Widaman, 1995).
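The suitability checks (KMO, Bartlett's test of sphericity) and the Kaiser retention rule can be sketched in NumPy on simulated item scores; the study's analyses were run in SPSS, and all data below are synthetic (one simulated underlying factor behind six items).

```python
import numpy as np

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for a correlation matrix R."""
    inv = np.linalg.inv(R)
    # Partial correlations follow from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(R.shape[0], dtype=bool)
    return np.sum(R[off] ** 2) / (np.sum(R[off] ** 2) + np.sum(partial[off] ** 2))

def bartlett_sphericity(R, n):
    """Bartlett's test that R is an identity matrix; returns (chi2, dof)."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return chi2, dof

rng = np.random.default_rng(0)
# 200 simulated respondents, 6 items sharing one underlying factor
factor = rng.normal(size=(200, 1))
items = factor + 0.8 * rng.normal(size=(200, 6))
R = np.corrcoef(items, rowvar=False)

print(f"KMO = {kmo(R):.2f}")  # exceeds the 0.60 criterion for this simulation
chi2, dof = bartlett_sphericity(R, n=200)
print(f"Bartlett chi2 = {chi2:.1f}, dof = {dof:.0f}")
eigenvalues = np.linalg.eigvalsh(R)
print("Factors retained (Kaiser, eigenvalue > 1):", np.sum(eigenvalues > 1))
```

With a single common factor behind the items, only one eigenvalue of the correlation matrix exceeds 1, so Kaiser's criterion retains one factor, matching the structure the simulation built in.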

To evaluate the reliability of each scale, we calculated Cronbach's α, accepting α≥0.60 according to the criteria of Cohen (Hammond, 1995). To gain insight into the multidimensionality of the questionnaire, we calculated inter-scale correlations; Pearson correlations <0.70 indicate that the constructed factors can be regarded as measuring separate constructs (Carey and Seibert, 1993).
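The reliability and inter-scale checks can be illustrated with a small NumPy sketch: Cronbach's α per scale (accepted at ≥0.60) and the Pearson correlation between scale scores (<0.70 taken to indicate separate constructs). The data are simulated; the study itself used SPSS.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))
# Scale A: four items driven by one trait (internally consistent);
# Scale B: four items driven by an independent trait
scale_a = trait + 0.7 * rng.normal(size=(300, 4))
scale_b = rng.normal(size=(300, 1)) + 0.7 * rng.normal(size=(300, 4))

alpha_a = cronbach_alpha(scale_a)  # well above the 0.60 criterion here
r = np.corrcoef(scale_a.mean(axis=1), scale_b.mean(axis=1))[0, 1]
print(f"alpha = {alpha_a:.2f}, inter-scale r = {r:.2f}")
```

Because the two simulated scales share no common trait, their inter-scale correlation stays far below 0.70, which under the cited criterion would mark them as separate constructs.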

CQI-CSD Experience: discriminative power
