Performance measures: outcome and output measures for the Community Health Centre and Dental Clinic of Victoria Cool Aid Society


PERFORMANCE MEASURES

OUTCOME AND OUTPUT MEASURES FOR THE COMMUNITY HEALTH CENTRE AND DENTAL CLINIC OF VICTORIA COOL AID SOCIETY

February 25, 2003


TABLE OF CONTENTS

0. Executive Summary
1. Introduction
   • Purpose
   • Overview
2. Background
3. Historical Context of Performance Measures
4. Literature Review
   • Defining Performance Measures
     o Inputs
     o Activities
     o Outputs
     o Outcomes
   • Problems and Strategies in Performance Measures
     o Challenges
     o Strategies
   • Medical Field
     o Stakeholder Consultation
     o Time Factor and Sample Size
     o Value of Medical Procedures: Outputs vs Outcomes
     o Established Health Outcome Indicators
   • Guidelines for Establishing Performance Measures
     o Clear Mission and Goals
     o Define Objectives
     o Develop Measures
     o Stakeholder Consultation
     o Measures and Objectives
5. Methodology
   • Research Design
   • Research Approach
   • Interviews
   • Feedback
   • Methods
     o Limitations
6. Findings
   • Themes from Interviews
     o Contractual Agreement
     o Data Collection
     o Methods of Measuring Performance
7. Discussion
   • Purpose
     o Efficiency
     o Customer Satisfaction
     o Continuity
     o Effectiveness
   • Tools Available for Measuring Dimensions
   • Time Consideration for Measures
8. Performance Measures for CHC, Dental Clinic and HIV Patients
   • Community Health Centre
   • Dental Clinic
   • HIV Patients
9. Recommendations
   • Implement the Performance Measures
     o Formulate a Database for Tracking Changes
     o Develop Baselines and Benchmarks
     o Set a Time Line for Measures
     o Appoint an Administrator
   • Review and Evaluate Existing Mission
   • Make Use of Performance Measures
   • Concluding Remarks
Bibliography

List of Appendices
Appendix A: VCAS Holistic Approach to Client Care
Appendix B: Contract of Measures between CHC and VIHA
Appendix C: Sample Health Indicator Survey

List of Figures
Figure I: Program Performance Measure Model
Figure II: Logic Model of If/Then Statements in Health Sector
Figure III: Conceptual Framework for Logic Model

List of Tables
Table 1: Performance Measurement Logic Model for CHC
Table 2: Performance Measurement Logic Model for Dental Clinic

EXECUTIVE SUMMARY

OVERVIEW OF CLIENT

Socially excluded, homeless and street-involved individuals frequenting Victoria's downtown core can find refuge through a number of programs offered by the Victoria Cool Aid Society (VCAS), a social service agency with a long-standing presence in Victoria. Two programs offered by VCAS that encompass a multi-disciplinary approach to help these individuals are the Community Health Centre (CHC) and Dental Clinic.

The overall goal of the CHC and Dental Clinic is to provide a continuum of interdisciplinary treatment and primary care for complex, multi-diagnosed patients, including those with HIV, hepatitis C, mental health problems and complex medical disorders. Service providers are involved in harm reduction, taking a holistic health care team approach to those who have no other options. The focus is on better physical and mental health, as well as a connection to social stability and security. The programs were developed to help the most vulnerable in society by creating a range of immediate to long-term services that build hope, lives and community. Services encompass a multi-disciplinary approach to health care. In addition to treating patients, physicians teach staff about mental illness and develop guidelines governing treatment principles. CHC advocates for people who face homelessness, poverty, stress and other crises by working closely with other downtown agencies to provide a holistic, client-centered approach to treatment. The targeted client base of the CHC and Dental Clinic is homeless people, transients and refugees living in or frequenting Victoria's downtown core, who may or may not have medical coverage and who either already receive service from VCAS or want the services provided.

OBJECTIVES

Although CHC has kept records of outputs over the past number of years, it does not measure service outcomes. For the current fiscal year, Vancouver Island Health Authority (VIHA), one of the major funders of VCAS, is requiring performance measures as part of its contractual agreement. This research project has three goals: to satisfy VIHA on quality dimensions established by the contract between VIHA and VCAS; to provide VCAS Board of Directors with a clearer understanding of service outcomes; and to emphasize the services provided to patients who are HIV positive. The end result will be the establishment of current outcome and output measures for services provided by CHC and the Dental Clinic to satisfy the three goals.

METHOD

The research involved primary data collection through personal interviews and secondary data collection through CHC documents and existing computer reporting systems. The analysis was qualitative in nature. Approximately 25 interviews were conducted with CHC, VIHA, other health clinics and governing bodies to get their perspectives on the definition of the quality dimensions; to find out what kinds of outcomes/outputs are considered satisfactory; and to determine the best approach for collecting the data to be used in developing measures. There were five types of interviews: introductory, practical information gathering, opinion seeking, exploratory and feedback. Most interviews were conducted in person; a few were conducted by telephone or through written responses.

Introductory interviews involved meeting with administrators of CHC and VIHA to identify their views on the required performance measures. Practical information gathering interviews were used to discover how data was assembled; staff were asked to show how data was assembled and provided copies of forms and other data collection instruments. Staff opinions were sought on the best way to develop performance measures in terms of how to gather data, from whom, and on what subjects. Exploratory interviews were undertaken in combination with the literature research, which determined that no performance measures were available based on clinics comparable to CHC. Working copies of the research project were provided periodically to CHC and VIHA for feedback.

All interviews consisted of open-ended questions, which varied depending on the respondent's employment position and the type of interview. Each interview was preceded by an email, a telephone call, or both, describing the nature of the interview and the information the researcher wished to gather. A list of what would be covered was provided before each interview so that respondents had time to assemble information and ideas. Furthermore, all respondents were given the opportunity to contact the researcher at a later date with additional information or clarifications that might be helpful in establishing performance measures.


Respondents consisted of a non-random sample of the executive directors, administration from VCAS, doctors, computer personnel, and other staff, plus managers from VIHA, James Bay Community Health Clinic, the Ministry of Health Services and Reach Outreach services in Vancouver.

Information from interviews was used for three purposes. The first was to establish the structure for the project in terms of CHC expectations, the second was to develop a conceptual framework and the third was to develop performance measures.

RECOMMENDATIONS

To establish current outcome and output measures for services provided by CHC and the Dental Clinic for the 2002/2003 year, and to develop a base for performance measures in subsequent years, there are three areas of recommendations. The first recommends a strategy for implementing the performance measures designed specifically for CHC, the Dental Clinic and HIV positive patients, which entails re-formatting the database so that it is compatible with the performance measures, developing baselines and benchmarks, setting a time line for implementation, and appointing an administrator responsible for performance measures.

The second recommends that CHC review and re-evaluate its existing mission and goals to formulate a clear, coherent mission, strategies and objectives. It entails re-visiting the mission to ensure that it accurately reflects the mandate, goals and quality dimensions. The third is to extend the use of performance measures beyond the purposes of the contract. The requirement for performance measures was initiated by VIHA to satisfy its need for accountability; however, an ensuing result of performance measurement will be improved program services.


1. INTRODUCTION

Socially excluded, homeless and street-involved individuals frequenting Victoria's downtown core can find refuge through a number of programs offered by the Victoria Cool Aid Society (VCAS), a social service agency with a long-standing presence in Victoria. Two programs offered by VCAS that encompass a multi-disciplinary approach to help these individuals are the Community Health Centre (CHC) and Dental Clinic.

One of the five BC regional health authorities funded by the Ministry of Health Services, Vancouver Island Health Authority (VIHA), is a principal funder of these programs. VIHA is requiring performance measurements, with an emphasis on outcome measures as part of the contractual agreement with CHC. Requirements for outcome measures that demonstrate the value of the services provided are becoming the norm for most publicly funded agencies. This norm began a number of years ago in BC with discussions that eventually led to the Budget Transparency and Accountability Act (2001), but has been intensified by the current provincial government initiative to ensure that programs provide valuable services that are sustainable. In the spirit of providing efficient, effective, accessible, responsive and sustainable services, VIHA and the VCAS Board of Directors require outcome and output information on a number of criteria including seven quality dimensions: access; efficiency; customer satisfaction; appropriateness; continuity; risk management; and effectiveness.

Although CHC has kept records of outputs over the past number of years, it does not measure service outcomes. For the current fiscal year, VIHA is requiring outcomes as part of its contractual agreement. Since March 2002, CHC has been working with VIHA to develop satisfactory quality dimensions, outcomes, outputs and measures. As a result, a number of these elements are in various stages of development, and some may satisfy the objectives of VIHA, although it is not clear at this point which, if any, are satisfactory. CHC and the Dental Clinic provide services to a number of HIV positive patients under unique circumstances. Establishing separate outcomes, outputs and measures for these services will be instrumental in elucidating the extent of these services.

PURPOSE

This project will research and create measures that reflect the outcomes and outputs of patient services provided by CHC and the Dental Clinic. The measures have three goals:

• To satisfy VIHA on quality dimensions established by the contract between VIHA and VCAS.

• To provide the VCAS Board of Directors with a clearer understanding of service outcomes.

• To emphasize the services provided to patients who are HIV positive.

To achieve these goals, the quality dimensions will be reviewed for clarity and redefined or modified if necessary. Ideally, outcomes will be established for each quality dimension, but where this is not feasible, outputs will be established instead. Detailed information relating to the outcomes and outputs of services provided for patients who are HIV positive will be provided. The end result of this project will be the establishment of current outcome and output measures for services provided by CHC and the Dental Clinic to a level satisfactory to VIHA and the VCAS Board.

OVERVIEW

This project is divided into subsections for clarity and ease of reference. The Background Section provides a brief synopsis of VCAS. The Historical Context of Performance Measurement Section illustrates the circumstances that brought about the need for performance measurement within CHC. The Literature Review Section begins by defining performance measures, then discusses the problems and strategies in establishing performance measures, and examines what other health care centres have learned from attempts to establish performance measures. It incorporates practical guidelines presented by experts in the field and ends with suggested guidelines for establishing performance measures.

The Methodology Section explains the processes involved in researching and preparing this project and discusses the limitations of the research. The Findings Section relates the research project's objectives to the findings from the interviews. The Discussion Section proposes the quality dimensions that need to be measured and presents the instruments available for measuring them. The Performance Measures Section establishes measures and logic models for each of the three programs: CHC, the Dental Clinic and HIV patients. It further provides an implementation strategy that CHC can use for optimal benefit from the measures set out in the logic models. The Recommendations Section sets out steps to implement effective performance measures and gives specific recommendations and a timeline for the three programs.


2. BACKGROUND

In 1970, a group of volunteer physicians dedicated to providing medical care to people without coverage, including transients, those living in communes and travelling university students, established a free clinic with funding from the provincial government and VCAS. The patients were generally young and healthy, with minor medical problems. Over time much has changed, and the medical clinic was replaced by CHC, which has an expanded mandate and mission (VCAS webpage, 2002).

CHC now employs the full-time equivalent of 20 staff, which consists of:

• 4 doctors
• 3 nurse practitioners
• 1.5 social worker/drug and alcohol counsellors
• 1 pharmacist
• 1.5 pharmacist assistants
• 3 medical office assistants
• 1 acupuncturist
• 1 dentist
• 2 dental assistants
• 1 hygienist
• 1 dental program coordinator

In addition, there are a number of visiting specialists and nurses who attend CHC periodically. The medical staff are particularly skilled in treating clients with mental illness, dual diagnosis, HIV infection, chronic illness, and substance abuse behaviours (Queenswood, 2002).

Funding now comes primarily from two sources, VIHA and the Ministry of Health Alternative Payments Branch, and totals about $1.7 million per year. Provincial funding to develop the clinic into a comprehensive community health centre was received in 2001, with a mandate to provide a continuum of interdisciplinary treatment and palliative health care services using a holistic health care team approach. The focus is on better physical and mental health, as well as a connection to social stability and security. The programs are developed to help the most vulnerable in society, and services encompass a multi-disciplinary approach to health care. In addition to treating patients, physicians teach staff about mental illness and develop guidelines governing treatment principles. CHC advocates for people who face homelessness, poverty, stress and other crises and works closely with other downtown agencies to provide a holistic client-centered approach to treatment (VCAS Contract, 2002). The mandate of CHC is to provide a continuum of health care to people who have no other options, and its contract specifies that the clientele are "individuals living in or frequenting Victoria's downtown core who may or may not have medical insurance coverage. This includes transients, refugees and people with complex multiple health issues, chronic and communicable diseases" (VIHA Contract, 2002). Patients make heavy use of other VCAS programs, particularly the shelter, housing and outreach social services (Queenswood, 2002; VCAS webpage, 2002).


CHC provides services 50 hours per week, maximizing delivery by serving clients in their homes, on city streets and in other community service agencies, at times convenient for clients (VCAS Contract, 2002). Approximately 2,500 patients receive a total of about 12,000 treatments per year.

As part of the continuum of services, in 2002 VIHA provided funding for CHC to establish a dental clinic to serve the same targeted client base, so that those without other means could be assured of receiving adequate dental care. The Dental Clinic, along with CHC, is located in downtown Victoria, in the same VCAS-owned building as the emergency shelter and the social housing services. The diagram in Appendix A illustrates how, once a client is introduced to VCAS, there is a continuum of holistic treatment available through a number of services provided directly by VCAS and through referral to other services in the community.

The targeted client base1 includes many who have a dual diagnosis with chronic medical conditions. Less than two years ago, CHC began offering services specifically for HIV positive individuals, who need the special care and attention demanded by the unique circumstances of their condition. There were initially 40 such patients, but the number has more than doubled to over 100.

The current contract between VIHA and CHC requires the development of performance measures. Although CHC has kept records of outputs over the past number of years, it has made no coordinated effort to measure outputs against inputs and does not measure service outcomes. Establishing separate outcomes/outputs and measures for these services will be instrumental in clarifying the extent of these services.

1 The targeted client base is all those living in or frequenting downtown Victoria who need medical services, including those who are transient and who do not have medical coverage. For ease of reading this group will be called clients, regardless of whether or not they are actual patients of CHC.


3. HISTORICAL CONTEXT OF PERFORMANCE MEASUREMENT

Internationally, the movement towards performance measurement arose from an increased demand for public scrutiny. The most prevalent reason for measuring performance is that citizens are continually demanding more responsive, accountable and competitive services. Requirements for outcome measures that illustrate the value of the services provided are becoming the norm worldwide for most publicly funded agencies. Consumers of services, and the volunteers who provide them, want to know that the programs to which they devote their time really make a difference. Although accountability initiated the performance measurement movement, the ensuing improvement in program services is responsible for the movement's growth (United Way, 2002; Jones, 2000).

Concerted efforts to improve government performance and enhance public confidence through better planning and reporting have been under way in many western countries. In the mid-1970s it was a common belief among the American general public that the administration of government was synonymous with excessive waste, inefficient use of resources and ineffective outcomes (Ammons, 1995). In response to this belief, American bureaucrats sought ways to become more effective while demonstrating their improved performance to the public. The success of individual governments that had adopted performance measurement, coupled with repeated and persistent calls from the public and politicians for improvements in both performance and accountability reporting, prompted the United States Congress to enact the Government Performance and Results Act (1993), which requires federal agencies to identify goals and to report their results in achieving these goals (United States Accounting Office, 1997). Implementation expectations include making government more results oriented; increasing public awareness of the efficiency of state government programs; facilitating informed decision-making on the allocation of state resources; and increasing accessibility to information on state government programs.

Non-profit groups became interested in performance measurement when government and other benefactors made their revenue supply contingent upon performance. Because non-profit service providers compete with each other for the limited resources from the major funding agencies, those who demonstrate effective program results have an advantage over those who do not.

British Columbia responded to demands for public scrutiny with a strong internal policy that involved the creation of internally generated accountability demands, directives and initiatives aimed at enhancing public sector performance. Accountability demands conveyed the notion that public sector institutions and the decision-makers who lead them must be held responsible for what they do and be answerable to the public. As the public becomes increasingly sensitive and vocal about the cost of public sector services and programs, accountability demands are intensified and broadened into a wide range of demands for clear evidence of quality, efficiency and effectiveness. In its publication entitled Enhancing Accountability for Performance in the British Columbia Public Sector (1995), the Government gave direction to its staff to develop an implementation strategy for all departments. The Auditor General (1993) stated that "if informed choices in regard to the maintenance or rationalization of government programs are to be made, elected officials and the public at large must have access to reports which reflect the true performance of programs…the information which indicates how well managed and how effective programs are." This statement was made at the outset of an escalating movement that includes the Budget Transparency and Accountability Act (2001) and the Balanced Budget Act (2001). The movement has been intensified by the current provincial government initiative to ensure that value is received and services are sustainable. Each ministry is responsible for reporting its performance. For instance, the Ministry of Health Services provides medical care to the province through contracts with five regional authorities and must establish performance-based measures. In turn, each health authority must provide performance indicators (Performance Agreement, 2002). Similarly, agencies receiving funding from VIHA must likewise prepare performance measures (VIHA Contract, 2002). CHC receives funding from VIHA; consequently, an annual report of service performance is required. The 2002/2003 year marks the initiation of performance measure requirements.

Seven quality dimensions were identified in the VIHA Contract (2002) as key to effectively measuring performance: access; efficiency; customer satisfaction; appropriateness; continuity; risk management; and effectiveness. Objectives, outcomes and measures were itemized to correspond with the quality dimensions, as found in Appendix B. Although part of the contract, the elements of Appendix B are not a fait accompli; rather, VIHA had been working with CHC for several months to develop satisfactory elements that would measure service performance. In the end, CHC decided to hire a researcher to develop performance measures. Instructions to the researcher were that the measures should be based on the elements presented in Appendix B, with an emphasis on outcomes that are clear, feasible and easy to implement and understand. In addition, the researcher should try to provide measures that show how CHC saves VIHA money through reduced admissions to the emergency wards of local hospitals and a reduced number of days clients spend in hospital. Services to HIV positive patients should be measured separately, with more emphasis on medical outcomes. The researcher will present a draft project to CHC and VIHA for feedback that will be used in the final project. The next section is a comprehensive review of the literature in terms of what experts in the field say about performance measures, with particular attention to medical outcomes and guidelines for establishing performance measures for non-profit health service providers.


4. LITERATURE REVIEW

To understand how performance measurement works and to gain some insight into what is being done with outcome/output measures in the health field, this section starts by defining performance measures, followed by a discussion of the limitations of performance measurement and the strategies used to compensate for them. It then examines what other health care centres have done, with particular emphasis on the lessons they learned from first attempts to establish performance measurement. Finally, guidelines for establishing performance measurements are presented.

DEFINING PERFORMANCE MEASURES

Performance measurement is the regular, on-going collection of specific information regarding the results of services; it indicates the kind of job being done and includes the effect the services have on a community. Together with benchmarking and continuous improvement, performance measurement forms the nucleus of managing for results (Fairfax, 2002; Ammons, 1999). The Auditor General of BC (2001) defines performance measurement as a way of assessing aspects of an organization's performance. It reduces uncertainty as to whether a program has met its objectives and measures the extent to which intended results have been achieved. Feedback from performance measures is generally used for two purposes: making changes in the way things are done to produce more desirable outcomes, and reporting results to stakeholders (McDavid, 2002).


Performance measures are the specific data used in performance measurement. They measure inputs, activities, outputs and outcomes (United Way, 1996).

Inputs

Resources used in the process of providing services are the inputs of the service. They include expenses, salaries, facilities, volunteer time, equipment and supplies (United Way, 1996).

Activities

Activities, or processes, are what an organization does with its inputs to fulfil its mission. In other words, they are the courses of action through which the program uses its resources, such as education, counselling and therapy (Legowski, 1999; United Way, 1996).

Outputs

Outputs are the direct products of program activities. Activity indicators measure quantifiable outputs and the work associated with delivering a service, such as the number of patients treated or the number of applications processed. They measure the quantity of services delivered, such as the number of patients treated and discharged, but rarely provide any useful assessment of the quality of service delivery. For instance, indicators such as patient contacts, the number of counselling sessions and the number of professional staff do not address whether there were any health impacts on the medical conditions of the patient. The number of treatments does not necessarily mean improved health or physical well-being (Fairfax County, 2002; United Way, 1996).

Outcomes

Effectiveness indicators measure outcomes by evaluating the extent to which objectives have been achieved. They focus on areas that are difficult to measure, such as changes in behaviour, and attempt to show that the program caused the change and to what extent behaviour was changed. Outcomes are the impact of the program on the client or the environment, such as the number of discharged patients who are capable of living independently (McAdams, 2001). Measuring the effectiveness and quality of outcomes usually requires sophisticated research tools for continual monitoring. Such measures are often developed through an iterative process, take time, are limited by the ability to control environmental influences, and are resource intensive (Ryan, 2000). The emphasis is increasingly on outcomes that systematically include indicators of results, efficiency and effectiveness and that answer three questions: What was achieved? How cost effective was the work done? How were stakeholders helped by the efforts? (Fairfax County, 2002; Ryan, 2000; United Way, 1996).

Outcome measures are concerned with immediate outcomes during service delivery and long-term outcomes that follow service delivery. Immediate outcomes are usually not ends in themselves but are expected to lead to the desired ends. Long-term outcomes frequently take years to measure and require clearly established baselines and benchmarks. Although outcome measures may also include simple numerical counts, often associated with standardized measures developed and validated for specific client populations, these simpler measures are usually not sufficient in themselves. In many programs, a progression of outcomes occurs until the most desirable, ultimate outcomes are achieved. The ultimate outcomes are those results that reflect the organization's mission and directly relate to its effectiveness (Cutt, 1995; Ryan, 2000; United Way, 1996).

Measuring outcomes is an evolving process that requires several iterations before valid, reliable and useful performance measures are established (United States Accounting Office, 1997). Management consultants Kevin Nagel and James Cutt (1995) customized a strategic plan for the then Ministry of Transportation and Highways that was adapted from several existing plans in other jurisdictions. The plan suggests that successful performance measurement requires long-range planning of about three years to develop and implement a comprehensive set of performance indicators (Jones, 2000; Kravchuk, 1996). Key to success is selecting appropriate outcome measures and analyzing their results (Nagel, 1995).

When measuring before-after differences for either outputs or outcomes, attention should be paid to whether the observed changes and trends are consistent with the intended effects. Care should be taken to ascertain whether changes are due to the service or to some other factor (McDavid, 2002; Sinclair, 2001). Figure I illustrates the flow of the elements of performance measures.


FIGURE I PROGRAM PERFORMANCE MEASURE MODEL

(Adapted from United Way, 1996)

In summary, performance measures consider the inputs, which are the resources used by the program, and the activities, which are what the program does with those inputs. Outputs are the level of service provided by a program, including the quantity of services, which can be measured by numbers such as percentages or ratios. Outcomes are the end products of the activities. Outputs and outcomes measure the success of performance. Outcomes are the results of the programs, such as changes in behaviours or attitudes, measured in an attempt to determine whether programs really make a difference in the lives of people (Ryan, 2000; United Way, 2002; Eckert, 2001; Perrin, 1998; Auditor General, 2001; United Way, 1996).

[Figure elements: Inputs → Activities → Outputs → Immediate Outcomes → Progressive Outcomes → Ultimate/Long-term Outcome]
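The input-to-outcome chain summarized above can be sketched as a simple data structure. This is a hypothetical illustration; the class name, fields and example values are invented for this sketch, not taken from the report:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the inputs -> activities -> outputs -> outcomes
# chain described in the text. All names and values are illustrative.
@dataclass
class ProgramMeasure:
    inputs: list        # resources consumed by the program
    activities: list    # what the program does with the inputs
    outputs: list       # units of service delivered (counts, ratios)
    outcomes: dict = field(default_factory=dict)  # immediate -> ultimate results

clinic = ProgramMeasure(
    inputs=["2 nurses", "$50,000 annual budget"],
    activities=["drop-in consultations", "vaccination sessions"],
    outputs=["1,200 patient visits", "300 vaccinations"],
)
clinic.outcomes["immediate"] = "patients receive treatment"
clinic.outcomes["ultimate"] = "improved community health status"
```

The separation of fields mirrors the model: outputs record what was delivered, while the outcomes dictionary distinguishes the immediate results from the ultimate, mission-level result.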


PROBLEMS AND STRATEGIES IN ESTABLISHING PERFORMANCE MEASURES

Performance measurement in the public sector is not easy. Whereas private-sector companies have a convenient, universal, and succinct bottom line, public sector organizations often have amorphous goals and missions coupled with unique circumstances. Three major overarching challenges obstruct those involved with performance measures: perversity, external factors and abstract goals. Because these challenges are multifaceted, they require multi-variant solutions. Each challenge is examined below, followed by strategies for addressing it.

Challenges

1. Perversity

The first challenge consists of two basic forms of perversity that threaten validity (McAdams, 2001). The first form arises when data collectors do something wrong, such as choosing plausible but meaningless measures; overwhelming the client with huge quantities of hard-to-understand information; eliminating or changing measures when the results are undesirable; managing the measure instead of the mission; or setting easy targets. The second form arises when information users do something wrong, such as incorrectly blaming or crediting outcomes to the program; making reactive decisions; ignoring the information; using unreliable information; imposing controls that hamper good management; or blaming the service provider for outcomes over which it has no control (McAdams, 2001).


2. External Factors

The second challenge is separating the program's impact from the impact of external factors (United States General Accounting Office, 1997). Changes in behaviour are difficult to attribute to any one factor or program because most outcomes are affected by external factors beyond the program's control. Evaluating whether success or failure is attributable to a program is difficult and sometimes not possible (McAdams, 2001; Nagel, 1995). Proponents of performance measurement acknowledge that it can be extraordinarily difficult to identify the success of program goals and objectives, and there may never be sufficient evidence to conclude that the program, rather than some other factor, caused the outcome. Consequently, drawing conclusions about causation usually involves a level of professional judgement (McDavid, 2002; Kravchuk, 1996).

Attempts to isolate the influence of external phenomena from the program are costly; as a result, in most instances it is simply too difficult and too expensive to establish a direct linkage between an organization's efforts and the impact of those efforts on the organization's mission. The benefits do not justify the high cost of collecting and analyzing the data (GAO, 1997; McAdams, 2001). Outcomes are often only observable over long periods of time, so it may not be feasible to determine results over a period of a year or two. For instance, the health and well-being of a population is measured over long periods, which presents two intrinsic problems. First, yearly outcomes cannot be extrapolated from measures that take a decade or more to observe; second, results based on decades of observations are nearly impossible to attribute validly to one or two programs (Enns, 2000; Nagel, 1995).


3. Abstract Goals

The third challenge relates to ambiguous mission statements. Measuring the success of non-profit service organizations is, by nature, very difficult due to abstract and sometimes lofty goals such as alleviating human suffering (Sawhill, 2001). O'Neill and Young (1988) stress that the ambiguity of performance criteria for non-profits dampens the level of mission measurement within the sector. For instance, the American Heart Association, whose mission is reducing disability and death from cardiovascular disease and stroke, collects comprehensive data about both these illnesses, but has thus far been unable to link its activities to changes in these measures or to use them to manage performance and revise its strategy (Sawhill, 2001). The more abstract the mission, the more difficult it is to develop meaningful measures of outcomes or mission impact. According to Sawhill (2001), measuring mission success is like the Holy Grail for non-profits: much sought after, but never found.

Strategies for Addressing Challenges

1. Perversity

There are a variety of methods used to limit or overcome these challenges. To limit the effects of the first form of perversity, data collection should be closely linked with the objectives and strategies of the program. Data must be accurate, specific, realistic, clear and results-oriented, and collected only for a stated and specific purpose. Reducing the number of measures and weeding out inappropriate indicators will improve results (Hertzlinger, 1994). Establishing baselines and benchmarks for internal and external use will reduce the temptation to change measures when results are undesirable. To aid understanding, measures should explicitly identify their inherent strengths and limitations. Simple, easy-to-understand measures are usually the most successful and can be visually displayed in a logic model (Fairfax, 2002; Ryan, 2000; McAdams, 2001; Perrin, 1998). Emmanuel et al (2002) recommend employing an iterative process in which relatively easy problems are addressed first, so that confidence and credibility are achieved before more difficult ones are tackled.

The second form of perversity can be reduced if stakeholders are encouraged to be involved at all levels. A more inclusive approach increases acceptability and understanding and fosters a more positive environment. Creating a common understanding of what performance measures are and how findings are interpreted will counter the natural tendency to resist change, wrongfully blame or unduly credit (Kravchuk, 1996). Consider who is likely to make use of the indicators, and then develop indicators that could assist with program development yet are general enough to minimize misuse (Perrin, 1998).

2. External Factors

External factors can confound program results. Proving that there is a causal link between the program and an observed behaviour may not always be possible. One way to reduce uncertainty is to use multiple indicators that examine a variety of aspects, including outputs and outcomes, which together can indicate the relationship between the program and changed behaviour. To do this, a number of tests are essential to check for accuracy and confirm the meaningfulness of the data provided. According to Perrin (1998), performance measures used in isolation do not provide meaningful accountability or accurate outcome information.
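The idea of reading several output and outcome indicators together, rather than any one in isolation, can be sketched as follows. The indicator names and counts are invented for illustration:

```python
# Hypothetical sketch: pairing output and outcome indicators so that no
# single measure is read in isolation. All names and counts are invented.
indicators = {
    "outputs": {
        "counselling_sessions_delivered": 480,
        "clients_served": 120,
    },
    "outcomes": {
        "clients_reporting_reduced_symptoms": 78,
        "clients_housed_after_6_months": 54,
    },
}

# A simple cross-check: outcomes are interpreted relative to outputs,
# e.g. 78 of 120 clients served (65%) reported reduced symptoms.
served = indicators["outputs"]["clients_served"]
for name, count in indicators["outcomes"].items():
    print(f"{name}: {count}/{served} = {count / served:.0%}")
```

Reporting each outcome as a proportion of the relevant output is one small way of keeping the measures connected, though it still does not establish causation.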

Professional judgement and plausible reasoning are key to determining whether results are likely related to a program. A good illustration of an organization that uses these key determinants in a unique way when measuring performance is the American Cancer Society. The Society's mission is to eliminate cancer as a major health problem by preventing cancer, saving lives and diminishing suffering from cancer. Yet it has not attempted to determine whether reductions in cancer rates are due to its efforts or to other factors, and by how much. Rather than using resources to measure its impact on cancer rates, the American Cancer Society prefers to restrict spending to services that are known to decrease both the incidence of cancer and the number of deaths from it.

Stakeholders in the American Cancer Society are apparently not interested in trying to determine the organization's overall level of success in reducing cancer, primarily because the costs of doing so would be prohibitive; they are satisfied that the mission is being fulfilled and that the rates of cancer are decreasing (Sawhill, 2001). In other words, they know that their activities likely have some impact on the success of their mission goals and that their mission goals are being achieved. Instead of measuring how much of the achieved goals are attributable to external factors, they prefer to put their limited resources into activities known to produce positive results. Evaluators of other programs may find this method useful when making links between programs and the likelihood of achieving intended outcomes (Auditor General, 2002).


3. Abstract Goals

Limiting the effects of mission confusion is possible. There is a common denominator among the organizations that report the greatest success: they all developed measures that are specific, actionable and contain measurable goals that bridge the gap between their lofty missions and their operating objectives. Rather than spending inordinate amounts of resources on efforts to measure their missions, these organizations have concentrated on identifying and then achieving goals that move them in the direction of mission success, then assessing their progress against those goals. Some organizations found success only after they focused and narrowed their mission statement (Sawhill, 2001; Fairfax County Measures Up, 2002).

MEDICAL FIELD

Outcome data help guide clinics in patient assessment; inform physicians by pointing to possible areas for improvement; update patients on the outcomes they can expect; and enable providers to be accountable to funders and patients by demonstrating the value of their services (Woodward, 2001; Hoelzer, 2001). To assist performance measurement initiatives, the U.S. Department of Health and Human Services (1998) embarked on a process to establish performance measures for all of its health and human service programs. The purposes were to clarify goals, document the contribution toward achieving those goals, and document the benefits received from the investment in each program (United Way, 2002; Hendrick, 2002; U.S. Department of Health and Human Services, 1998). Major lessons learned by those who have led the way in health performance measurement can be classified into four categories: stakeholder consultation, the time needed to measure outcomes, medical procedures, and establishing health outcome indicators. Each category is discussed in some detail.

Stakeholder Consultation

The first category involves stakeholder consultation. The previous section explained that meaningful consultation with stakeholders is crucial because effective performance measurement efforts are based on a partnership with stakeholders; this section details its importance specific to the health sector. Key stakeholders are funding agencies, the community and patients. Funding agencies need to be consulted to ensure that the flow of money and services satisfies both the funder and the program deliverer. The community also has a role inasmuch as external factors unrelated to health care services influence outcomes. Active community participation and engagement in addressing certain health outcomes is essential, because there is a strong interrelationship between successful community health initiatives and successful health outcomes. Emmanuel et al (2002) demonstrated that when the community is involved in the performance measurement process, there is evidence that outcomes improve. For example, Eckert (2001) describes a successful centre for substance abuse treatment that provides treatment programs for adolescents. The centre gauges its performance by asking clients, social workers, children and their families for feedback on what was most helpful about the services, what was least helpful, and how services could be improved. With the community as partners in assessing programs, the centre is able to continually improve services through feedback.


The patient is probably the most significant stakeholder in determining health outcomes and is in the best position to assess quality of health and life (Carins, 1996; Shelbourne et al, 1997). According to Schechter (1997), patients' self-rated health status can provide a wide range of meaningful data if the questions are clear. Providers should involve patients as real partners in the care and recovery process and include patients' preferences in their outcomes (Woodward, 2001). One of the best ways to know how patients are is to ask them (Enns, 2000). Ohio's Mental Health Board found that even patients with severe behavioural health problems could use their survey, and using the survey results led to superior outcome measures (Woodward, 2001). In addition to providing better measures, it appeared that when providers focused on consumers' own reported quality of life and health needs, patients recognized problems and became engaged in their own treatment and recovery (Woodward, 2001). The World Health Organization's World Health Report 2000 (Shelbourne et al, 1997) found that physical and mental health were equally important to patients, and that there is some discordance between what physicians and patients see as important. Shelbourne et al (1997) stress that in the health industry, it is necessary to consider patients' preferences for their own health-related outcomes. For decades, they argue, health providers have been criticized for making decisions that rely principally on the most advanced treatments while neglecting patients' input with regard to their own health outcomes. Patients frequently prefer to maintain a certain tolerable quality of life rather than risk further suffering through the newest treatments. Opinions of patients are therefore crucial in determining outcomes. Consequently, a medical outcome would not necessarily include the level of cure; more importantly, the outcome would depend on whether the patient received the correct treatment at the level he or she desires, with the resources available. In other words, patients' preferences are paramount when measuring health outcomes (Shelbourne et al, 1997; Woodward, 2001).

Basing outcomes on patients' preferences arises from a number of studies that found that tolerance for symptoms varied widely among patients with similar symptoms. Researchers argue that health outcome guidelines that do not consider patients' preferences are undesirable. Different patients may be in essentially the same physical condition but have vastly different quality of life, and changes in that quality may be the result of factors other than physical ability. More than 800,000 non-institutionalized people with disabilities that limit their performance in activities of daily living considered themselves to be in good to excellent health. This suggests that they felt healthier than their physical limitations might suggest, which is more representative of their overall health-related quality of life than measures based on activity limitation alone (Schechter, 1997; Carins, 1996; Shelbourne et al, 1997).

Knowing patients' perceptions is an important part of measuring health outcomes, especially when dealing with individuals who are amongst the poorest in society and view themselves as such. There is a strong inverse association between socioeconomic status and measures of health functioning in the general population. Socioeconomic status clearly influences the risk of both developing and dying from certain diseases, an important factor when judging the performance outcomes of those who provide health services to economically disadvantaged patients (Hemmingway, 1997). Those who view themselves as socially disadvantaged are more likely to have health problems and to view themselves as unhealthier than their wealthier counterparts. Similarly, individuals with HIV/AIDS generally experience decreased levels of health-related quality of life (Lazzarini, 2002). This is significant for clinics with a client base that is largely composed of individuals who are economically disadvantaged and/or HIV positive.

Time Factor and Sample Size

The second category relates to the amount of time needed to measure health outcomes and the number of individuals needed for a valid study. Health outcomes usually depend on huge sample sizes observed over long periods of time. Earlier in this literature review, it was demonstrated that outcomes can often only be observed over long periods; such periods are more critical in the health industry than in almost any other sector. Although health and well-being are measured, most of the measures are determined by events that took place decades before. The impact of a program on health over a one- or two-year period is difficult to detect (Enns, 2000; Emmanuel et al, 2002).

Leaders in the evaluation field teach that requiring outcome targets before there has been at least one year of baseline outcome data is counterproductive. Programs with no experience in outcome measures generally have no basis for setting appropriate targets, so they need time to establish them (Plantz, 2002). A trial run of outcome measures is essential to establishing performance measures (ibid). Few measures will be perfect the first time, and even those that are will change as the organization evolves in response to a changing environment. This means that measures need time to develop and to adapt to circumstances (Perrin, 1998).


Established baselines and benchmarks can assist in determining the impacts of health outcomes; however, if the sample size is too small and the time frame too short, outcomes can be misleading. For instance, it is not unusual to employ a test group of 300,000 to 800,000 in a single health-related case study because of the number of external factors affecting health indicators. The research for the Salk vaccine used 400,000 children, and found that the control group had 150 cases of polio compared to 50 cases in the test group. Large samples are necessary for accurate medical outcomes (Milewski, 1996). Established benchmarks are often used to offset some of the problems with small sample sizes, but the results may be misleading (Emmanuel et al, 2002).
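The arithmetic behind the Salk example can be made concrete. The even split of the 400,000 children between the two arms is an assumption made for this sketch, not a figure from the source:

```python
# Illustrative arithmetic for the Salk trial figures cited above,
# ASSUMING (hypothetically) an even split of 400,000 children per arm.
children_per_arm = 400_000 // 2

control_cases, test_cases = 150, 50
control_rate = control_cases * 100_000 / children_per_arm  # cases per 100,000
test_rate = test_cases * 100_000 / children_per_arm
print(control_rate, test_rate)  # 75.0 25.0

# With only 4,000 children per arm, the same underlying control rate
# would yield about 3 expected cases: far too few to distinguish arms.
expected_small_control = 4_000 * control_cases / children_per_arm
print(expected_small_control)  # 3.0
```

The last two lines show why small samples mislead: at realistic disease rates, a small trial produces so few cases that chance alone could reverse the comparison.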

Value of Medical Procedures: Outputs versus Outcomes

The third category involves placing a high value on the processes involved in medical procedures. In the health industry, measuring process through outputs can be an indicator of quality of care and is, in many ways, superior to measuring outcomes: outputs can be readily measured and interpreted, are sensitive to deficiencies in care, provide indicators for action, and are sometimes the only aspect that can be measured (Crombie, 2001). Evaluation criteria relating to process and structure are used as proxies for actual measures of quality of care and health outcomes. The basic premise is that, if specific structural and procedural criteria are met, then it can be assumed that good quality of care was achieved. This concept is currently used in the discipline of evaluation, especially where actual outcome measures are elusive or methodologically difficult to capture (Canada: A Discussion Paper, 2002). If used correctly, medical procedures can directly assess the quality of care being delivered. Compared with questionnaires, which are by nature subjective and often cumbersome to administer, procedures offer more valid and reliable information that is easier to collect routinely. In addition, procedures are easier to interpret in most instances.

Studying outputs can, in some instances, be more effective at detecting poor care than studying outcomes, particularly where outcomes can only be determined after a considerable length of time or where there are problems with validity. Data on process measures are easier to interpret than data on outcome measures and avoid the need for subjective measures, such as patient assessments of symptoms and quality of life (Crombie, 2001). Evaluative criteria relating to process and structure can be used as proxies for actual measures, under the premise that if the specific structural and procedural criteria are met, then it can be assumed that proper care will be achieved (Sawhill, 2001; Legowski, 1999). One example where outputs are more effective than outcomes involves inoculations. Vaccination clinics count the number of vaccines administered and assume that the results follow, based on well-documented medical research. It would not be feasible for each vaccination provider to follow each patient to ascertain whether the treatment was effective; rather, providers depend on the results of large medical research institutes.

Crombie (2001) suggests that the advantage of process measures is that they directly assess the quality of care being delivered, thereby indicating what action is needed. The argument is that researchers provide evidence for the best ways to ensure desired outcomes. The practitioner has confidence that, by providing the indicated treatments for the symptom, there is a level of assurance that the outcomes will be achieved. This alleviates the need for every practitioner to measure every outcome for every client.

When measuring outcomes, there is an assumption that everyone receiving treatment starts at the same level. Medical process measures can compensate for the fact that differential outcomes are the norm rather than the exception (Lambert, 2001). Outputs can often point directly to actions needed to improve care. Consequently, combining outcome measures with outputs provides a superior indication of how well health objectives are being reached. Given the weaknesses intrinsic to health outcome data, performance measurement cannot look at outcomes alone but must also consider process and intermediate outcome measures (Perrin, 1998).

Measuring medical procedures is preferred in instances where outcomes would otherwise be ignored. The evaluative literature repeatedly points to situations where clients or programs most likely to have positive outcomes are deliberately selected, while the most difficult clients and programs are ignored. This deliberate selection is known as creaming, whereby more difficult cases are not only ignored but often dropped in favour of clients or programs that are more likely to result in successful outcome measures (Legowski, 1999). Measuring medical procedures would eliminate the need for creaming and permit all procedures to be included in measures without fear of poor end results.


Established Health Outcome Indicators

The fourth category provides information on some of the instruments that can be purchased by those wanting to measure performance in the health sector. Evaluators may save resources if they use surveys and questionnaires that have been developed and tested elsewhere, making modifications where necessary (Hemingway, 1997). There are a number of sources an evaluator can use that are particular to health care, depending on the measure sought; five are listed:

The 36-item SF-36 Medical Outcomes Study questionnaire was designed as a generic indicator of health status and health-related quality of life for use in population surveys and evaluative studies of health policy, and as an outcome measure for clinical practice and research (Jette, 2002; Hemingway, 1997).

The Medical Outcomes Short Form, SF-12, is a widely used self-report measure (Shelbourne, 1997).

Ohio's Mental Health Board survey, entitled Ohio Mental Health Outcomes Survey, Adult Consumers Form, is for patients with severe behavioural health problems (Woodward, 2001).

The Health Utilities Index is a system for measuring health status and health-related quality of life which is available in three versions, HUI Mark 1, 2, and 3 (Health Utilities Index, 2002).

The Canadian Council on Health Services Accreditation is a non-profit organization that provides the opportunity for health care providers to participate in accreditation programs based on national standards. The council's accreditation program, called Achieving Improved Measurement, is based on the formulation of a set of common performance indicators that can be used to compare data and promote benchmarking among accredited organizations. The council provides a guide to the development and use of performance measure indicators (Council on Health Services Accreditation Ottawa, 1998).

Regardless of what health indicators are chosen, the key to success is ensuring that the indicators are relevant to the program, easily understood by those who need to use them, and feasible given the constraints of the organization (Hoelzer, 2001).

This section presented lessons learned in four categories: involve stakeholders, particularly the patient; recognize that long periods of time and large sample sizes are normally needed to determine health outcomes; consider the advantages of measuring process as well as outcomes; and adapt questionnaires developed by experts to suit the unique circumstances of the program. Now that advice from those who have led the way in developing health performance measures has been presented, the next section presents tools that may be helpful for organizations attempting to set up performance measurements.

GUIDELINES FOR ESTABLISHING PERFORMANCE MEASUREMENTS

This section incorporates practical guidelines presented by experts in the field and concludes with a step-by-step process for establishing performance measurement.


What constitutes a good performance measurement system? As discussed earlier, performance measurement specialists suggest a number of principles that form the basis for preparing meaningful performance measures: stakeholder consultation; clear mission statements; focused, results-oriented data collection; simple, easy-to-understand measures that include medical procedures; and an iterative process that permits sufficient time. Together, these principles lead to useful performance measures.

Experts have incorporated these principles into a number of step-by-step guidelines that are available in the literature and can be used by organizations developing performance measures (British Columbia, 2002; Fairfax, 2002; McAdams, 2001). A modified version of the step-by-step process developed by Fairfax (2002), the Auditor General (2001) and Kravchuk (1996) has been chosen as a good starting point for developing measures because it has passed the rigours of time and years of iterative development, and has been used successfully in several applications. The process consists of five steps: formulate clear mission and goals, define service area objectives, develop explicit measurement strategies, involve stakeholders, and identify indicators that measure progress on objectives.

Clear Mission and Goals

Step one reviews and evaluates existing mission and goals to formulate a clear, coherent mission, strategy and objectives. Be clear about what is to be achieved. For the most part, measures will reflect the organization's mission and represent well established goals. Re-examine and clearly define the mission statement using clear, unambiguous language.


Bryson (1995) presents six questions for setting the stage for developing a clear mission by developing a clear vision of success. "1. Who are we? The question is one of identity, defined as what organizational members believe is distinct, central and enduring about their organization" (ibid, p. 76). The answer will distinguish between what the organization is and what it does. "2. In general, what are the basic social or political needs we exist to meet, or what are the basic social or political problems we exist to address? The answer…provides the basic social justification for…existence. The organization can then be seen as a means to an end, and not an end in itself" (ibid). "3. In general, what do we do to recognize, anticipate and respond to these needs or problems? This question prompts the organization to actively stay in touch with the needs it is supposed to fill or the problems it is supposed to address" (ibid). "4. How should we respond to our key stakeholders?" (ibid, p. 77). Answers focus on the relationship the organization wishes to establish with key stakeholders. "5. What are our philosophy, values, and culture? …Only values that are consonant with the philosophy, core values and culture are likely to succeed" (ibid). "6. What makes us distinctive or unique?" (ibid). The answer may prove invaluable for non-profits that need to prove their uniqueness while endeavouring to find funding in an increasingly competitive environment.

Once the mission has been clearly defined and articulated to the satisfaction of all stakeholders, goals and measures must be tailored to coincide with the mission. One study suggests that the best goals are those that "(1) set the bar high, but not too high, (2) help focus the organization on high level strategies, (3) mobilize staff and stakeholders, and (4) serve multiple purposes, such as setting the larger public agenda about a certain issue" (Sawhill, 2001, p. 383). Fairfax (2002) provides a useful template for writing or validating a goal statement.

Goal Statement

To provide/produce (service or product) to (customer)

in order to (statement of accomplishment).

A key outcome indicator should be identified that enables measurement of the extent to which a goal has been achieved.

An illustration of how an organization fits into the above template is as follows:

The Adult Day Care Centre

To provide a comprehensive day program designed to assist adults with disabilities to remain in the community,

in order to obtain a maximum level of health and to prevent or delay further disability

(Fairfax 2001).

Define Objectives

Step two defines service area objectives in terms of outcome-based statements that specify what will be accomplished within the fiscal year. Each service area will have at least one objective statement, with at least one indicator for outputs and one for outcomes. The objectives should clearly demonstrate progress toward the organization's goal. In general, objectives should: reflect benefits to clients; be written to allow measurement of progress; be quantifiable within a time frame; describe a quantifiable future target level; and identify the concrete measurement targets that will, over time, lead to the intended results, which are the performance targets (Fairfax, 2001).

The following template can be used for writing an objective statement.

Objective Statement

To improve/reduce (accomplishment)

by (a number or percent) from X to Y, [toward a target of (a number)]

to be used as appropriate.

One example of service area objectives in the Child Health Services Department is: To reduce the overall incidence of low birth weight for clients

by 0.2%, from 5.4% to 5.2%, for at-risk mothers,

toward a target of 5%

which is the minimal level to avoid health risks in newborns (Fairfax, 2002).
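The objective-statement template above lends itself to a simple progress calculation. The function below is a hedged sketch, not part of the Fairfax process; it uses the low-birth-weight figures from the example (baseline 5.4%, objective 5.2%, target 5.0%):

```python
# Hypothetical helper: fraction of the baseline-to-target gap closed so far.
# A result of 0.0 means no progress; 1.0 means the target has been reached.
def progress_toward_target(baseline: float, current: float, target: float) -> float:
    return (baseline - current) / (baseline - target)

# Meeting the fiscal-year objective (5.4% -> 5.2%) closes half the gap
# toward the 5.0% target.
print(round(progress_toward_target(5.4, 5.2, 5.0), 2))  # 0.5
```

Framing an objective this way makes the relationship between the annual accomplishment and the longer-term target explicit.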

Develop Measures

Step three develops an explicit measurement strategy, which includes establishing baselines and benchmarks for inputs, outputs and outcomes, as well as determining the process for collecting and storing data. To evaluate success, first determine a measurable baseline, implement the program, then re-measure. The difference between the before and after measures is considered the output or outcome of the program; in other words, health outcomes are the change in health status attributable to an intervention (Legowski, 1999).
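The measure-implement-remeasure step can be sketched in a few lines. The indicator and its values are hypothetical, and as the surrounding discussion of external factors warns, the difference is only cautiously attributable to the intervention:

```python
# Minimal sketch of the baseline / re-measure step described above.
# The indicator and values are invented for illustration.
def outcome_change(baseline: float, remeasured: float) -> float:
    """Before-after difference in the indicator (attribution needs care)."""
    return remeasured - baseline

before = 0.42   # baseline: share of clients reporting good self-rated health
after = 0.51    # the same survey, re-administered after one program year
print(f"{outcome_change(before, after):+.2f}")  # +0.09
```

The baseline must be captured before implementation; without it, there is no "before" against which the "after" can be compared.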

Benchmarks indicate the level of success of the outputs and outcomes compared to established standards. Benchmarks can be developed in a number of ways: for example, the program provider may set standards of acceptance, the industry may have already established levels, or the results of similar programs may be used as comparables.

Knowing what data to collect and the best way to collect it is a process that takes time to develop. According to Emmanuel (2002), an iterative process in which relatively easy data relating to the most obvious outputs and outcomes are collected first builds confidence and credibility through early success. Once a foundation of successful data collection and output and outcome measures is established, the more difficult collection techniques and harder-to-measure outputs and outcomes can be undertaken.

Plantz (2002) suggests that special strategies are needed for outcomes that are hard to measure, including those involved in public education, emergency services and prevention. She suggests creative approaches such as using existing records, third-party reports, secondary data and approximations of outcomes.


Step three also involves using a logic model to lay out the design of a program in picture format. The major elements of measurement can be depicted in a simple, visually comprehensive manner that helps clarify the process without getting caught up in detail (McAdams, 2001). McAdams (ibid) further explains that the logic model is useful in showing the sequence of program measures, which he describes as if/then statements. To illustrate, McAdams cites an example: if we provide counselling, then men will learn that violence is unacceptable and change their attitude. If men change their attitude, then they will discontinue their violent behaviour. If men no longer engage in violent behaviour, then their partners will be safer. By sorting the intended outcomes of the program into a series of if/then statements, the primary outcomes are more easily identified and assessed (McAdams, 2001; McDavid, 2002). Figure II is a sample of a logic model that illustrates the use of if/then statements by laying out the design of a program.
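The if/then chaining McAdams describes can be sketched as an ordered list of stages, where each consecutive pair forms one if/then statement. This is an illustrative structure built from the counselling example above, not part of any cited guideline.

```python
# A logic-model stream as an ordered chain of stages (label, description).
logic_model = [
    ("Activity", "we provide counselling"),
    ("Output", "men learn that violence is unacceptable and change their attitude"),
    ("Outcome (short term)", "men discontinue their violent behaviour"),
    ("Outcome (long term)", "their partners are safer"),
]

def as_if_then(chain):
    """Render consecutive stages as the if/then statements McAdams describes."""
    return [f"If {a[1]}, then {b[1]}." for a, b in zip(chain, chain[1:])]

for statement in as_if_then(logic_model):
    print(statement)
```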


FIGURE II

LOGIC MODEL ILLUSTRATING IF/THEN STATEMENTS IN THE HEALTH SECTOR

The Problem: Too many street-involved people use hospital emergency wards and hospital beds instead of the doctor's office.

Intended result: People who have other options do not use hospital emergency wards and patient rooms.

Activity: CHC provides treatment at locations convenient to patients
Output: Number of patients treated
Outcome (short term): Patients use CHC outreach clinics
Outcome (long term): Patients use the emergency wards less

Activity: Addiction counsellors set up treatment plans for addicts
Output: Number of patients receiving treatment
Outcome (long term): Patients overdose less often, so have fewer visits to the emergency ward

(Adapted from McAdams, 2001; McDavid, 2002)

Stakeholder Consultation

Step 4 involves stakeholders, particularly clients, in the design and development. The best way to know the effects of a program on patients is to ask (Sherbourne, 1997; Enns, 2000). There are a number of survey and questionnaire instruments available through the internet that can be used to assess medical outcomes of patients (Hopman, 2000). Notwithstanding the possibility of heuristic and other biases inherent to all question-type data collection, asking direct questions has been shown to provide data crucial to sound performance measures (Schecther, 1997).

Woodward (2001) explains that patients, providers and funders are the primary stakeholders in healthcare, and given their individual roles, points of view and experiences, they must be involved for performance measures to be effective. Task forces, committees and other groups involved in performance measures need to involve these stakeholders in addition to other members of the community for best results. Asking clients and stakeholders what was most helpful and least helpful about services provided and what could be done to improve the services is vital to continually improving services (Eckert, 2001).

Measures and Objectives

Step 5 identifies indicators that measure progress toward objectives. To do this, multiple sets of measures for multiple users can be developed. Indicators are the first-level data for reporting performance; ideally, at least one output indicator and one outcome indicator should be developed for each service objective. Indicators should measure whether objectives are being met. Gathering evidence from performance measures entails developing procedures that can be used to collect information. Progress should be compared against the targets and against the intended results, so that actual rather than intended performance is reported. Existing benchmarks and comparative information should be used where available (Auditor General, 2001; McAdams, 2001). For example, when measuring efficiency in a health clinic, inputs such as dollars spent on the program's activities need


to be assessed in terms of outputs that include the number of patients treated, the number of persons screened and the average worker-hours per client (Rutgers, 2002). Outcomes such as mortality and morbidity rates also need to be assessed. Efficiency considers such things as the cost of medical supplies per unit and program costs per number of patients.
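The efficiency measures mentioned above (cost per patient, worker-hours per client) reduce to simple input/output ratios. The figures below are invented for illustration, not drawn from any clinic's data.

```python
def cost_per_patient(program_cost, patients_treated):
    """Efficiency ratio: program dollars spent per patient treated."""
    if patients_treated == 0:
        raise ValueError("no patients treated")
    return program_cost / patients_treated

def worker_hours_per_client(total_hours, clients):
    """Average worker-hours per client, another efficiency ratio."""
    if clients == 0:
        raise ValueError("no clients served")
    return total_hours / clients

# Invented example figures
print(cost_per_patient(120_000.0, 400))       # dollars per patient
print(worker_hours_per_client(1_800.0, 400))  # hours per client
```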

Portions of the above guidelines have been used to successfully establish performance measures for several types of applications. They are recommended as building blocks for organizations attempting to establish performance measures (British Columbia, 2002; Fairfax, 2002; McAdams, 2001).
