
Final version: August 11th, 2014

Supervisor: Dr. Frank Nack

Signature:

Improving the quality of medical reports with gamification

Lily Martinez Ugaz

Student number: 6114555

Thesis Master Information Science: Human Centered Multimedia

University of Amsterdam, Faculty of Science


Improving the quality of medical reports with gamification

Lily Martinez Ugaz

Dept. Information Studies, University of Amsterdam, Amsterdam, The Netherlands

Lily.martinezugaz@gmail.com

ABSTRACT

Medical reporting is one of the basic professional responsibilities of a health professional, yet its importance is often not reflected in practice. Poor medical documentation can lead to mismanagement of patients. To address this, we propose using gamification to improve the quality of medical reports. By using gamification we aim to engage and motivate health professionals to improve the quality of their medical reports. We designed game elements and implemented them in an existing electronic health record. Participants could rate each other's medical reports and, by doing so, also gave feedback to their colleagues. Overall, the participants mentioned that they would like to keep using these elements but would like to see more involvement of the whole department. At this point, no quality improvement has been observed by the participants. We recommend repetition of a similar experiment with a whole department and further investigation into improving the quality of medical reporting using gamification.

Categories and Subject Descriptors

H.5.m [Information Interfaces and Presentation (e.g., HCI)]: Miscellaneous; J.3. [Life and Medical Sciences]: Medical information systems; J.4 [Social and Behavioural Sciences]: Psychology, Sociology

General Terms

Design, Experimentation, Performance, Human Factors

Keywords

Gamification, medical reporting, user engagement, game-based motivation mechanism

INTRODUCTION

Medical documentation is a fundamental part of ensuring good medical practice (Pullen, 2006). Medical documentation is anything written about a patient that describes the patient's status or the care or services given to the patient. This documentation can be either handwritten or electronically generated (Documentation Guidelines for Registered Nurses, 2012; Mishra et al., 2009). Medical documentation is required for the provision of patient care. All medical documentation is kept in an electronic health record (EHR) or a paper-based health record (Harman, Flite, & Bond, 2012). Both have in fact the same basic idea: "to support healthcare and to maintain, respectively improve its quality" (Hoerbst & Ammenwerth, 2010, p. 1).

An EHR is a systematized way of storing healthcare data, information and other relevant documents related to a patient's health, so that the required data can be retrieved when needed (Mishra et al., 2009). In order to keep track of cross-institutional patients, each patient has his own EHR. A patient's EHR consists of medical documentation for healthcare, research and administrative purposes (Harman et al., 2012). The medical documentation for the patient's health consists of multiple medical reports, usually written by different health professionals (Harman et al., 2012; Hoerbst & Ammenwerth, 2010). The purpose of a medical report is to transmit healthcare data, to identify problems and to create an overview of the patient's visit. Thus, medical reporting is the systematic capturing and transmitting of healthcare data obtained from observations or findings (Verpleegkundigen & Verzorgenden Nederland, 2011). Medical reports assist in the management of a patient (Thomas, 2009), but have many purposes other than the provision of care. They can assist the author in structuring their thoughts, aid in identifying problems, serve as a communication tool with other health professionals and provide legal protection against possible allegations of medical malpractice or negligence (Documentation Guidelines for Registered Nurses, 2012). In a situation where medical malpractice is alleged, the medical documentation is the main evidence in the legal system (Thomas, 2009).

Good medical documentation allows a patient to be treated and supervised better, avoiding repeatedly asking the patient the same questions or performing an action that has already been done. A medical report also gives a good picture of the development of a patient over a longer period. In other words, good reporting is important for the continuity, coordination and quality of care (Case Di Leonardi, 2009; Douglas-Moore, Lewis, & Patrick, 2014; Harman et al., 2012; Stewart, 2007).

Good medical documentation can be achieved by properly reporting on the patient under the health professional's care (Thomas, 2009). Medical reporting is one of the most basic professional responsibilities, yet its importance is often not reflected in practice. Maintaining good standards of reporting remains problematic in spite of years of training (Dehghan, Dehghan, Sheikhrabori, Sadeghi, & Jalalian, 2013). The reasons for poor documentation mentioned by health professionals are lack of time and a reliance on verbal communication (Stewart, 2007).

Poor quality medical documentation may be common, because medical reports are generally given a low priority (Abdelrahman & Abdelmageed, 2014; Pullen, 2006). However, poor medical reporting can lead to mismanagement of treatments. Unavailable or incomplete patient information and illegible handwriting can lead to diagnosing and ordering errors (Edwards & Moczygemba, 2004). The consequences of poor medical documentation lead to negative outcomes for the patient. For example, a nurse gives medication to a patient, her shift ends and she leaves without reporting that the patient received his medication. Another nurse reads the patient's health record, observes that the patient has apparently not been given his daily medication and gives him his medication. The patient has then received double the dosage he was prescribed and could have a seizure or even die from an overdose (Stewart, 2007).

In order to maintain good medical reports, health professionals need to be audited from time to time (Ivers et al., 2012). Audit and information feedback are proven to be effective in improving professional practice (Ivers et al., 2012; van der Veer, de Keizer, Ravelli, Tenkink, & Jager, 2010). Information feedback is a way to give health professionals insight into their performance and to motivate change. This could be presented in the form of a yearly paper report or a website that health professionals can access to compare their own results with those of their colleagues or with a national average (van der Veer et al., 2010). The combination of audit and feedback is when a health professional's performance is measured, then compared to professional standards (Ivers et al., 2012), and feedback is given in order to improve their performance towards the professional standard (van der Veer et al., 2010).

Feedback may be more effective when baseline performance is low, when it is given more than once, when it is delivered both verbally and in writing, when the auditor is a supervisor or colleague, and when it includes explicit targets and an action plan (Hurst, 2013; Ivers et al., 2012). However, there is also opposing evidence regarding peer comparison in feedback (Ivers et al., 2012). There are several ways of providing information feedback, and they should be adapted to the local circumstances of each health facility. It remains unclear which information feedback strategy works best (van der Veer et al., 2010).

Since information feedback can motivate change in health professionals (van der Veer et al., 2010), it is worth researching a way of giving feedback to health professionals that is engaging and motivating. An approach through which motivation and engagement can be accomplished is gamification (Deterding, Dixon, Khaled, & Nacke, 2011; Hamari, Koivisto, & Sarsa, 2014). Gamification is an example of engaging users by letting them participate for a purpose other than entertainment. Gamification is defined as "the use of game design elements in non-game contexts" (Deterding et al., 2011, p. 2). This could mean using extrinsic rewards, such as virtual points, badges and achievements, in order to stimulate positive patterns. By adding game design elements to a service, 'positive, intrinsically motivating, gameful experiences' (Hamari et al., 2014, p. 3025) can be achieved, thereby engaging the user to do a task (Hamari et al., 2014). The ideal outcome of gamification is when extrinsic rewards turn into intrinsic motivation. The underlying objectives to be accomplished with gamification should fit into someone's existing habits, and they should be rewarded for that behavior (Heyman, 2014).

Motivation can be caused by pairs of opposites, for example social acceptance and rejection (Muntean, 2011). Social and professional norms are considered important predictors of behaviour change (Ivers et al., 2012). If someone has no motivation to solve a problem, he will not do it even though he is able to. For example, if a student's social reputation is at stake, he will be either positively or negatively motivated to solve the problem in order to prevent being socially rejected (Muntean, 2011).

Game elements like badges, challenges, leaderboards, points and quests have been shown to engage and motivate users towards desired behaviour (Hamari et al., 2014). Gamification also assists in speeding up positive feedback loops between co-workers and their manager (Senapati, 2013). Hamari et al. (2014) did a literature review of which elements were tested as motivational affordances. They found a large variety of elements tested in empirical studies, but badges, points and leaderboards were the most common.

For our study we will use points (stars), leaderboards and feedback to examine whether these elements could motivate health professionals to improve the quality of medical reporting. The investigation of the implementation of these game elements in an EHR is done in collaboration with PinkRoccade Healthcare.

PinkRoccade Healthcare is a Dutch company that provides an EHR. Many customers use their EHR, and one health center located in Tilburg (The Netherlands) was willing to participate in this study.

Research question

We believe that gamification could help improve the quality of medical reports, and we investigate this in this paper. To conduct this study, the following research question will be answered:

To what extent is there improvement in the quality of medical reporting with gamification?

To answer the research question, we will focus on the use and writing of medical reports, the quality of medical reports, and the giving and receiving of feedback among health professionals about their medical reports.

Structure of this paper

The 'related work' section presents examples of game design elements in non-game applications, approaches to improving medical documentation and an example of gamification used for incident reporting. The 'research design' section explains the game design elements and how the experiment is conducted. 'Experiment results and analysis' gives the outcome of the experiment. The paper finishes with the 'discussion' and 'conclusion' sections.


RELATED WORK

This section includes works on gamification, where the mobile apps Foursquare and NikeFuel are used to exemplify the use of game design elements in apps whose primary goal is not entertainment. We also discuss an approach to improving medical documentation and give an example of gamification in an incident reporting app named Ubiloop.

Gamification

Foursquare is an example where game design elements are used in a non-game context to motivate users and increase user activity (Deterding et al., 2011). Foursquare is a free mobile app that allows users to check in at places they visit, tell their friends where they are and track where their friends have been. The app can be described as "a friend-finder, a social city guide and a game that challenges users to experience new things, and rewards them for doing so" (Lindqvist, Cranshaw, Wiese, Hong, & Zimmerman, 2011). Users can receive virtual and tangible rewards for check-ins. The virtual rewards are badges, points and mayorships, which are visible in the user's public profile. The rewards are not received at every check-in, but under certain circumstances. A mayorship is assigned to the one user who has the most check-ins at a given place over the past 60 days. Users with mayorships are sometimes awarded discounts for that place. Lindqvist et al. (2011) investigated how and why people use Foursquare and discuss the privacy concerns people have and how they cope with them. Examples of motivation the participants mention are earning badges, points and mayorships (virtual rewards). The discounts (tangible rewards) were equally important to the participants as the points and badges. Others mentioned fun or "something to do" when they are bored. There are location-specific badges, which stimulated participants to go to places they had never been to. The participants confirmed that these types of badges motivated them to go to new places in order to earn badges (Lindqvist et al., 2011).

Nike uses game design elements around endurance sports to influence users' motivation, productivity and behavior. This is done with NikeFuel, a mobile app with which people can measure their movements for different physical activities, varying from workouts to going out dancing in clubs (Blohm & Leimeister, 2013). NikeFuel users gain points through any kind of movement, which is tracked with their smartphone or Nike+ products. The results of each user are visualized in the Nike+ platform and converted into NikeFuel. Personal results and achievements are visible to the Nike+ community, where users can compare their performance with other users. The main goal is to compete with friends or other similar Nike+ users to get more points. The app aims to motivate users by displaying a NikeFuel bar and letting them know that they have to move more to unlock certain awards, trophies and surprises. This way, extrinsic rewards could come to serve as intrinsic rewards for the user's behavior of moving. Users can see their progress and gain more motivation to stay fit or become more fit (Blohm & Leimeister, 2013).

Improving medical documentation

Other than audit and feedback approaches, there are other methods used to improve the quality of medical documentation, such as clinical governance.

Clinical governance is a system to maintain and improve the standard of clinical practice and to improve patient care within a health system. The clinical governance program consists of the following elements: education, clinical audit, clinical effectiveness, risk management, research and development, and openness. Its goal is to continuously improve the quality of care (Starey, 2001). Dehghan et al. (2013) did a study on whether clinical governance improves the quality of nursing documentation. They provided training sessions to address nursing documentation, to explain the clinical governance program and to teach healthcare providers what can be achieved with the clinical governance elements. The training sessions were voluntary, and more than 85% of 400 nurses participated in at least one of the training sessions. Dehghan et al. (2013) assessed the documentation pre-implementation and post-implementation with a checklist covering structure and content to determine the effect of the changes on quality improvement. They found no significant difference in structure or content after a 2-year clinical governance program. The quality scores did not improve and more attempts are needed. The authors recommend further research on assessing how clinical governance can improve quality (Dehghan et al., 2013).

Reporting with gamification

Bach, Winckler, Gatellier and Bernhaupt (2012) use gamification in a mobile application named Ubiloop for incident reporting. Incident reporting is where specialized users provide detailed information about problems, which is used for crisis management. They investigate the use of mobile technology for citizens to report urban incidents in their neighborhood. Bach et al. (2012) mention that users find incident reporting 'a dull and boring activity' (p. 25) and therefore added playful aspects to their application. They aim for a playful User Experience (UX) design that rewards a real-world task in order to encourage citizens to report incidents. The tasks that are rewarded with points are the number and frequency of reports; the frequency of reports is rewarded to motivate users to keep providing feedback. The level of detail is also rewarded, for example when adding a photo or providing the address of the incident. The authors believe that turning a real-world task into a social game would make users gain social awareness and encourage them to compete with each other to check or see the incidents for themselves. These elements could also serve to show the consequences of incidents in real life, to teach users the laws and regulations associated with specific incidents, and to show what the penalties are for causing such incidents. The effectiveness of gamification in Ubiloop has not been proven at this point. Gamification was not the primary goal, but the authors believe adding game elements could stimulate citizens to report incidents.

However, they are aware of the challenges of gamification in Ubiloop. For example, users get rewarded for reporting incidents, so more incidents might be reported in order to receive rewards. This could be perceived negatively, as it makes the city look like a city with many 'problems' (Bach, Winckler, Gatellier, & Bernhaupt, 2012).

RESEARCH DESIGN

First, we will investigate how medical reports are written and used, whether health professionals receive or give feedback, and what the quality of medical reports is in general. This will be done with the general questionnaire, which will be sent to different health professionals in the Netherlands. Before we can investigate this, we need to assess what makes a 'good' medical report. This will be done through a literature study.

The second step is to determine where the game elements can be used in the EHR. Based on the literature review of Hamari et al. (2014), we choose two common game elements whose motivational affordances have been tested: points and leaderboards. We will have a star rating system from 1 to 5 (points), where users can give each other stars based on their perception of how well a medical report is written. We also use different types of leaderboards, to motivate users to be the best and to show who is rated as the best. We choose feedback as a third element because of the research of Ivers et al. (2012) and Van der Veer et al. (2010), which stated the importance of feedback but also that the way of providing feedback should be adapted to local circumstances. Health professionals will be able to provide feedback to their colleagues by rating medical reports with stars and five additional options based on our literature research on requirements for a good medical report. Each user will receive personalized feedback, which will only be presented to that user in the EHR. These game elements will be incorporated in the EHR. The EHR with the game elements will be tested by a group of health professionals in one health center. Before testing, the participants are required to fill out questionnaire 1 to get insight into the current state of how they write and use medical reports. This questionnaire is an extended version of the general questionnaire, with more specific questions on the topics of use and writing of medical reports, the quality of reports and the giving or receiving of feedback. After the testing period, they are required to fill out questionnaire 2 to measure whether the new features are of added value and to compare with the data of the first questionnaire. All three questionnaires have Likert-type, multiple choice and open questions.

In the following subsections, each step of the research is described.

The existing electronic client record

First, we had to investigate the possibilities for medical reporting in the EHR. Checklists or lists and free-text reports are considered medical reports. Free-text reports are used frequently and, unlike the (check)lists, have no structure. After analyzing several free-text reports of different healthcare professionals, it was decided to focus solely on the improvement of free-text reports, because they offer complete freedom of writing. In order to measure quality and improvement of quality, we need to investigate what makes a good medical report.

Good medical reports

The requirements a medical report should meet are found in medical guidelines and papers. In total we used 8 references to make a list of requirements (appendix A). The requirements that occurred in at least 4 of the 8 references are legible (7), clear (6), objective (4), contemporary (4), accurate (4) and lack of derogatory comments (4).

Legible means that healthcare professionals need to write documents in readable handwriting. There should be no confusion about the written documentation when it is read at a later stage. Other healthcare professionals should not have to ask the author what is stated in the document because of the handwriting (Douglas-Moore et al., 2014; Verpleegkundigen & Verzorgenden Nederland, 2011). A medical record should be clear when read by other healthcare professionals. Unknown terminology should be avoided in order to prevent ambiguity. A clear medical record is also achieved by using correct language, aiming for few to no spelling errors. This is all to avoid misinterpretation by healthcare professionals ("Rapporteren in de Zorg," 2014).

Objective is covered when a healthcare professional writes down all the findings about a specific patient without his personal opinion on the matter. This also applies to personal work notes, mnemonics, suspicions or questions. However, personal notes that are in the interest of the patient should be recorded in the patient's record; it has to be clear that these are not findings (Verpleegkundigen & Verzorgenden Nederland, 2011).

A record is contemporary when it is written during the patient's visit or as soon as possible after seeing the patient. This is to avoid forgetting details that might influence the patient's care (Good Medical Practice Australia, 2014; "Rapporteren in de Zorg," 2014).

A record is accurate when the information in the record is detailed and precise. The record should not be pages long, but long enough to cover the important findings with specificity (Guidelines for Medical Record and Clinical Documentation, 2007).

The lack of derogatory comments or remarks is meant to ensure that medical records show respect towards the patient (Good Medical Practice Australia, 2014; "Rapporteren in de Zorg," 2014). Comments that are likely to embarrass, humiliate or anger a patient are inappropriate to write in a medical record. Such notes may also concern colleagues, for example when the author writes that his colleague arrived late at 10.15 am instead of simply mentioning that he arrived at 10.15 am ("Good Clinical Documentation - Its Importance from legal perspective," 2013).

Based on the requirements found in the literature, the features for the EHR were created, as were the general questionnaire and questionnaire 1. The terms found in the literature were an incentive to ask healthcare professionals what they believe makes a good medical report. This is asked in the general questionnaire as well as in questionnaire 1. This way, a comparison can be made between the literature and the opinions of the healthcare professionals.

General questionnaire

To get general insight into reporting in the Netherlands, we want to assess the current state of how free-text reports are written and how healthcare professionals use them. This is done with an online questionnaire consisting of 17 questions. There are 5 preliminary questions about age, gender and demographics, followed by the remaining 12 questions, which consist of 10 multiple choice and 2 open questions (appendix B). The topics covered in the questionnaire are writing and use of reports, quality and feedback.

Writing reports covers questions related to the participant's own report writing. Use is about the participant's use of reports written by colleagues. Quality covers the participant's perception of the quality of his colleagues' reports, how the quality of reports is monitored and whether they have suggestions to ensure and improve the quality of reports. The last topic, 'feedback', contains questions about whether the participant receives feedback from, or gives feedback to, colleagues about written reports.

All questions were mandatory, except for the last question, which asks whether the participant has suggestions to ensure and improve the quality of reports.

The questionnaire was sent to the general e-mail addresses of several hospitals throughout the Netherlands, with the request to distribute it to each healthcare professional who uses free-text reports. The questionnaire was also sent to 29 general practitioners (GPs) spread throughout the Netherlands. A GP serves as a gatekeeper of the Dutch healthcare system: people who are in need of medical care will first encounter their GP (Schäfer et al., 2010). GPs are expected to have much experience in writing free-text reports and are therefore considered valuable subjects for this questionnaire.

Design of the game elements

In gamification, game thinking and game elements are applied in a non-game context (e.g. Nike+, Foursquare); these elements exist as added features of existing applications and websites (Marczewski, 2013).

Since the EHR is an existing application for medical documentation, the choice of using gamification was obvious. The EHR will have new added features, where users can evaluate reports by clicking a button 'Rate this report'. There will also be another added button, 'Leaderboards'. The terminology is written in Dutch, since it was designed for a Dutch health center, but is translated into English for this paper.

Every healthcare professional has a personal log-in for the EHR. They can see reports written by their colleagues only if they are authorized to read them. This means that the medical reports displayed on the screen differ for each health professional.

Every report has a 'Rate this report' button. Rating is optional: health professionals can choose whether or not to rate a certain report. After clicking the button, a pop-up is displayed on the screen with 6 mandatory options. First, their opinion is asked, where they can provide feedback for the particular report they have chosen. The options are objective, derogatory comments, structured, lack of detail and clear. For each option the user needs to answer yes or no according to their perception of the particular report. After filling in the options, the user can give a 1 to 5 star rating based on their perception of the report.

The chosen terms were selected according to their occurrence in the literature. The requirements with the highest occurrence were first evaluated for relevance to the application. Legible and contemporary were not applicable, because the application is an EHR and all documentation is done electronically. This means the problems that can occur with unreadable handwriting do not arise in this situation, and a timestamp is added when a report is written, which makes it traceable whether the report was written contemporarily. Therefore, we chose not to include the options legible and contemporary. We kept objective, clear and lack of derogatory comments as mentioned in the literature. The requirement accurate we changed into 'lack of detail' to have a negative option for the user to choose. In addition to these 4 options, we chose the requirement 'structured'. This term occurred in two references, and we believe it is applicable to this EHR because, while reviewing several medical reports, we found some with structure and others without.

The user has 3 positive options (objective, structured, clear) and 2 negative options (derogatory comments, lack of detail) to answer 'yes' or 'no' to. This was chosen to keep the user attentive and to prevent them from simply filling in only 'yes' or only 'no'.

The rating method aims to stimulate users to rate reports of their colleagues, so that they can give each other anonymous feedback to improve their medical reports. The user can only rate medical reports of colleagues. The person who rates a report sees who the author is, but the author will not know who rated him.
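To make the rating flow concrete, the sketch below models a single submitted rating as described above: five mandatory yes/no options plus a 1 to 5 star value, with the rater stored but never shown to the author. This is a minimal illustration in Python; the class, field and function names are our own assumptions and do not reflect the actual PinkRoccade Healthcare implementation.

```python
from dataclasses import dataclass
from datetime import datetime

# The five mandatory yes/no options described above.
OPTIONS = ("objective", "derogatory_comments", "structured", "lack_of_detail", "clear")


@dataclass
class ReportRating:
    """One rating of a free-text report (hypothetical model, not the real EHR schema)."""
    report_id: str
    author_id: str      # visible to the rater
    rater_id: str       # stored for aggregation, but never shown to the author
    stars: int          # 1 to 5 star value
    options: dict       # option name -> True ("yes") / False ("no")
    rated_at: datetime


def submit_rating(report_id, author_id, rater_id, stars, options):
    """Validate the input and build a rating; all six inputs are mandatory."""
    if rater_id == author_id:
        raise ValueError("Users can only rate reports written by colleagues.")
    if not 1 <= stars <= 5:
        raise ValueError("The star rating must be between 1 and 5.")
    if set(options) != set(OPTIONS):
        raise ValueError("All five yes/no options must be answered.")
    return ReportRating(report_id, author_id, rater_id, stars, dict(options),
                        datetime.now())
```

A call such as submit_rating("rep-12", "nurse-a", "nurse-b", 4, {"objective": True, "derogatory_comments": False, "structured": True, "lack_of_detail": False, "clear": True}) would correspond to one filled-in pop-up.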

On the leaderboard page, three types of leaderboards can be viewed. These three leaderboards show the users who are rated best, the best rated reports and the best rated report types. Each leaderboard displays a top 5 for its topic. Each user also has two personal pages where they can see their personal scores and reports. The 'personal score' page is unique for each user. The feedback they receive is visualized in two bar charts: the first bar chart shows what is going well and the second bar chart shows what can be improved. These visualizations are based on the overall averages of the reports that have been rated. The average star rating received over all rated reports is also displayed on this screen.
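As an illustration of how the leaderboard of best rated users and the 'personal score' page could be derived from stored ratings, the sketch below computes a top 5 of authors by average star rating and a per-author summary (average stars plus the share of 'yes' answers per option, which would feed the two bar charts). It assumes rating records shaped like the hypothetical ReportRating objects sketched earlier (with author_id, stars and options attributes) and is not the vendor's actual aggregation logic.

```python
from collections import defaultdict
from statistics import mean


def top5_best_rated_authors(ratings):
    """Leaderboard: the five authors with the highest average star rating."""
    stars_by_author = defaultdict(list)
    for r in ratings:
        stars_by_author[r.author_id].append(r.stars)
    averages = {author: mean(stars) for author, stars in stars_by_author.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)[:5]


def personal_score(ratings, user_id):
    """Personal score page: average stars and per-option 'yes' percentage for one author."""
    mine = [r for r in ratings if r.author_id == user_id]
    if not mine:
        return None
    option_percentages = {
        name: 100 * sum(r.options[name] for r in mine) / len(mine)
        for name in mine[0].options
    }
    return {
        "average_stars": round(mean(r.stars for r in mine), 1),
        "times_rated": len(mine),
        "option_percentages": option_percentages,  # basis for the two bar charts
    }
```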

The second personal page is 'your reports', on which users can review their own reports. All reports written by the user are displayed per week, showing whether they have been rated by others. If a report has been rated, the user sees the average star rating, the average feedback and the number of times that specific report has been rated. In summary, the changes made in the EHR are:

• Two new buttons, 'Rate this report' and 'Leaderboards'.

• A pop-up appearing after clicking the 'Rate this report' button, where options are presented to the user to rate a medical report.

• A pop-up appearing after clicking the 'Leaderboards' button, where three tabs are available to navigate through the screens.

• Three screens that appear after selecting the tabs in the 'Leaderboards' pop-up, where leaderboards, personal scores and the user's own written reports with feedback are displayed.

Questionnaire 1

This questionnaire will be sent out to the participants before they use the EHR with the game elements. It is an extended version of the general questionnaire, with more specific questions about the same topics: writing and use of reports, quality and feedback. In addition, this questionnaire has a topic 'personal thoughts', where the participant is specifically asked for their thoughts on certain issues. The goal is to get insight into reporting in the participating health center. We want to assess the current state of how they write and use free-text reports. This online questionnaire consists of 28 questions. There are 4 preliminary questions about age, gender, department and job description. The remaining 24 questions are divided into 11 multiple choice and 13 open questions (appendix C). The topics covered in the questionnaire are writing and use of reports, personal thoughts, quality and feedback.

Questionnaire 2

Questionnaire 2 serves as an evaluation after using the EHR with the game elements (appendix D). With this questionnaire, we aim to investigate the perceived usefulness (PU) and perceived ease of use (PEOU) according to the Technology Acceptance Model (TAM) (Davis, 1989). Questionnaire 2 is also designed so that its results can be compared with those of questionnaire 1.

The goal of this questionnaire is to assess whether the participants found the added features useful (PU), whether the new application was easy to use (PEOU) and how the perceived quality of reports has changed.

Experiment protocol and setting

We expect that the quality of free-text medical reports will improve by using game elements in the EHR.

We will use the general questionnaire to get general insight about medical reporting in the Netherlands.

We want to measure whether the perceived quality of medical reports has changed with gamification. To compare the outcomes, we present each participant with two questionnaires and the application with the added game elements. Three departments of the participating health center were willing to take part in the experiment, which took place in July 2014 (table 1).

Table 1 – Time span of each activity

Activity          Time span
Questionnaire 1   July 1st - July 8th, 2014
Experiment        July 7th - July 20th, 2014
Questionnaire 2   July 21st - July 30th, 2014

Participants were given 8 days to fill out questionnaire 1 before using the application with the game elements. The experiment started on July 7th, while questionnaire 1 was still open to fill out; this was done exceptionally for one department, which started the experiment on July 9th. The other two departments started on July 7th and were allowed to fill out questionnaire 1 until July 7th, before using the EHR with the new features. During the experiment, the 'rating of reports' was free to use for every health professional, even those who did not fill out questionnaire 1. From July 21st until July 31st, the participants of questionnaire 1 were asked to fill out questionnaire 2.

EXPERIMENT RESULTS AND ANALYSIS

The results of the general questionnaire, questionnaire 1 and questionnaire 2 are presented in this section. The complete analyses of the Likert-type questions from the questionnaires are in appendices B, C and D.

Results of the general questionnaire

Participants

In total, 29 subjects (22 females and 7 males) participated in the general questionnaire: 10 nurses, 6 GPs, 5 dieticians, 2 internists and 6 other physicians with different specialisms. Their ages ranged from 26 to 62, with an average of 44.7 years.


Writing and use of reports

All 29 health professionals write reports daily; the number of reports varies from fewer than five to more than twenty a day. 90% write reports in an electronic health record, and 48% use reports of colleagues on a daily basis. The participants find reports of their colleagues useful when they need to write a follow-up report about the same patient. They consider the following requirements important for a good medical report: clear, short, concise, complete and relevant. The order of these requirements is based on the occurrence of the words: clear and short were mentioned by 14 participants as important, concise by 7, and complete and relevant by 5 participants.

Quality of reports

The average score they gave to the quality of their colleagues' reports is 3.6 out of 5, where 5 is perceived as high quality. They also mention that even though the reports are overall of high quality, it can sometimes be the opposite, depending on which department or health professional the reports come from. A question was asked about the monitoring of the quality of medical reporting: 42% do not know whether this is monitored, 10.5% say that the quality is never monitored and 31% state that there are audits from time to time.

Feedback

The participants were asked whether they give or receive feedback about their written reports. In total, 24.1% never receive feedback, 37.9% rarely, 34.5% monthly and 3.5% said they receive weekly feedback on their reports. Regarding giving feedback to colleagues about their reports, 13.8% never give feedback, 41.4% rarely, 34.5% monthly and 10.3% give feedback weekly.

Results of questionnaire 1

Questionnaire 1 was sent out to every healthcare professional in three departments of a health center. The conditions were to fill out questionnaires 1 and 2 and to participate in the experiment.

Participants

The 31 participants (28 females and 3 males) consisted of 12 caretakers, 10 care coordinators, 7 nurses and 2 heads of department. Their ages ranged from 21 to 58, with an average of 38.

Writing and use of reports

The majority (55%) writes 5 to 10 reports a day on average. Based on the questionnaire, 58% (18 of 31) said that they take 5 minutes or less to write a medical report, and 51.6% said that they use medical reports of their colleagues on a daily basis. They mention that at the beginning of their shifts they have to read the newest medical reports, in order to be up to date on what has happened since their last shift.

Personal thoughts

The participants consider a medical report good when it is clear (15), concrete (12), objective (10), short (7) and concise (6). All participants mentioned at least three requirements for a good medical report. In response to a question about how they think they could report better, they named three possible improvements. Firstly, 12 participants mentioned that if they had more time to write reports, they could be more attentive when writing and would have the opportunity to review their written report. Secondly, 10 participants mentioned that they should report more according to their work plan. Also, 9 participants mentioned that it would be helpful if there were some sort of assistance while reporting. In their opinion, this could be a spell checker, a guideline or feedback that they could access during writing.

Quality of reports

39% mention they do not know how the quality of the reports is supervised, while other participants (39%) mention that this is audited at some point. The average perceived quality of reports is 3.3 out of 5, and the average usefulness of the reports is 3.7 out of 5. The quality of the reports is influenced by the lack of time to write a report; participants often have to perform other tasks or see other patients (high work pressure). This sometimes results in incomplete reports with unclear sentences, unnecessary information and spelling errors. The participants mention that this could be improved with instructions, feedback or a spell checker, by becoming more aware and by taking their time when writing medical reports.

Feedback

58.1% of the participants mention that they do not receive feedback on their written reports (average of 2.5 on a 5-point scale). Others mention that they sometimes receive feedback, but not on a regular basis. They provide more feedback than they receive (average of 2.9 on a 5-point scale). Both receiving and providing feedback is mostly done verbally, face to face, or occasionally by e-mail. The participants are pleased to receive feedback and wish they received it more often. They find it stimulating to receive feedback, because afterwards they are more aware while writing reports.

Results of Questionnaire 2

For questionnaire 1 there were 31 participants, but not all 31 were able to fill out questionnaire 2. Questionnaire 2 had 14 participants (12 females and 2 males), all of whom had also filled in questionnaire 1.

Perceived usefulness

The participants rated the added value of the rating of reports for their own written reports at 3.6 out of 5, where 5 means complete added value. The rating of reports scored 3.4 out of 5 for its added value for their colleagues in writing medical reports. They scored 2.9 out of 5 when asked whether the provided personalized feedback assisted them in writing better reports.

The participants were asked whether they would keep using the rating of reports method; they scored this 3.2 out of 5. They would use this method preferably for a specified period, such as weekly, monthly or yearly. In the additional comments section they mention that there was not much time to rate reports. Also, some of the participants noticed that the rating of reports was not a topic of conversation, which they interpreted as a lack of involvement and motivation among their colleagues to rate reports. They remark that a large part of the staff was on holiday or went on holiday during the experiment period.

Perceived ease of use

The participants scored the ease of use of the rating of reports with an average of 3.6 out of 5. To the question whether the rating of reports obstructs them in their work, they scored 2.3 out of 5, where 5 is considered very obstructive. Eight participants said they never looked at the leaderboards and the feedback; the other six participants said that they looked at the leaderboards fewer than 5 times a week.

Quality of reports

The participants scored an average of 2.6 out of 5 on whether the quality of medical reports improved through the rating of reports. The median was 3, with a variance of 1.49. They noticed no big improvement, but commented that it makes them more aware while writing a medical report. They mentioned that they wanted to do their best.
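For reference, summary statistics like the ones above (an average of 2.6, a median of 3 and a variance of 1.49) are computed directly from the raw 5-point Likert answers. The sketch below shows one way to do this in Python; the answer list is invented purely for illustration and will not reproduce the exact values reported here, and since the paper does not state whether the population or the sample variance was used, both are shown.

```python
from statistics import mean, median, pvariance, variance


def likert_summary(answers):
    """Summarize 5-point Likert answers with mean, median and both variance variants."""
    return {
        "mean": round(mean(answers), 2),
        "median": median(answers),
        "population_variance": round(pvariance(answers), 2),
        "sample_variance": round(variance(answers), 2),
    }


# Invented answers from 14 respondents, purely to show the calculation.
example_answers = [3, 2, 4, 1, 3, 2, 3, 4, 2, 1, 3, 3, 2, 5]
print(likert_summary(example_answers))
```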

Feedback

The participants believe that the rating of reports is a safe way to give feedback on medical reports of colleagues. The main reason is anonymity, which gives them the opportunity to be honest. The feedback was presented on a personalized page with bar charts; participants scored the usefulness of the bar charts at 3.4 out of 5.

Data

There were 31 participants for questionnaire 1, all of whom were asked to participate in the rating of reports. 14 participants remained to fill in questionnaire 2.

According to the data retrieved from the database, a total of 24 health professionals actually participated in the experiment. In the first week of the experiment 16 participants rated reports, and in the second week there were also 16 participants who rated reports. These 16 participants in week 2 were not all the same as in week 1; 11 participants rated reports in both weeks. The reports of 69 health professionals were rated, of whom 21 were participants. In week 1, 98 medical reports were rated, and in week 2, 175 medical reports were rated. This results in 297 ratings of reports over the whole experiment period.

Table 2 shows the percentage of each star value in relation to the total number of ratings per week. The share of three stars increased by 7 percentage points in week 2, and the share of four stars decreased by 7.4 percentage points.

Table 2 – Percentage of stars given per week

Star value   Week 1 (%)   Week 2 (%)   Difference (%)

1            2.0          2.9          +0.8
2            16.3         13.7         -2.6
3            29.6         36.6         +7.0
4            42.9         35.4         -7.4
5            9.2          11.4         +2.2

The five options (objective, derogatory comments, structured, lack of detail and clear) were filled in at the same time as the star rating. When 1 or 2 stars are given to a medical report, the negative options (derogatory comments and lack of detail) are selected more often than for reports with 3 stars or higher. When three or four stars were given to a report, it could still have derogatory comments or be lacking in detail. This was not the case when a report was given 5 stars: in both week 1 and week 2 the negative options were never selected for 5-star reports.
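The percentages in Table 2 are per-week star distributions together with their week-over-week difference in percentage points. The sketch below shows how such a table could be derived from lists of star values, one list per week; the star values used here are invented for illustration and are not the experiment's data.

```python
from collections import Counter


def star_distribution(stars_in_week):
    """Percentage of each star value (1-5) within one week of ratings."""
    counts = Counter(stars_in_week)
    total = len(stars_in_week)
    return {star: 100 * counts.get(star, 0) / total for star in range(1, 6)}


def weekly_comparison(week1_stars, week2_stars):
    """Rows of Table 2: week 1 %, week 2 % and the difference in percentage points."""
    w1, w2 = star_distribution(week1_stars), star_distribution(week2_stars)
    return [(star, round(w1[star], 1), round(w2[star], 1), round(w2[star] - w1[star], 1))
            for star in range(1, 6)]


# Invented star values, only to demonstrate the calculation.
week1 = [4, 3, 4, 2, 5, 3, 4, 1, 3, 4]
week2 = [3, 3, 4, 5, 2, 3, 4, 3, 5, 3]
for row in weekly_comparison(week1, week2):
    print(row)
```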

DISCUSSION

Medical reports are communication tools to exchange patient information between different healthcare professionals. The requirements for a good medical report vary between what is found in the literature and what participants mentioned in the questionnaires. The main requirements found in the literature were legible, clear, objective, contemporary, accurate and lack of derogatory comments. The requirement legible is still mentioned in recent guidelines from 2012 to 2014; this can be solved by using an EHR, which is what is mostly used in the Netherlands. The health professionals who participated in the general questionnaire named clear, short, concise, complete and relevant as the most important elements of a good medical report. The participants of questionnaire 1 mentioned clear, concrete, objective, short and concise as essential requirements. The requirements of the two groups of participants are more similar to each other than to the requirements found in the literature. Both groups agree on being clear, short and concise when writing medical reports.

Based on the general questionnaire and questionnaire 1, health professionals tend to be satisfied with the quality of the medical reports of their colleagues. However, they mention that the quality depends on the author and on the health facility, because each physician has his own style of reporting. It is surprising that a large group of health professionals in both groups do not know whether the quality of medical reports is monitored. Even more surprising is that some participants mention that quality is never monitored, considering that in the general questionnaire and questionnaire 1 respectively 48% and 51.6% use medical reports of their colleagues daily. Without monitoring, poor medical documentation might be hard to detect and address. Both groups rarely receive any feedback about their written reports, and the respondents do not provide feedback regularly to their colleagues either. If feedback is provided, they prefer to do it verbally or otherwise by e-mail. Both groups mentioned that they would like to receive feedback regularly to keep them aware of writing good medical reports.

For questionnaire 2 there were 14 participants, 54.8% fewer than for questionnaire 1. As observed in questionnaire 2, the participants believe the rating of reports is of added value for them and for their colleagues in reporting better. However, they were not motivated by the leaderboards to report better; over 50% never looked at the leaderboards. This might be because the rating of reports was on a voluntary basis, only a small group within the departments participated in the experiment (which was less motivating), and a large part of the staff was on holiday or about to go on holiday during the experiment period. The participants felt that there was no improvement in the quality of medical reports, but mentioned that rating reports, and knowing that they can be rated, makes them more aware when they are writing reports. Participants mentioned that rating reports of their colleagues is a good way to provide feedback, because it is anonymous, which gives them a chance to be honest. The participants would like to keep using the rating system, but they would prefer the whole staff to be involved and to be able to give each other feedback verbally, for example in a monthly meeting. The duration of the experiment was two weeks. These two weeks were unfortunately scheduled in a holiday period, which resulted in a high drop-out of participants and might have influenced their motivation and involvement in rating medical reports. If the experiment had been held at a different time of the year, we could expect more health professionals to have been able to participate. Since the rating of reports was completely new to the participants, it likely took them some time to adjust to this method. Based on the ratings measured in week 2, which almost doubled compared to week 1, we could expect the use of the rating of reports to grow over time. Therefore, if the duration of the experiment had been extended, we might have measured quality improvement in medical reports.

CONCLUSION

Patients benefit from good medical documentation provided by health professionals (Pullen, 2006). Medical reporting is considered one of the basic responsibilities, but its importance is often not reflected in practice (Dehghan et al., 2013). The reasons mentioned in the literature for poor documentation are lack of time and a reliance on verbal communication (Stewart, 2007).

This is also reflected in this study: participants mention lack of time and high work pressure as the main reasons. This results in reports with unclear sentences, unnecessary information, spelling errors and incompleteness. The participants suggest that this could be improved with instructions, feedback or a spell checker, to become more aware and to write qualitatively better reports.

In this paper we aimed to investigate whether the use of game design elements could improve the quality of medical reporting in free-text reports. The game elements used were giving stars on a 5-star rating scale, personalized feedback and leaderboards. Based on the answers given by the participants of questionnaire 2, there was no perceived quality improvement in medical reports, but during the experiment the participants became more aware of writing good medical reports.

Based on the results, the participants want to keep using the rating of reports method, but it should be extended so that it includes the whole staff of the department. They would prefer this method in combination with a weekly or monthly meeting where all the health professionals are gathered to discuss the quality of the medical reports. What makes a medical report good in the perception of a health professional varies, and should be adapted to the local circumstances of each health facility.

We recommend repetition of a similar experiment with a group, preferably a whole department, that will participate in the complete process of the experiment to prevent drop-outs. The duration of the experiment should also be extended in order to possibly measure quality improvement in medical reports over time. We expect more motivation if a whole community is involved in the process. Further investigation of the use of gamification as a method to improve quality in medical reporting is needed to make the 'dull and boring activity' of reporting (Bach et al., 2012, p. 25) interesting. It also needs to be investigated which game design elements work best for medical reporting. We believe that rating reports could serve as a fun and engaging form of an audit and feedback approach, but further research is necessary to prove its effectiveness.

ACKNOWLEDGMENTS

I offer my sincerest gratitude to my supervisors Daniel Buzzo and Dr. Frank Nack from the University of Amsterdam and to my supervisor at PinkRoccade Healthcare, Brian Bonouvrie.

REFERENCES

Abdelrahman, W., & Abdelmageed, A. (2014). Medical record keeping: clarity, accuracy, and timeliness are essential. Retrieved July 27, 2014, from http://careers.bmj.com/careers/advice/view-article.html?id=20015982

Bach, C., Winckler, M., Gatellier, B., & Bernhaupt, R. (2012). Challenges for the Gamification of Incident Reporting Systems, 24–26.

Bijl, F., Van den Doel, E. M. H., Edixhoven, J., Heeg, M., Hodiamont, P. P. G., De Keizer, G., … Van Wijngaarden, G. K. (2008). Richtlijn medisch specialistische rapportage (pp. 1–23).

Blohm, I., & Leimeister, J. M. (2013). Gamification. Business & Information Systems Engineering, 5(4), 275–278.

Case Di Leonardi, B. (2009). Professional Documentation: Safe, Effective, and Legal.

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319. doi:10.2307/249008

Dehghan, M., Dehghan, D., Sheikhrabori, A., Sadeghi, M., & Jalalian, M. (2013). Quality improvement in clinical documentation: does clinical governance work? Journal of Multidisciplinary Healthcare, 6, 441–450. doi:10.2147/JMDH.S53252

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness. In Proceedings of the 15th International Academic MindTrek Conference on Envisioning Future Media Environments - MindTrek '11 (p. 9). New York, New York, USA: ACM Press. doi:10.1145/2181037.2181040

Documentation Guidelines for Registered Nurses. (2012) (p. 31).

Douglas-Moore, J., Lewis, R., & Patrick, J. R. (2014). The Importance of Clinical Documentation. Bulletin of The Royal College of Surgeons of England, 96(January), 18–20.

Edwards, M., & Moczygemba, J. (2004). Better Documentation, 23(4), 329–333.

Good Clinical Documentation - Its Importance from legal perspective. (2013), (December), 2.

Good medical practice. (2013) (pp. 1–27).

Good Medical Practice Australia. (2014) (pp. 1–25).

Guidelines for Medical Record and Clinical Documentation. (2007) (pp. 1–16).

Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does Gamification Work? – A Literature Review of Empirical Studies on Gamification. In 2014 47th Hawaii International Conference on System Sciences (pp. 3025–3034). IEEE. doi:10.1109/HICSS.2014.377

Harman, L. B., Flite, C. A., & Bond, K. (2012). Electronic Health Records: Privacy, Confidentiality, and Security. Virtual Mentor, 14(9), 712–719.

Heyman, B. (2014). Foursquare Committed Suicide, Signaling the End of the Gamification Fad. Retrieved July 27, 2014, from http://www.business2community.com/social-media/foursquare-committed-suicide-signaling-end-gamification-fad-0888331?tru=bn3ADR#CQ9A1Yg3fXGCM1ec.99

Hoerbst, A., & Ammenwerth, E. (2010). Electronic health records. A systematic review on quality requirements. Methods of Information in Medicine, 49(4), 320–336. doi:10.3414/ME10-01-0038

Hurst, D. (2013). Audit and feedback had small but potentially important improvements in professional practice. Evidence-Based Dentistry, 14(1), 8–9. doi:10.1038/sj.ebd.6400910

Ivers, N., Jamtvedt, G., Flottorp, S., Young, J. M., French, S. D., O'Brien, M. A., … Oxman, A. D. (2012). Audit and feedback: effects on professional practice and healthcare outcomes (Review). Cochrane Database of Systematic Reviews, (6).

Lindqvist, J., Cranshaw, J., Wiese, J., Hong, J., & Zimmerman, J. (2011). I'm the Mayor of My House: Examining Why People Use foursquare - a Social-Driven Location Sharing Application. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11) (pp. 2409–2418). New York, New York, USA: ACM. doi:10.1145/1978942.1979295

Marczewski, A. (2013). What's the difference between Gamification and Serious Games? Retrieved from http://www.gamified.co.uk/2013/02/25/gamification-and-serious-games/#.U-dbhvl_seg

Mishra, A. K., Bhattarai, S., Bhurtel, P., Bista, N. R., Shrestha, P., Thakali, K., … Pathak, S. R. (2009). Need for improvement of medical records. JNMA; Journal of the Nepal Medical Association, 48(174), 103–106.

Muntean, C. I. (2011). Raising engagement in e-learning through gamification. In 6th International Conference on Virtual Learning ICVL (pp. 323–329).

Pullen, I. (2006). Improving standards in clinical record-keeping. Advances in Psychiatric Treatment, 12(4), 280–286. doi:10.1192/apt.12.4.280

Rapporteren in de Zorg. (2014). Retrieved May 09, 2014, from http://www.btsg.nl/infobulletin/rapporteren.html

Schäfer, W., Kroneman, M., Boerma, W., van den Berg, M., Westert, G., Devillé, W., & van Ginneken, E. (2010). The Netherlands: health system review. Health Systems in Transition, 12(1), v–xxvii, 1–228.

Senapati, L. (2013). Boosting User Engagement through Gamification, (November).

Starey, N. (2001). What is clinical governance?, 1(12), 1–8.

Stewart, R. (2007). Clinical Documentation - Putting the House in Order (p. 8). Toronto.

Thomas, J. (2009). Medical Records and Issues in Negligence. Indian Journal of Urology, 25(3), 384–388. doi:10.4103/0970-1591.56208

Van der Veer, S. N., de Keizer, N. F., Ravelli, A. C. J., Tenkink, S., & Jager, K. J. (2010). Improving quality of care. A systematic review on how medical registries provide information feedback to health care providers. International Journal of Medical Informatics, 79(5), 305–323. doi:10.1016/j.ijmedinf.2010.01.011

Verpleegkundigen & Verzorgenden Nederland. (2011). Richtlijn Verpleegkundige en verzorgende verslaglegging (pp. 1–46).


Appendix A – Requirements of medical documentation

Table 3 – Requirements for good medical documentation

Requirements | References

clear, accurately and legible, contemporary, relevant, decisions, information given to patients, timestamp, author | (Good medical practice, 2013), p. 9

clear, concise, complete, contemporary, consecutive, correct, comprehensive, collaborative, patient-centered, confidential | (Guidelines for Medical Record and Clinical Documentation, 2007), p. 9-15

clear, accurate, legible, structured, author, chronologically | (Douglas-Moore et al., 2014)

accurate, clear, up-to-date, legible, relevant, information given to the patient, securely, sufficient, contemporary, accessible for the patient, lack of derogatory remarks | (Good Medical Practice Australia, 2014), p. 18

accurate, contemporary, objective, detailed, legible, lack of derogatory comments | ("Good Clinical Documentation - Its Importance from legal perspective," 2013)

careful, accurate, verifiable, systematic, complete, comprehensive, concise, clear, objective, specific, legible, traceable, disambiguity, accessible for the patient, lack of derogatory comments, avoid unknown terminology | (Verpleegkundigen & Verzorgenden Nederland, 2011), p. 5-7

correct language, legible, lack of derogatory comments, objective, complete, structured, clear, up-to-date, accessible for the patient | ("Rapporteren in de Zorg," 2014)

expertise, careful, objective, relevance, efficiency, consistency


Appendix B – General questionnaire

1. What is your age?
2. Gender
3. Where do you work?
4. At which department do you work?
5. What is your job (in this department)?
6. I write ... reports daily on average. (multiple choice)
7. I usually write a report of a patient/client within ... (multiple choice)
8. I write reports in a ... (multiple choice)
9. I believe I write good reports. (Likert scale 5)
10. What is, in your opinion, a good report? (open question)
11. I ... make use of reports which have been written by my colleagues. (multiple choice)
12. I believe the reports of my colleagues are useful, for example when I need to write a follow-up report about the same patient/client. (Likert scale 5)
13. I believe the quality of the reports which are written by my colleagues is high. (Likert scale 5)
14. How is the quality of the reports monitored in your department? (multiple choice)
15. I receive ... feedback from my colleagues about the reports which I wrote. (multiple choice)
16. I give ... feedback to my colleagues about the reports which they wrote. (multiple choice)
17. Do you have any suggestions to ensure and improve the quality of the reports (in another way)? (open question)

Table 4 – Questions of the general questionnaire per topic

Topic Question (Q)

Writing reports 6,7,8,9,10

Use 11,12

Quality 13,14,17

Feedback 15,16

Table 5 – Results of Likert scale questions of the general questionnaire

Question Median Mean Variance
9 4 3.7 0.44
12 4 4 0.52
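
The medians, means, and variances reported in Tables 5, 7 and 9 summarize answers given on a 5-point Likert scale. As a minimal sketch of how such summary statistics can be reproduced, the Python fragment below computes them for a single item; the response vector is hypothetical example data, and it is assumed (the text does not state this) that the reported variance is the population variance:

# Minimal sketch: summary statistics for one 5-point Likert item.
# The responses are hypothetical example data, not the study data.
# Assumption: the reported variance is the population variance, not the sample variance.
from statistics import mean, median, pvariance

responses = [4, 4, 3, 4, 5, 3, 4, 3, 4, 3]  # hypothetical answers on a 1-5 scale

print("median  ", median(responses))
print("mean    ", round(mean(responses), 2))
print("variance", round(pvariance(responses), 2))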


Appendix C – Questionnaire 1

1. What is your age?
2. Gender
3. At which department (location) do you work?
4. What is your job (in this department)?
5. I write ... reports daily on average. (multiple choice)
6. I usually write a report of a patient/client within ... (multiple choice)
7. I believe I write good reports. (Likert scale 5)
8. What is, in your opinion, a good report? (open question)
9. How do you think you could report better? (open question)
10. I enjoy writing reports. (Likert scale 5)
11. I ... make use of reports which have been written by my colleagues. (multiple choice)
12. How do you make use of reports of your colleagues? (open question)
13. I believe the reports of my colleagues are useful, for example when I need to write a follow-up report about the same patient/client. (Likert scale 5)
14. When is a report useful/not useful, in your opinion? (open question)
15. I believe the quality of the reports which are written by my colleagues is high. (Likert scale 5)
16. What influences the quality of the reports? (open question)
17. In what way can the quality of the reports be improved, in your opinion? (open question)
18. How is the quality of the reports monitored in your department? (multiple choice)
19. I would find it interesting to rate the reports of my colleagues. (Likert scale 5)
20. Could you think of a way in which you would rate reports? (open question)
21. I receive feedback from my colleagues about the reports which I wrote. (Likert scale 5)
22. How often do you receive feedback? (open question)
23. How do you receive feedback from your colleagues about your written reports? (open question)
24. What do you think of this feedback? (open question)
25. I give feedback to my colleagues about the reports which they wrote. (Likert scale 5)
26. How often do you give feedback? (open question)
27. How do you give feedback on your colleagues’ reports? (open question)
28. Could you think of another way in which you can give feedback to your colleagues? (open question)

Table 6 – Questions of questionnaire 1 per topic

Topic Question (Q)
Writing reports 5,6,7,10
Personal thoughts 8,9,19,20
Use 11,12,13,14
Quality 15,16,17,18
Feedback 21-28

Table 7 – Results of Likert scale questions of questionnaire 1

Question Median Mean Variance
7 4 3.52 0.52
10 3 3.52 0.46
13 4 3.74 0.86
15 3 3.32 0.29
19 3 3.55 0.72
21 2 2.45 1.26


Appendix D – Questionnaire 2

1. What is your age?
2. Gender
3. At which department (location) do you work?
4. What is your job (in this department)?
5. I believe ‘rating of reports’ is easy. (Likert scale 5)
6. I believe ‘rating of reports’ is time consuming. (Likert scale 5)
7. I believe ‘rating of reports’ obstructs my work. (Likert scale 5)
8. I believe ‘rating of reports’ is of added value for me to report better. (Likert scale 5)
9. I rated reports ... (multiple choice)
10. I believe ‘rating of reports’ is of added value to stimulate others to report better. (Likert scale 5)
11. I looked at my personal scores ... (multiple choice)
12. I enjoy rating reports. (Likert scale 5)
13. What has changed with regard to quality in the past two weeks with the ‘rating of reports’? (open question)
14. The quality of reports has improved because of the ‘rating of reports’. (Likert scale 5)
15. What did you think of the star rating, with which you could rate a report? (open question)
16. I give feedback to my colleagues about their written reports. (Likert scale 5)
17. I receive feedback about my written reports. (Likert scale 5)
18. What do you feel about this form of giving and receiving feedback? (open question)
19. I believe reading my personal feedback is useful. (Likert scale 5)
20. I believe the visualization of the feedback is good. (Likert scale 5)
21. Which of the elements on which you could give feedback on a report did you find useful? (multiple answers possible) (multiple choice)
22. Which elements that you could not give feedback on would you have found useful? (multiple answers possible) (multiple choice)
23. I enjoy reporting more because of the ‘rating of reports’ method. (Likert scale 5)
24. I was motivated to write better reports because of the feedback. (Likert scale 5)
25. The feedback helps me to write better reports. (Likert scale 5)
26. I was motivated to write better reports in order to reach the ‘Top 5’ leader board. (Likert scale 5)
27. I would like to keep using the ‘rating of reports’. (Likert scale 5)
28. How often would you like to use the ‘rating of reports’? (multiple choice)
29. Do you have additional comments about the use, functionality or other matters with regard to the ‘rating of reports’? (open question)

Table 8 – Questions of questionnaire 2 per topic

Topic Question (Q)
Perceived usefulness 6,8,10,25,27
Perceived ease of use 5,7
Use 9,11,12,15,23
Quality 13,14


Table 9 – Results of Likert scale questions of questionnaire 2

Question Median Mean Variance
5 4 3.64 0.40
6 3 2.93 1.15
7 2 2.29 0.53
8 4 3.64 1.48
10 3 3.36 0.86
12 3 2.57 1.49
14 3.5 3.43 1.19
16 3 2.79 0.64
17 2 2.14 0.90
19 3 3.43 1.34
20 3 3.29 1.45
23 3 2.71 1.45
24 3 3.00 1.69
25 3 2.86 1.21
26 2 2.29 1.45
27 3 3.21 0.19
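
Question 26 refers to the ‘Top 5’ leader board that was shown alongside the ‘rating of reports’. The sketch below only illustrates one straightforward way such a leader board could be built from star ratings per report author; the function name, the data structure and the example data are invented for illustration and do not describe the actual EHR implementation:

# Hypothetical sketch: a 'Top 5' leader board from star ratings (1-5) per report author.
# Author names and ratings are invented example data.
from collections import defaultdict

def top_five(ratings):
    """Return up to five (author, average rating) pairs, highest average first."""
    sums = defaultdict(int)    # author -> sum of stars received
    counts = defaultdict(int)  # author -> number of ratings received
    for author, stars in ratings:
        sums[author] += stars
        counts[author] += 1
    averages = {author: sums[author] / counts[author] for author in sums}
    return sorted(averages.items(), key=lambda pair: pair[1], reverse=True)[:5]

example = [("nurse_a", 4), ("nurse_a", 5), ("nurse_b", 3), ("nurse_c", 4), ("nurse_b", 4)]
print(top_five(example))  # [('nurse_a', 4.5), ('nurse_c', 4.0), ('nurse_b', 3.5)]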
