
Peer-reviewing of mHealth applications

Requirements for peer-reviewing mobile health applications

and development of an online peer review tool

Rinke Joost Riezebos

B.Sc. (Hons)

University of Amsterdam

November 2013 - October 2014


Peer-reviewing of mHealth applications

Student

Rinke Joost Riezebos, B.Sc. (Hons)
Student number: 6037798
Academic Medical Center / University of Amsterdam
Meibergdreef 7
1107 AZ Amsterdam
E-mail: r.j.riezebos@amc.uva.nl

Mentor

Gunther Eysenbach, MD
Global Center for eHealth and Innovation
University of Toronto
E-mail: geysenba@uhnres.utoronto.ca
Phone: +1 416-340-4800 x6427

Tutor

Dr. Niels Peek / S. K. Medlock
Faculty of Medicine
Department of Medical Informatics
Academic Medical Center / University of Amsterdam
E-mail: n.b.peek@amc.uva.nl / s.k.medlock@amc.uva.nl
Phone: +31 20 5667872

Location of Scientific Research Project

Centre for Global eHealth Innovation
Toronto General Hospital
R. Fraser Elliott Building, 4th Floor
190 Elizabeth Street
Toronto, ON M5G 2C4
Canada

Faculty of Medicine
Department of Medical Informatics
Academic Medical Center / University of Amsterdam
Meibergdreef 15
1105 AZ Amsterdam
The Netherlands


PREFACE

This thesis marks the end of my student life at the University of Amsterdam, and in particular of the Medical Informatics program. I have had a wonderful time studying, and even though I wanted to study medicine when I started, I definitely found my place.

Medical informatics is an overwhelming, complicated, but highly interesting and innovative field that will, I think, always provide us with new challenges. Challenges are good: they make people think critically and explore themselves. This research and my trip to Canada are a perfect example of that; I could not have done it without help from others.

I would like to thank Niels Peek for helping me with the first steps of getting to Canada, and for supporting me from a distance via Skype and email. Many thanks to Gunther Eysenbach for making it possible for me to come to Canada and for providing me with such an interesting and challenging research topic. Thank you Ace Medlock for taking over the role from Niels and for supporting me from a distance and at home. Especially in the last weeks I got to enjoy your feedback and support; they were really appreciated.

Many thanks to Jennifer for her support in the first days after I arrived, for making me feel welcome in Canada, and for her company and the awesome trip at the end.

Many thanks to Josh and Eddie for their daily contribution of gezelligheid, the strolls to the coffee shops and PizzaPizza (not daily), and their views on web/app design and development. Thanks Guy and Wouter for dropping me off at the airport and for the calls via Hangouts. Thanks Charlotte for all your messages, Skype and FaceTime calls. Keiko, thanks for your emails, and Remco, thanks for your late-night text messages.

Many thanks to Alexandra, Katie, Ariel, Frank, Danaka, John, Denise, Jennifer, Kristina, Jake and all other old and new Canadian friends. You allowed me to have a great time and kept me “warm” during the cold winter.

Many thanks to Furore for supporting this trip financially; I hope you enjoyed my blogs. Many thanks also to MI-Consultancy and the OWI-MI for their support.

Thank you mum, dad, brother and the rest of the family for your ongoing support throughout the years. Without you I could not have gotten where I am now.

I hope this thesis will be read with much pleasure and interest, and that it will evoke further questions and innovative ideas.

Rinke Riezebos


SUMMARY

Background: The number of mobile health applications (mHealth apps) and their use have increased over the past few years; thousands of health apps are available in the app stores. To date, there has been little evidence of the benefits of mHealth apps and little scientific proof that they can be used safely. Extensive research and review of the apps is necessary to validate their efficacy and effectiveness for safe use.[1]–[3] In addition, the large number of available mHealth apps makes it hard for consumers to select the right app: standard distribution platforms (e.g. app stores) provide limited information and tools for consumers to differentiate and select apps. To improve the quality of mHealth apps and to generate more evidence (e.g. insight into use, most used app types, common elements, quality) for mHealth research, the Journal of Medical Internet Research has decided to offer peer-reviewing of mHealth apps. A self-disclosure questionnaire for developers of mHealth apps has been available on the JMIR mHealth site since early 2013; it can be used as a self-check and allows developers to submit their app for peer-reviewing.[4]

Objectives: Our main objective was to develop an online peer-review tool to support the reviewing of mHealth apps; as part of the tool we developed a new review guideline and a scoring method to score the apps. To that end, we first had to determine the state of the art of evaluation tools (e.g. frameworks, models, organizations) for mobile health technologies, and secondly to create a set of criteria that are used for the evaluation of mHealth technologies or mobile health information.

Methods: Our research project consisted of three phases. (I) We performed two online literature studies: one to search the scientific literature for frameworks, models, and theories for evaluating mHealth technology, and one to search the Internet for existing organizations and platforms that review and certify mHealth apps or provide criteria to do so. (II) We performed two online surveys to determine the requirements and expectations of consumers and developers regarding the (content of the) online peer-review tool (i.e. review guideline and scoring method). The surveys were also used to gain insight into the search actions consumers perform for health information and health apps, and to determine acceptance of the peer-review tool. (III) Lastly, we synthesized the findings of the first two phases: we developed the guideline and scoring method, which were implemented as an online peer-review tool.

Results: (I) From the scientific literature, we identified 44 articles and extracted 85 frameworks and models, 24 organizations, and 109 criteria. The 85 frameworks and models were divided into telemedicine and teleconsulting (9), mHealth/eHealth (36), usability (30), health literacy (3) and health information (4). From the Internet search we identified 9 review platforms, 5 certification bodies, 3 certification tools for certifying health information on the web, 4 governmental bodies that provide certification of mHealth and 2 other bodies. Five tools appeared useful for the further development of the review guideline and scoring method, in addition to the criteria we extracted from the results. (II) We received responses from 137 consumers and 23 developers and were able to identify important requirements for reviewing mHealth apps and for the overall review process. We generated an overview of important websites used for the selection of health information and mHealth apps, and we gained insight into consumers' app selection process. Consumers and developers indicated that they were willing to use the tool; 71% of the consumers thought it was important to review the apps, 33.33% were willing to pay USD 1 or more for a reviewed app, and developers were willing to pay USD 50 for peer-reviewing of their apps. (III) The results of the literature search and surveys were integrated into a review guideline and scoring method, which were then implemented as an online review tool.

Conclusion: We successfully determined requirements for peer-reviewing mHealth apps and developed a guideline and scoring method for the evaluation and rating of mHealth applications. Both were successfully implemented as a functional online prototype tool to be used when reviewing mHealth apps. Further research should focus on the validation of the tools and their applicability. International collaboration between experts and consumers is necessary. The tools will likely be altered and evolve over time depending on new insights, research and regulations.

Keywords: mHealth, eHealth, quality improvement, reviewing and feedback, guideline


SAMENVATTING

Background: Mobile health applications (mHealth apps) are increasingly being used in daily practice, on smartphones or tablets, by different groups of users. Unfortunately, little is yet known about the benefits and the safe use of these applications. More research is therefore needed to validate the applications for their efficacy and effectiveness.[1]–[3] The large number of available apps makes it difficult for users to choose the right app: standard platforms offer little information that can help users in the selection process. To raise the quality of mHealth apps and to gather more knowledge (e.g. insight into use), the Journal of Medical Internet Research has decided to start reviewing mHealth apps. A self-check (and submission form) for developers is already available on the JMIR mHealth site.[4]

Objectives: Our main objective was to develop an online peer-review tool that supports the review process of mHealth apps. As part of this tool we also developed a new guideline and scoring method for the evaluation of mHealth apps. To that end we first had to determine the status quo and create an overview of tools (i.e. frameworks, models, theories) used for the evaluation of mobile health technologies. In addition, our aim was to compile a set of criteria used in the evaluation of these technologies.

Methods: Our research consisted of three phases. (I) Two literature studies were performed: in one, the research literature was searched for available frameworks, models, theories, criteria etc. for the evaluation of mobile health technologies; in the other, the Internet was searched for existing organizations that review/certify mHealth apps and for the criteria they use. (II) Two questionnaires were distributed among developers and potential users of mHealth apps, to determine their requirements for the (content of the) online peer-review tool (guideline and scoring method), the search actions users perform to find health information and apps, and the acceptance of the peer-review tool. (III) Based on the results of these studies, the guideline and scoring method were implemented as an online review tool.

Results: (I) 44 articles were found in the research literature; 85 frameworks and models, 24 organizations and 109 criteria were identified. The 85 frameworks were subdivided as follows: telemedicine and teleconsulting (9), mHealth/eHealth (36), usability (30), health literacy (3) and health information (4). In the online search we found 9 review platforms, 5 certifying organizations, 3 certification tools for evaluating online health information, 4 governmental bodies that certify mHealth apps and 2 others. A set of criteria extracted from these results and five tools that appeared relevant were used in our follow-up research. (II) The questionnaires were based on several tools and on the results of the literature study; 137 users and 23 developers participated. An overview of important requirements for evaluating mHealth apps and for the review process was created. In addition, we created an overview of important websites used for the selection of health information and health apps. Both users and developers indicated that they would use the tool: 71% of the users considered it important that apps are reviewed, and 33.33% were willing to pay USD 1 more for this. Developers were willing to pay USD 50 to have their apps reviewed. (III) Based on the results of the preceding studies, the new guideline and scoring method were successfully implemented as an online review tool.

Conclusion: We were able to determine the requirements for peer-reviewing health apps. Based on these, a guideline and scoring method were developed for reviewing and scoring mHealth apps. Both were successfully implemented as an online review tool. Further research should, after validation of the tools, demonstrate their applicability and effectiveness. It is very likely that the tools will change over time on the basis of new insights, new research or (local) legislation.


TABLE OF CONTENTS

PREFACE
SUMMARY
SAMENVATTING
TABLE OF CONTENTS
TABLES AND FIGURES
1.1 TABLE OF TABLES
1.2 TABLE OF FIGURES
CHAPTER 1 INTRODUCTION
1.1 MOBILE HEALTH
1.2 SMARTPHONES, APPLICATIONS AND MOBILE HEALTH APPS
1.3 DEVELOPMENT AND USE OF MHEALTH APPS
1.4 USE OF MOBILE HEALTH APPS
1.5 THE PROBLEM
1.6 OBJECTIVES
1.7 METHODS, MATERIALS & OUTLINE
CHAPTER 2 LITERATURE RESEARCH
2.1 METHODS
2.1.1 Search for frameworks and models
2.1.2 Search for evaluating/curating organizations and platforms
2.2 RESULTS
2.2.1 Search for frameworks and models
2.2.2 Search for evaluating/curating organizations and platforms
2.3 DISCUSSION
2.3.1 Summary of the main results
2.3.2 Elements that influence the evaluation of mHealth apps
2.3.3 Strengths and limitations of the study
2.3.4 Implication for practice
2.3.5 Future research
2.4 CONCLUSION
CHAPTER 3 THE REQUIREMENTS SURVEY
3.1 INTRODUCTION
3.2 METHODS AND MATERIALS
3.2.1 Selection of the questionnaire items
3.2.2 Pre-testing and revision
3.2.3 Deployment of the surveys
3.2.4 Analysis of the data
3.3 RESULTS
3.3.1 Pre-testing and revision
3.3.2 The final surveys
3.3.3 Deployment and data analysis
3.4 DISCUSSION
3.4.1 Strengths and limitations
3.4.2 Other research
3.4.3 Implication
CHAPTER 4 DEVELOPMENT OF THE PEER-REVIEW TOOL: THE REVIEW GUIDELINE AND SCORING METHOD
4.1 INTRODUCTION
4.2 METHODS AND MATERIALS
4.2.1 The review process
4.2.2 Content of the review guideline
4.3 SELF-DISCLOSURE FORM
4.3.1 The scoring method
4.4 THE ONLINE PEER-REVIEW TOOL
4.5 RESULTS
4.5.1 The review process
4.5.2 The peer-review guideline
4.5.3 The scoring method
4.6 THE ONLINE REVIEW TOOL PROTOTYPE
4.7 DISCUSSION
4.7.1 Comparison with other studies
4.7.2 Strengths and Limitations
4.7.3 Future research
4.8 CONCLUSION
DISCUSSION AND CONCLUSION
STRENGTHS AND LIMITATIONS
COMPARISON WITH OTHER REVIEW APPROACHES
FUTURE RESEARCH
CONCLUSION
DISCLOSURE STATEMENT
GLOSSARY
REFERENCES
APPENDICES


TABLES AND FIGURES

1.1 Table of Tables

TABLE 1 - USED INCLUSION AND EXCLUSION CRITERIA
TABLE 2 - PUBMED SEARCH ACTION: IDENTIFIED ORGANIZATIONS IN LITERATURE REVIEW
TABLE 3 - IDENTIFIED FRAMEWORKS, MODELS OR OTHER APPROACHES IN THE LITERATURE REVIEW
TABLE 4 - IDENTIFIED EVALUATION CRITERIA IN PUBMED RESEARCH
TABLE 5 - IDENTIFIED ORGANIZATIONS AND PLATFORMS IN ONLINE GOOGLE SEARCH ACTION
TABLE 6 - PARTICIPANT DEMOGRAPHICS CONSUMER SURVEY
TABLE 7 - DEMOGRAPHICS DEVELOPER SURVEY
TABLE 8 - SEARCH FOR EVALUATING/CURATING ORGANIZATIONS AND PLATFORMS: SEARCH TERMS
TABLE 9 - ORGANIZATIONS AND PLATFORMS: IDENTIFIED PLATFORMS
TABLE 10 - ORGANIZATIONS AND PLATFORMS: IDENTIFIED REVIEW AND CERTIFYING PLATFORM CHARACTERISTICS
TABLE 11 - ORGANIZATIONS AND PLATFORMS: IDENTIFIED REVIEW PLATFORM CHARACTERISTICS
TABLE 12 - ORGANIZATIONS AND PLATFORMS: IDENTIFIED REVIEW SCORE RATING ELEMENTS
TABLE 13 - ORGANIZATIONS AND PLATFORMS: IDENTIFIED STRUCTURED REVIEW ELEMENTS
TABLE 14 - CRITERIA/ELEMENTS CONSUMERS LOOK FOR OR STEPS TAKEN WHEN LOOKING FOR HEALTH INFORMATION ON THE WEB
TABLE 15 - IDENTIFIED ELEMENTS, CRITERIA AND STEPS USED OR TAKEN WHEN LOOKING FOR MHEALTH APPS
TABLE 16 - IDENTIFIED PLATFORMS USED FOR APP SELECTION
TABLE 17 - IDENTIFIED CRITERIA USED BY CONSUMERS FOR APP SELECTION OR TO BE DISCUSSED IN PEER-REVIEW TOOL
TABLE 18 - SCORING QUESTIONS
TABLE 19 - SCORING QUESTIONS
TABLE 20 - SCORING QUESTIONS
TABLE 21 - SCORING QUESTIONS

1.2 Table of Figures

FIGURE 1 - STUDY DESIGN
FIGURE 2 - PUBMED LITERATURE REVIEW SELECTION PROCESS
FIGURE 3 - TOP SIX OF WEBSITES USED WHEN LOOKING FOR HEALTH INFORMATION (28 PLATFORMS MENTIONED 60 TIMES TOTAL)
FIGURE 4 - SOURCE FOR NEW APPS
FIGURE 5 - IDENTIFICATION OF IMPORTANT REVIEWER CHARACTERISTICS FOR CONSUMERS
FIGURE 6 - IDENTIFICATION OF IMPORTANT REVIEW ELEMENTS FOR DEVELOPERS
FIGURE 7 - UTAUT-BASED QUESTIONS CONSUMERS
FIGURE 8 - IDENTIFICATION OF REVIEW ELEMENTS
FIGURE 9 - UTAUT-BASED QUESTIONS DEVELOPERS
FIGURE 10 - THE SUGGESTED THREE-LAYER PEER-REVIEW PROCESS
FIGURE 11 - PERCENTAGE OF USED MOBILE PLATFORMS
FIGURE 12 - THE AMOUNT OF USD DEVELOPERS ARE WILLING TO PAY FOR PEER-REVIEWING ONE APP
FIGURE 13 - FRAMEWORK FOR MOBILE USABILITY ASSESSMENT
FIGURE 14 - GENERAL APPROACH DEVELOPMENTS OF APPS
FIGURE 15 - INDEX PAGE OF THE ONLINE REVIEW TOOL, SHOWING THE FIVE RECENT REVIEWS
FIGURE 16 - SELF-DISCLOSURE/APP SUBMISSION CATEGORY OVERVIEW
FIGURE 17 - SCREENSHOT OF PROTOTYPE REVIEW TOOL, SELF-DISCLOSURE FORM, INPUT FIELDS
FIGURE 18 - OVERVIEW OF SUBMISSIONS, WITH SEARCH FIELD
FIGURE 19 - PEER REVIEWING OF THE APPLICATION
FIGURE 20 - OVERVIEW OF AVAILABLE REVIEWS

CHAPTER 1

INTRODUCTION

The way patients deal with their health has changed over the past decades. Since the introduction of the Internet and search engines (e.g. Google), patients have been able to look up symptoms, potential treatments and other health-related information. Then mobile phones made their entrance. The greatest value of mobile phones was the ability to get in touch with people regardless of where they were and what time it was; mobile technology allows for asynchronous exchange of information. A new era began when mobile phones started to be used for health: the era of mobile health.

1.1 Mobile Health

Mobile health or mHealth is part of eHealth and refers to the use of mobile devices in relation to health. Different definitions of mHealth are used worldwide. K4Health (k4health.org) defines mHealth as "the use of mobile technologies (including phones and tablets¹) to improve public health".[5] The World Health Organisation (WHO) considers mHealth a component of eHealth and defines it as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants (PDAs), and other wireless devices."[6] The United States (U.S.) Food and Drug Administration (FDA) avoids the term mHealth and speaks merely of mobile medical applications or mobile apps, which are "software programs that run on smartphones and other mobile communication devices. They can also be accessories that attach to a smartphone or other mobile communication devices, or a combination of accessories and software. Mobile medical apps are medical devices that are mobile apps, meet the definition of a medical device and are an accessory to a regulated medical device or transform a mobile platform into a regulated medical device."[7]

Mobile health thus spans multiple categories and is not limited to the use of mobile health applications; it also includes text messaging support (often used in low-income regions such as Africa), health call centers/telephone help lines, (text) reminder services and more.[6] Recently, new devices such as smartwatches (that run apps and monitor health) have appeared as well.[8] For the purpose of this work we define mHealth as the use of software applications on mobile devices such as smartphones or tablets.

¹ A tablet is a mobile computer device that is larger than a smartphone but smaller than a laptop.

1.2 Smartphones, applications and mobile health apps

Smartphones are mobile devices that can be used for phone calls and text messaging, but differ from regular mobile phones by their touch screens and their ability to run software applications, so-called apps. The first smartphone was developed by International Business Machines Corporation (IBM) and became available in 1993.[9] Since then the world of smartphones has changed drastically, and 45% of the U.S. population currently uses a smartphone.[10]

Applications are software packages that can be downloaded to a mobile device (e.g. a smartphone) and used offline, online or both. An app can push and pull data to and from the Internet, or download content into the app for offline use.

Like regular mobile apps, mHealth apps run on smartphones; they are clinical/health-related electronic tools used to maintain or improve people's health, and are sometimes even regarded as medical devices.[7], [11]–[13]

1.3 Development and use of mHealth apps

Multiple parties are often involved in the development and use of mHealth apps. Common parties are: (1) clients (persons or organizations) that want to create or commission an app, and who may also be the developers; (2) developers or manufacturers that build the apps, who can be anybody, e.g. individuals or public institutions; (3) consumers, the people who use the app, e.g. patients, doctors or others. Consumers want safe, solid applications that keep them or others healthy, are easy to use, can be used regularly, allow for social contact and networking, are trustworthy and give people control over their condition.[14] (4) Certification/review organizations that certify or review apps; they try to provide more information about the apps, to enable consumers to make well-informed, trusted choices, and to provide feedback to developers. (5) Lastly, there are governmental organizations that want to guard and control public health and therefore issue regulations and guidelines for the acceptance and safe use of mHealth apps. Examples are the U.S. Food and Drug Administration (FDA) and the Dutch Health Care Inspectorate.[11], [13]

1.4 Use of mobile health apps

Multiple types of audiences, e.g. healthy persons, patients, physicians, nurses, dieticians and other healthcare providers, can use health apps. Using health apps empowers patients and healthy persons to manage their own health.[15] In addition, patients can be in contact with a doctor for consultation or monitoring via their computer or mobile device (e.g. tablet or smartphone). Health professionals, students and patients commonly use mHealth apps for health reference. By May 2013, 56% of American adults owned a smartphone, and previous research showed that 31% of cell phone owners used their phone to look up or acquire health information in September 2012, versus 17% in September 2010.[10] In the U.S. alone, 95 million Americans used their mobile phones as health tools in 2013.[16] It is expected that over 500 million people will be using mHealth applications in 2015, with further growth expected.[17] Fifty percent of the more than 3.4 billion tablet and smartphone users will have at least one mobile health application on their device by 2018.[18] Today, there are over 40,000 health apps, and in total over 100,000 fitness, health and medical related apps are available in more than 60 app stores.[18], [19]

1.5 The problem

It is clear that mobile phones and mobile health apps are becoming increasingly important in daily life and healthcare. Programs like the Whole System Demonstrator Program in the UK showed that the use of technology as a remote intervention can lead to reductions in emergency admissions, bed days and mortality rates.[20] Trials in Norway also showed that mHealth could generate a 50% to 60% reduction in bed days and rehospitalisation of COPD patients.[20] However, for most mobile health programs and applications there is limited evidence of the benefits of using mHealth apps. Extensive research and review of the apps is necessary to validate their efficacy and effectiveness for safe and proper use.[1]–[3] Secondly, the high number of available health applications makes it hard for consumers (e.g. patients or doctors) to differentiate the right, well-working apps that are in line with their preferences from the wrong ones. Standard platforms (e.g. the Google Play Store) do not provide sufficient information or tools for consumers to review and differentiate the applications.

1.6 Objectives

To improve the quality of mHealth apps and to generate more evidence (e.g. insight into use, app types, common elements, quality of the apps) for mHealth research, the Journal of Medical Internet Research (JMIR) has decided to offer peer-reviewing (review/evaluation of work by other experts in the same field) of mobile health applications by (health) experts. The peer reviews would be published online, so that other researchers could use them as input for further mHealth research and consumers could use them to identify quality applications that suit their requirements.

The initial idea was to create a two-tier tool that would be part of the new journal JMIR mHealth and uHealth, using a quality/transparency-labelling approach similar to the one suggested by the MedCertain/MedCircle projects in the context of health websites.[15] Developers would be able to submit their application for peer review after filling out an online self-certification/disclosure form consisting of questions related to the app and the developers. This questionnaire and submission platform would form the first tier of the tool. The self-disclosure questionnaire has been available on the JMIR mHealth site since early 2013.[4]

The second tier would consist of peer-reviewers, who would verify the information provided by the developers, and evaluate and peer-review the application for its content and quality.

The reviews would then be added to the JMIR database, made available to the public and indexed in PubMed. Developers would be provided with feedback, if applicable. In case of application updates, developers would have to apply for a new peer review.

The publishing company asked us to define criteria and to develop a peer-review tool that would support the process and publication of peer reviews of those mHealth apps. Although multiple (commercial) initiatives have been launched, to our knowledge no prior guideline existed for the review and scoring of mHealth applications (analogous to the Health On the Net criteria for websites [21]). Our main objective was to develop an online prototype peer-review tool to support the reviewing of mHealth applications. To ensure the quality of the tool, we developed a new review guideline and scoring method that can be used to evaluate mHealth apps, which were then implemented in the online tool. In order to develop the guideline and scoring method, we first determined the state of the art of evaluation tools (e.g. frameworks, models, theories, organizations) for mobile health technologies, and secondly created a set of criteria that are used and important for the evaluation of mobile health technologies or mobile health information. Two literature reviews and two surveys were conducted to create those overviews. The surveys were also used to gain insight into consumers' search processes for health information and apps and into the acceptance of the peer-review tool.
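To make the intended two-tier structure concrete, the sketch below models a submission and its review as simple records. This is an illustration only: the class names, fields and the aggregation rule are our own assumptions, not part of the JMIR implementation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class SelfDisclosure:
    """Tier 1: the developer-submitted self-disclosure form."""
    app_name: str
    developer: str
    platform: str          # e.g. "iOS" or "Android"
    version: str
    answers: dict[str, str] = field(default_factory=dict)  # question id -> answer
    submitted_on: Optional[date] = None


@dataclass
class PeerReview:
    """Tier 2: an expert review that verifies and scores a submission."""
    submission: SelfDisclosure
    reviewer: str
    scores: dict[str, int] = field(default_factory=dict)   # criterion -> score
    comments: str = ""
    published: bool = False  # a published review would be made publicly available

    def overall_score(self) -> float:
        """Unweighted mean of the criterion scores (placeholder aggregation)."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0
```

In such a model, an application update would simply produce a new SelfDisclosure instance, matching the requirement above that updated apps be submitted for a new peer review.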

1.7 Methods, materials & outline

Our research project consisted of three phases, described in three chapters and shown in Figure 1. Chapter 2: First, we performed two online literature studies to determine the status quo of mobile health technology evaluation tools in the literature and to provide an overview of organizations that offer reviewing or certification of mobile health applications. In addition, we used the two literature reviews to determine criteria used for the evaluation of mobile health technologies that could feed into our review guideline and scoring mechanism.

Chapter 3: Secondly, we developed two self-administered questionnaires (one for consumers, one for developers) to survey the requirements of consumers and developers regarding the content of the peer reviews and the acceptance of the proposed tool.

Chapter 4: Based on the results of the literature research, the surveys and other existing frameworks, we developed a guideline and scoring method for peer-reviewing mobile health applications and integrated these into a functional online prototype review tool.


CHAPTER 2

LITERATURE RESEARCH

Most app stores support a simple form of reviewing by allowing consumers to comment on and rate applications on a five-star basis. Ratings and reviews can then be used by other consumers when looking for new applications, or by developers as feedback. Yet reviews in the app stores often lack structure and are based on personal opinions rather than evidence-based assessment of the applications. This lack of structure and evidence-based assessment is probably not a hazard for regular applications (e.g. games, social media), but it could be for health apps. Health apps can directly affect the lives of their users; therefore clear information about the (intended) use and limitations of an app is important. Wrong or unclear information can cause users to download bad apps that could potentially cause harm. Despite the lack of research and structured reviews, mHealth apps are widely available and used. Another problem is that the number of apps entering the app stores is growing drastically, making it ever harder for consumers to find the right apps.

To overcome these problems, different parties have started reviewing and assessing mHealth apps, to assure the correct use and functionality of those apps and to support consumers in their search process by distinguishing the many bad applications from the good ones.[22], [23] Every platform uses its own criteria for reviewing, scoring or certifying mHealth applications, and it can be daunting for consumers to decide which platform to use for the selection of mHealth apps. Because we were not able to find an overview of reviewing platforms (websites that support third-party reviewing of apps) or of criteria used for the assessment of mHealth applications, we decided to perform two online search actions: one to identify frameworks, models, theories, criteria or elements used to assess the quality of mobile health applications, and one to determine the status quo of reviewing or certifying platforms (organizations that certify apps), without any limitation to a specific patient group or disease. This resulted in the following research questions:

What frameworks, guidelines, models, theories, criteria or elements are available to assess the quality of e-health, mobile health (applications) or mobile technology?

What online reviewing or certifying organizations are currently available and what criteria or approach(es) do they use?

2.1 Methods

To search for evaluation frameworks, models and theories used for the evaluation of mobile health apps or mobile health in general, we used the research database PubMed. To search for implemented online review or certifying platforms for mobile apps, we used the web search engine Google.

2.1.1 Search for frameworks and models

The PubMed search action was conducted between December 2013 and January 2014. The search for publications was limited to studies published until January 1, 2014. The exact search actions can be found in APPENDIX A and Figure 2.

2.1.1.1 Frameworks and models: Inclusion and exclusion criteria

Only Dutch and English articles were included in our research. Articles were first excluded based on their title or abstract. Next, publications were excluded if the full text was not available or if it was clear from the full text that the article did not meet the criteria: articles that did not discuss or suggest the use of a general framework, model or theory for the design or evaluation of (mobile) health information technologies were excluded, as were articles that discussed the design, development and/or evaluation of only one application or health information technology system. Articles that merely focussed on one specific disease (e.g. diabetes), or models consisting mostly of disease-specific criteria, were also excluded. Lastly, articles that solely discussed the economic evaluation of mHealth apps, or the evaluation of mHealth projects or programs on a national or international level, were excluded from our research.
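As a schematic of this screening logic (the record fields and flags below are our own paraphrase of the criteria, purely for illustration):

```python
from dataclasses import dataclass


@dataclass
class Article:
    language: str
    full_text_available: bool
    general_framework: bool        # discusses a general framework/model/theory
    single_system_only: bool       # evaluates only one application or system
    disease_specific: bool         # focusses on one disease, e.g. diabetes
    economic_or_program_level: bool


def include(a: Article) -> bool:
    """Apply the inclusion/exclusion criteria described above."""
    return (a.language in ("English", "Dutch")
            and a.full_text_available
            and a.general_framework
            and not a.single_system_only
            and not a.disease_specific
            and not a.economic_or_program_level)
```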

2.1.1.2 Frameworks and models: Selection of the studies

One reviewer (RJR) assessed the studies and determined whether they would be included in the review. The reviewer was not blinded to journal, year of publication or institution. From the included articles we extracted (1) organizations involved in the evaluation of mHealth technologies, (2) frameworks for the evaluation of mHealth technologies, (3) criteria (as part of a framework or independently) for the evaluation of mHealth technologies, (4) models for the design and evaluation of mHealth technologies and (5) theories regarding the design and evaluation of mHealth technologies. These were grouped by one of the following focus areas: (1) telemedicine and/or teleconsulting, (2) eHealth and mHealth, (3) usability or (4) other. Identified criteria were categorized into the following areas: usability, privacy, security, confidentiality, impact, cost-effectiveness, data storage and integration, accessibility and content quality.
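A minimal sketch of this bookkeeping (the focus-area labels come from the list above; the example records and the data structure itself are hypothetical):

```python
from collections import defaultdict

# Categorization areas listed above.
AREAS = ["usability", "privacy", "security", "confidentiality", "impact",
         "cost-effectiveness", "data storage and integration",
         "accessibility", "content quality"]

# Hypothetical extraction records: (criterion, area, source reference).
extracted = [
    ("error prevention", "usability", "[34]"),
    ("data security", "security", "[33]"),
    ("EHR integration", "data storage and integration", "[40]"),
]

by_area = defaultdict(list)
for criterion, area, ref in extracted:
    assert area in AREAS, f"unknown focus area: {area}"
    by_area[area].append((criterion, ref))

for area in AREAS:
    if by_area[area]:
        print(f"{area}: {len(by_area[area])} criteria")
```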

2.1.2 Search for evaluating/curating organizations and platforms

The second search action was performed in the Google search engine (while signed in, to keep track of the visited links with Google History) to identify relevant platforms offering (peer-)reviewing, curating, and certification of mHealth apps, or criteria to assess mobile health apps. We used seventeen different search terms, shown in APPENDIX B, Table 8. Only the first ten pages of hits (ten hits per page, i.e. 100 hits) for each search action were included in our research. Since Google ranks hits from most to least relevant (based on number of visits), we assumed that the first 100 hits included the most relevant links. Hits included webpages, PDF files, YouTube videos and PowerPoint presentations.
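The screening itself was done by hand, but the bookkeeping it implies can be sketched as follows (the URLs and the normalization rule are hypothetical):

```python
from urllib.parse import urlsplit


def normalize(url: str) -> str:
    """Reduce a hit to host + path so trivial URL variants collapse."""
    parts = urlsplit(url.lower())
    return parts.netloc.removeprefix("www.") + parts.path.rstrip("/")


# Hypothetical hits gathered per search term (at most 100 each).
hits_per_term = {
    "mhealth app review": ["http://www.example.org/reviews/",
                           "https://example.org/reviews"],
    "certify health apps": ["https://example.org/reviews",
                            "http://certifier.example.com/about"],
}

unique = set()
for hits in hits_per_term.values():
    unique.update(normalize(h) for h in hits[:100])  # first 100 hits per term

print(f"{len(unique)} unique pages to screen")  # -> 2 in this toy example
```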

2.1.2.1 Organizations and platforms: Inclusion and exclusion criteria

Links were excluded based on title and summary. Only websites in English or Dutch were included in our review. For each link we scanned the whole webpage for mentions of review platforms (in case the webpage was not itself a review platform) or of reviewing criteria.

We excluded websites that only listed apps without any review or explanation of why the apps were included, and websites that listed only a small number of applications and did not review applications on a regular basis (i.e. at least once a month). If the most recent review of mHealth applications had been performed before July 2013, we considered the website inactive and excluded it from our results. The inclusion and exclusion criteria are listed in Table 1.

2.1.2.2 Organizations and platforms: Selection of the studies

One researcher (RJR) performed the search actions and included items based on title, summary and a scan of the page. Duplicate findings were then removed and the remaining items were assessed for inclusion in our review. The review author (RJR) was not blinded with regard to institution, webpage, company, year of publication or authors.

Webpages were bookmarked and included in our review if they provided a list of criteria for assessing or reviewing mHealth applications, a list of third-party reviewing platforms (e.g. these platforms were discussed in a news article), or if it was the webpage of a reviewing or certifying platform which reviewed/certified mHealth applications on a regular basis, i.e. new reviews were still provided/reviewers were still active. If a list of platforms was provided on a page, we tried to visit the company webpages. They were included if they met the inclusion criteria.

Table 1 - Used inclusion and exclusion criteria

Inclusion criteria:
- Items were in English or Dutch
- Full text was available free of charge
- Item discussed reviewing of mobile (health) applications or criteria for reviewing mobile (health) applications

Exclusion criteria:
- Websites that merely list applications
- Personal blogs or websites


The results were grouped into different categories: review platforms, government organizations, curators and certifying companies. We visited the platforms and identified their characteristics (e.g. platforms reviewed, background of the reviewers, reviewing structure), after which a description of every platform was written. In addition, we extracted information about the user groups, reviewers, types of apps, scoring approaches and criteria of the reviewing and certifying platforms.

2.2 Results

2.2.1 Search for frameworks and models

The MEDLINE/PubMed search yielded 6092 unique articles, which were reviewed and narrowed down to 229 articles based on their title and abstract. These articles were then analyzed on their full text and content, and forty-four articles were included in our research. An overview of the selection process is shown in Figure 2. Two articles provided an overview of existing frameworks, and no theories were found. The other articles were categorized per focus area, as shown in Figure 2 and Table 2-Table 4. The findings are discussed in the following sections.

2.2.1.1 Telemedicine and/or teleconsulting

Five articles belonged to the focus area of telemedicine and teleconsulting. The articles described nine frameworks for the design and evaluation of telehealth programs and applications. Three of the frameworks focussed on overall program evaluation.[24], [25] One article [26] discussed the normative expectations of healthcare technology acceptance and telehealthcare evaluation models; its authors reviewed thirteen different models and their elements for the evaluation of telehealthcare. The frameworks found are shown in Table 3.

2.2.1.2 Mobile health and/or eHealth

We identified 23 articles related to the design or evaluation of mobile health or eHealth. One article [27] provided an overview of reviewing and certifying organizations. Eighteen articles mentioned twenty-five different frameworks or approaches. Two articles [28], [29] discussed the same approaches: the multiphase optimization strategy (MOST) and sequential multiple assignment randomized trials (SMARTs). The authors of [30] determined the status quo of eHealth frameworks and found sixteen different frameworks. Eleven articles provided criteria for the evaluation of eHealth systems. Criteria were either part of a described framework, used for the comparison of other platforms [31], or the result of a study performed to identify criteria.[32] Criteria were used not only after development of the eHealth system but also during development. The criteria found are listed in Table 4. One article, by Albrecht et al., provided a set of criteria for the evaluation of mHealth focussing on multiple domains, e.g. disclosure of conflicts of interest and data handling.[33]

2.2.1.3 Usability

We identified nine articles that discussed the usability evaluation of health IT systems. Important models identified were the Technology Acceptance Model (TAM), Nielsen's ten heuristics, Shneiderman's eight rules for interface design, Heuristic Evaluation (HE), the Cognitive Walkthrough (CW), Think Aloud (TA), Norman's theory of action, the Reader-to-Leader framework, TOFHLA and the Health Literacy Online guide. An overview of the models is shown in Table 3. One article discussed the use of the Health IT Usability Evaluation Model (Health-ITUEM) for evaluating (the usability of) mHealth technology. The model was based on multiple usability evaluation theories and was developed to overcome the downsides of other evaluation tools.[34]

2.2.1.4 Health information

We found two organizations that provided a set of criteria used for the evaluation of online health information: DISCERN and the Health On the Net Foundation (HON).[31], [35] In addition we found one algorithm called FA4CT (or FACCCCT) that can be used in combination with the CREDIBLE criteria to assess and filter health information on the net.[36]


Table 2 – PubMed search action: identified organizations in literature review

Mobile health and eHealth:
- The US Food and Drug Administration (FDA) [27], [37]
- UK Medicines and Healthcare Products Regulatory Agency (MHRA) [27]
- European Medical Device Directive (MDD) [27]
- NHS Apps Library [27]
- Medical App Journal [27]
- Happtique Health App Certification and standards program [27]
- Continua Health Alliance [27]
- CDC [37]
- Office of the National Coordinator for HIT - DHHS [37]
- IOM [37]
- US Office of Market Research and Evaluation (OMRE) [37]

Health information:
- Healthfinder [31]
- HealthInsite [31]
- NHS Direct Online [31]
- Health Summit Working Group [31]
- HON code of conduct [31]
- Internet Healthcare Coalition (code of ethics) [31]
- DISCERN on the internet [31]
- Hi-Ethics Principles [31]
- American Accreditation HealthCare Commission [31]
- TRUSTe [31]
- Council of Better Business Bureaus [31]
- OMNI Advisory Group for Evaluation Criteria [31]
- Collaboration for Critical Appraisal of Information on the Net (MedCertain) [31]

Table 3 - Identified frameworks, models or other approaches in the literature review

Telemedicine, teleconsulting etc.:
- Theoretical triangulation as basis for assessing the impact [24]
- Model for Assessment of Telemedicine applications (MAST) [25]
- EUnetHTA Model [25]
- Donabedian's three-dimensional model for telemedicine evaluation in combination with large scale data [24]
- Esser's and Goossen's framework for patient-provider teleconsultation [38]
- Transaction cost economics model for telemedicine evaluation [39]
- Bashshur's three-dimensional model [39]
- Bashshur's evaluation model for telemedicine [39]
- Ho's framework for evaluation of teledermatology applications [40]

mHealth and eHealth:
- KDS Framework for e-Health evaluation [41]
- Continuous Evaluation of Evolving Interventions (CEEI) [28]
- Multiphase Optimization Strategy (MOST) [28], [29], [42]
- Sequential Multiple Assignment Randomized Trials (SMARTs) [28], [31]
- Online commenting facility [43]
- Tracking system use: CHECKPOINT [43]
- Telephone interviews [43]
- Video-based usability testing [43]
- Quality of Experience (QoE) evaluation tool [44]
- Generic Component Model (GCM) [45]
- App-synopsis [33]
- Rogers' diffusion of innovations model [46]
- Green and Kreuter's PRECEDE-PROCEED model [46]
- RE-AIM framework [46]
- CURRES approach [46]
- Data integration models [47]
- Implementation models [47]
- Adoption models [47]
- Reimbursement models [47]
- Catwell and Sheikh's continuous systematic evaluation model [48]
- Evans' conceptual model for evaluation of mHealth [49]
- CONSORT(-EHEALTH) criteria [50], [51]
- Standards model by Olds et al. and Flay et al. [29]
- Multi-factorial design analysis [29]
- Data farms [29]
- Long's three-way evaluation framework [52]
- Whittaker's evaluation approach [53]:
  - Formative research, using focus groups and online surveys
  - Pretesting: focus groups, surveys and interviews
  - Pilot study: small and non-randomized
  - RCT: pragmatic, community-based
  - Qualitative research: semi-structured interviews
  - Evaluation of implementation impact: phone/online surveys, semi-structured interviews
- Medical Research Council (MRC) framework for development, evaluation and implementation of complex interventions [54]

Usability:
- Health-ITUEM [34]
- Technology Acceptance Model [34]
- Nielsen's ten heuristics [34]
- Shneiderman's eight rules for interface design [34]
- ISO 9241-11 [34]
- Usability decompositions [34]
- Norman's seven principles for design [34]
- Single case experiments [55]: reversal, multiple-baseline, alternating treatment, changing criterion, combined
- Randomized Controlled Trial (RCT) [56]
- System Usability Scale questionnaire [56]
- Feasibility study (frequency responses and open questions) [56]
- Heuristic evaluation [57], [58]
- Cognitive walkthrough [57], [59]
- Think Aloud [57], [58]
- Norman's theory of action [59]
- Field usability testing [59], with: ethnography work, education, participant observation, interaction analysis
- Kushniruk's approach [59]
- Cimino's approach [59]
- Reader-to-Leader framework [37]
- Li and Bernoff's social technographic profiling [37]
- Porter's funnel [37]
- W3C Web Content Accessibility Guidelines [37]
- Theofanus report usability guidelines [37]
- Post-study System Usability Questionnaire (PSSUQ) [58]
- Kharazzi's framework for comparison of mPHRs [60]
- Shin's conceptual quality control framework [61]
- Sittig's sociotechnical assessment guide [62]

Health literacy:
- TOFHLA [63]
- Health Literacy Online Guide [63]
- Monkman's health literacy tool [63]

Health information:
- FA4CT (FACCCCT) + CREDIBLE criteria [36]
- Health On the Net Foundation criteria [35]
- Criteria of the Health Summit Working Group [35]
- DISCERN tool [35]

Table 4 - Identified evaluation criteria in PubMed research

Design phase:
- Involvement of end-users [27]
- Use of framework for needs assessment/requirements and priority [64], [65]
- Involvement of medical professionals [66]

Attribution [31]:
- Authorship information [31]
- Credentials [27]
- Conflict of interest [27]
- Pictures of site owners or authors [36]
- Publisher information [66]
- Disclosure [31]
- Ability to contact the owner/author by email [36]
- Outbound links to further recommended websites [36]

Content (quality) [27], [44], [47]:
- References – critical evaluation with meta-analysis techniques [27], [64], [36]
- Up-to-date? [27]
- Summary of published literature [64]
- Text should sound plausible or scientific [36]

Technical:
- Supported mobile devices and desktop capability [40]
- Software application [65]
- Dependency insecurities and failures [65]
- Design insecurities [65]
- Implementation securities [65]
- Deployment environment [65]
- Electronic threats [65]
- Physical threats [65]
- Images (size, upload etc.) [40]

Integration:
- EHR integration [40]
- Billing integration [40]
- Scalability [40]
- Open infrastructures [28]
- Ability to import/export data [60]
- Data integration models [47]
- Data standards, mechanisms and protocols [65], [28]
- Secure data exchange [9]
- Accuracy [61]
- Granularity [61]
- Application [61]
- Synchronization [61]
- Information loss (by aggregation) [61]
- Physiological soundness [61]
- Contextual soundness [61]
- Semantic standards [67]
- Terminological standards [67]
- Exchange standards [67]
- Technical standards [67]
- Data security [33], [67], [60]
- Patient authentication [61]
- Data handling mechanisms & protocols (collection, storage, backup, amount, times, location of stored data) [40], [33], [65]
- Monitoring & revision protocols [65]

Data, other:
- Purpose of data collection [33]
- Beneficiaries of collected data [33]
- User's rights regarding stored data and option to withdraw previous approval [33]
- Deletion option and time it takes [33]
- Possibilities to disable data collection or transfer [33]

Security [27], [64], [65], [44], [47], [28], [67]:
- Methodological (design) security [65]
- User's role/potential cause of confidentiality breach [65]
- Procedural security [65]
- Training [65]
- Monitoring and revision of protocols [65]
- Information security [65], [40]

Health literacy [27], [47], [67]:
- Performance appearance [44]
- Learning [44]
- Precision [44]
- User-friendliness [36], [67]
- User competence [67]
- Error prevention [34]
- Completeness [34]
- Memorability [34]
- Information needs [34]
- Flexibility/customizability [34]
- Learnability [34]
- Performance speed [34]
- Competency [34]
- Other outcomes [34]
- User interface [36], [62]
- Understandable and professional writing [36]
- Health literacy [27]
- Screens [63]
- Content [63]
- Display [63]
- Navigation [63]
- Interactivity [63]
- Accessibility [27], [47], [36], [67]
- Readability [27]
- Effectiveness [64]
- Efficiency [64]
- Availability [44]
- Instructions [31]
- Training [65]

Functionality:
- Site map [36]
- Search capabilities [36]
- Speedy interface [36]
- Upload photos [60]
- Print out summary [60]

Additional information / malicious:
- Price (range) [66]
- App category [66]
- Health-specific scope [31]
- Type of instrument [31]

Rating:
- Average rating [66]
- Number of ratings [66]

Funding or sponsorship [27]

Ethics [27], [28], [64], [67]

Controlling authorities or third parties:
- Organization characteristics [62]
- State and federal rules [36], [62]
- Hardware & software [62]
- Personnel [62]
- Workflow and communication [62]
- Monitoring [62]

Impact:
- Impact on health outcomes [47]

Other:
- Reliability [31]
- Social elements [64]
- Organization [64]
- Confidentiality [28], [47], [67]
- Privacy [27], [28], [47]
- Feasibility [28]
- Economic [64], [67]
- Cost-effectiveness [47], [40]
- Safety [67]
- Quality [67]
- Implementation, adoption and reimbursement models [47]
- Continuous evaluation, involvement of stakeholders [64]

2.2.1.5 Other

Five articles discussed other eHealth-related evaluation frameworks or approaches: the key principles and challenges involved in the development and evaluation of patient-centered technologies for disease management and prevention [67]; the evaluation of features and functionality of multiple mobile personal health records (mPHRs) using ten mPHR-specific data elements [60]; a framework for remote health monitoring [61]; an eight-dimension model for the evaluation of design, development, implementation, use and sustainability/monitoring of health IT [62]; and security considerations for e-mental health interventions.[65]

2.2.2 Search for evaluating/curating organizations and platforms

The Google online search action to identify organizations that evaluate/curate mobile health apps considered more than 1700 webpages (1700 Google hits plus the pages they referred to). Based on the inclusion and exclusion criteria, we included 23 organizations in our review: nine review platforms, five certification bodies (including one notified body), three certification tools for certifying health information on the web, four governmental bodies that provide certification of mHealth and two other bodies. Table 5 provides an overview of the identified platforms, and APPENDIX C provides an overview of the platforms with a description. Table 10 provides an overview of the characteristics of the reviewing and certifying platforms found, Table 11 shows the identified characteristics of the included review platforms, and Table 12 provides an overview of the identified scoring elements. Lastly, Table 13 gives an overview of the different review elements that are structurally included on each platform. In the following sections we briefly discuss our results per category.

2.2.2.1 Review platforms

Online webpages were considered review platforms if they listed mobile health applications and provided reviews of mHealth apps, or scored the apps based on certain criteria. All the review platforms found offered free reviews of the applications; sometimes a user had to register on the platform before reviews were available.[68] Three of the platforms offered reviews or advice written by medical experts, based on their experience in different healthcare settings.[68]–[70] Other platforms offered reviews by end users [71], [72] or healthcare communities.[14], [73], [74]

The review platforms supported reviews for different types of platforms or software: mPHRs [71]; iPhone, iPad, Android and BlackBerry devices [72], [69]; and medical software packages that include mobile apps.[75], [76] The review structure differed per platform. The platforms offered a description of the application [14], [70], either written by the reviewer or downloaded directly from the app store(s).[77], [78] Two platforms offered a clear list of features of the application [14], [72] and three also provided images or screenshots.[14], [72]–[74] Five platforms offered a rating of the application, either on a scale from 1-5 (n=4) or 1-10 (n=1). Five platforms allowed consumers to rate applications or comment on reviews themselves. Elements of the review platforms and elements of their reviews are shown in Table 9 of APPENDIX C.
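Because two different rating scales (1-5 and 1-10) were in use, comparing ratings across platforms requires mapping them onto one scale first; a minimal sketch of one possible linear mapping (the mapping choice is ours, not any platform's):

```python
def to_five_point(rating: float, scale_max: int) -> float:
    """Linearly map a rating from a 1..scale_max scale onto 1..5."""
    if not 1 <= rating <= scale_max:
        raise ValueError("rating outside scale")
    return 1 + (rating - 1) * 4 / (scale_max - 1)


print(to_five_point(7, 10))   # a 7 on a 1-10 scale -> ~3.67 on the 1-5 scale
print(to_five_point(4, 5))    # already on a 1-5 scale -> 4.0
```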

2.2.2.2 Certifying platforms

We found five platforms that certified mHealth applications. The platforms are shown in Table 5 and described in Table 9, APPENDIX C. Two platforms made their evaluation criteria available online.[79], [80]–[82] The platforms focussed on operability, privacy, security, content, technical aspects, usability, maliciousness, vulnerability and reliability.

2.2.2.3 (Certification and) curating

The curating platforms were all commercial; they assessed and certified mHealth apps, after which the certified apps were kept in a database to be used by health professionals. We identified three platforms that offered curating of mHealth apps.[83]–[85]

In their certification process they looked at six categories of standards: privacy & security, content, technology and interoperability, data and data transfer, behavioural science and consumer engagement.[83] One curating platform was part of an insurance company.[85]

Appthority [86] offered an app risk management service focussing on apps in general and provided an automated service that assessed applications. Appthority analyzed the binary files of applications by performing static code analysis, dynamic analysis with emulators, and behavioural analysis. All tests were automatic. Feedback was given in the form of an Appthority trust score and reports on security risks, privacy issues, intellectual property exfiltration, encryption use, data sharing with ad networks, app URL traffic, developer reputation and access to third-party cloud storage.
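As a purely hypothetical illustration of such automated scoring (Appthority's actual model is not public; the finding categories, weights and scale below are invented):

```python
# Hypothetical severity weights per finding category (0 findings = benign).
WEIGHTS = {
    "security_risk": 3.0,
    "privacy_issue": 2.0,
    "ad_network_sharing": 1.0,
    "unencrypted_traffic": 2.5,
}


def trust_score(findings: dict[str, int], base: float = 100.0) -> float:
    """Start from a perfect score and subtract weighted finding counts."""
    penalty = sum(WEIGHTS.get(kind, 1.0) * count
                  for kind, count in findings.items())
    return max(0.0, base - penalty)


# Example: automated analysis of one app surfaced these finding counts.
print(trust_score({"privacy_issue": 4, "ad_network_sharing": 10}))  # -> 82.0
```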

Table 5 – Identified organizations and platforms in online Google search action

Review platforms:
- Medical App Journal [87]
- HealthTap – Rx [68]
- PHRs Today [71]
- iMedicalApps [69]
- SoftwareAdvice [75]
- My health apps [14]
- Artsennet [70]

Certifying platforms:
- Happtique [80]–[82]
- Intertek [88]
- ICSA Labs [89]
- AT&T [91]
- KLAS Research [79]

Certification and curating:
- Goyou (Cigna and Social Wellth) [83]
- Appthority [86]
- IMS Health [90]


2.2.2.4 Governmental criteria organizations

We found four governmental organizations that provided certification for mHealth apps. The U.S. Department of Veterans Affairs requires all mobile health apps used in the VA to be assessed in a certification process, to ensure usability, reliability, privacy, security and safety.[93], [94] The International Medical Device Regulators Forum (IMDRF) [95] is an international group of medical device regulators who together build on the foundational work of the Global Harmonization Task Force on medical devices (GHTF). They assess software that is considered a medical device, as some mHealth apps are.[96]

The Australian Government Department of Health / New Zealand Therapeutic Goods Administration [97] is a governmental organization that regulates medical devices, including medical software that is considered a device. Lastly, Canada Health Infoway [98], [99] offered certification services for consumer health applications. Its standard certification consisted of an assessment of privacy, security, interoperability and management.

2.2.2.5 Miscellaneous

In addition to the previously mentioned results, we found two organizations that were not directly related to the assessment of mHealth apps but to the evaluation of online health information; they are therefore included here.

The Health On the Net Foundation (HON) [21], [100] offered self-certification to improve the quality of online health information. The HONcode of conduct is based on a multi-stakeholder consensus on standards and certifies websites using 8 principles: authority, complementarity, confidentiality, attribution, justifiability, transparency, financial disclosure and advertising.[101]
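
As a small illustration of how such a fixed set of principles lends itself to a checklist-style assessment, the sketch below encodes the eight HONcode principles as a simple pass/fail list. The principle names come from the HONcode; the checklist mechanism itself is our own assumption, not HON's actual certification procedure.

# Minimal HONcode-style self-assessment checklist (illustrative only).
HONCODE_PRINCIPLES = [
    "authority", "complementarity", "confidentiality", "attribution",
    "justifiability", "transparency", "financial disclosure", "advertising",
]

def honcode_report(assessment: dict) -> str:
    # assessment maps each principle to True (met) or False (not met).
    # A site would only qualify if all eight principles are met.
    unmet = [p for p in HONCODE_PRINCIPLES if not assessment.get(p, False)]
    if not unmet:
        return "All 8 HONcode principles met."
    return "Unmet principles: " + ", ".join(unmet)

print(honcode_report({p: True for p in HONCODE_PRINCIPLES}))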

The Information Standard was a certification programme for organizations offering health-related information on their websites. The programme was commissioned by NHS England and run by Capita. The Information Standard aims to ensure that the information on health-related websites is clear, accurate, impartial, evidence-based and up-to-date.[102]

All platforms are shown in Table 9, APPENDIX C.

2.3 Discussion

2.3.1 Summary of the main results

Our review resulted in a broad overview of the frameworks, tools, platforms, criteria and other approaches available for the (quality) evaluation of eHealth (related) systems. We identified 44 articles and extracted 85 frameworks and models, 24 organizations and 109 criteria from the scientific literature. A number of articles, organizations and models seemed particularly important for our peer-review tool, because they provided a set of criteria for the evaluation of health information or mHealth apps and were not disease-specific. We found four organizations that offered peer-reviewing or certification of mHealth apps.[27] Only one of them provided a full list of the criteria used, but use of the list by others was not permitted.[103] One article provided a set of criteria for the disclosure of information about mHealth apps by developers; the tool was disease-independent and focussed on multiple domains, e.g. data handling, privacy and conflicts of interest.[33] In addition, we found two organizations (DISCERN and HON) [35] and one model (FA4CT) [36] that provided a set of criteria for the evaluation or assessment of online health information. We also found two articles about the CONSORT(-eHealth) criteria [50], [51], used to standardize evaluation reports of web-based and mobile health interventions. Lastly, we found an article explaining Health-ITUEM, a model that can be used for the usability evaluation of mHealth apps.

From the Internet search we identified 9 review platforms, 5 certification bodies, 3 certification tools for certifying health information on the web, 4 governmental bodies that provide certification of mHealth apps and 2 other bodies. The review platforms did not publish a set of criteria on their websites, but we were able to extract some of their criteria from their websites and reviews. Two of the certification organizations did publish the list of criteria they used, but use of the list by others was not permitted.[79], [80]–[82] The other organizations/platforms either did not provide a list of criteria or were governmental organizations operating at a different level of criteria, i.e. legislation. Lastly, we found two organizations that used a set of criteria for the evaluation of online health information.

2.3.2 Elements that influence the evaluation of mHealth apps

Mobile health evaluation approaches stem from the evaluation of telemedicine systems. Telemedicine, the use of telecommunication technology to provide medical assistance over a distance, has been available for a long time; one of the first articles discussing the use of telemedicine systems dates from 1950.[104] A great deal of research has been performed in this area, and frameworks and methods for the evaluation of these systems are available. The terms telemedicine and eHealth are often used together. Secondly, mHealth evaluation has been influenced by the approaches widely used for the evaluation of health information websites, e.g. quality seals, the DISCERN tool [105] and the HON criteria [21]. Lastly, mHealth evaluation approaches are influenced by the evaluation of computer systems in general, particularly usability evaluation approaches.

2.3.2.1 Approach to evaluation

There is a shared view in the articles that evaluation should start during the design phase of an application or system and should continue until after deployment and actual use. It is important that stakeholders and end-users are involved in all stages of development and evaluation so that problems can be tackled as early as possible; however, this is no guarantee that the application will be successful and effective. That is why pilot evaluations and randomized clinical trials (RCTs) were often mentioned as evaluation methods in the methodological frameworks. Trials were combined with a form of usability evaluation, either via a questionnaire, by using the Think Aloud method [57], [58] or by performing a Cognitive Walkthrough.[57], [59]

RCTs have long been considered the gold standard of evidence in healthcare and have been used for the evaluation of drugs as well as computer systems, yet few RCTs have been performed to evaluate these systems. Opponents argue that RCTs are too slow (3-4 years) for the fast-moving commercial world of mobile health development: the technology changes so fast that by the time the trial is done, new products are already on the market.[106]

To deal with this fast-moving world, companies have been founded to separate the bad applications from the better ones, either to make revenue or as initiatives by patients' or healthcare workers' organizations. Although the platforms used comparable criteria, they did not all use the same criteria or discuss them in the same way (see Table 4, Table 12 & Table 13). None of the reviewing platforms appeared to perform a constructive, evidence-based assessment of mHealth applications; instead, the available reviews were based on the personal impressions and experience of the reviewers in the field. It remained unclear whether pilot tests or RCTs were performed. This is different for the governmental organizations, which sometimes do perform pilot tests or RCTs. Certifying and curating platforms seemed to have a clearer list of elements against which applications are tested. We found many organizations and initiatives for reviewing mHealth applications, but only two certifying organizations [79], [80]–[82] that published their full set of criteria, plus one set of criteria published in a scientific journal.[33] Few clear guidelines or best practices are thus available, which leads to undesirable and questionable situations: how do the other platforms review and certify their apps? Does only one person perform the reviews? How much is their opinion worth? What does their experience tell us about the quality or substance of the review?
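
One way to make the partial overlap in criteria explicit would be to treat each platform's published criteria as a set and quantify the overlap, as in the sketch below. The platform names and criteria shown here are placeholders invented for the example, not the platforms' actual lists.

# Illustrative comparison of criteria coverage between two platforms,
# using set operations. All names below are placeholders.
platform_criteria = {
    "PlatformA": {"privacy", "security", "content", "usability"},
    "PlatformB": {"privacy", "content", "interoperability"},
}

a = platform_criteria["PlatformA"]
b = platform_criteria["PlatformB"]
shared = a & b
jaccard = len(shared) / len(a | b)  # overlap as a fraction of all criteria used
print(f"Shared criteria: {sorted(shared)}; Jaccard overlap: {jaccard:.2f}")
# Shared criteria: ['content', 'privacy']; Jaccard overlap: 0.40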

2.3.3 Strengths and limitations of the study

To the best of our knowledge, this was the first review performed with the goal of creating an overview of the existing frameworks, theories, organizations and platforms available for the evaluation of mHealth applications. We found a large number of platforms, frameworks and criteria available for the evaluation of mHealth apps. We categorized and extracted criteria from the articles included in the PubMed search actions and from the selected review platforms of the Google search actions. The list of criteria gives insight into what the community considers important when reviewing mHealth applications.

Referenties

GERELATEERDE DOCUMENTEN

The conceptual framework that was designed based on the context analysis cannot be used to evaluate reputation systems effectively without validating its features.. To expand and

Het peer review team beveelt ons aan om zowel binnen als buiten onze organisatie te verduidelijken hoe wij onze unieke rol en mandaat als de hoogste controle- instantie van

We also take a look at the role peer review has in (recent) mainstream philosophy, which we identify with the kind of philosophy that has dominated prominent philosophy journals

Er zijn kansen zijn voor alternatieve verwerking van reststromen: enerzijds doordat overheid en maatschappelijke organisaties potentie zien in de toepassing van biomassa

In SWOV-rapport R-93-13 worden genoemde verzamelingen gegevens onder de loep genomen en wordt hun gebruiksmogelijkheden voor onderzoek en beleid beschreven. Vervolgens

The goals of the first study were (1) to see whether author ’s ratings of the transparency of the peer review system at the journal where they recently published predicted

We have set out the performance of these devices in three im- portant benchmarks; Essential coin validation, which gauges performance of coin validation upon coin withdrawal or

Een andere hulpzoeker geeft expliciet aan zijn 3e product van hetzelfde type uit te pakken, nadat de voorgangers 2 jaar hebben gefunctio- neerd; hieruit wordt duidelijk dat