Visualizing uncertainty in drug checking test result reports during the opioid crisis: a design study


by

Jorin Diening Weatherston B.Seng, University of Victoria, 2017

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of

Master of Computer Science

in the Department of Computer Science

© Jorin Diening Weatherston, 2020

University of Victoria

All rights reserved. This thesis may not be reproduced in whole or in part, by photocopying or other means, without the permission of the author.


Visualizing Uncertainty in Drug Checking Test Result Reports During the Opioid Crisis: A Design Study

by

Jorin Diening Weatherston B.Seng, University of Victoria, 2017

Supervisory Committee

Dr. Margaret-Anne Storey, Supervisor (Department of Computer Science)

Dr. Charles Perin, Committee Member (Department of Computer Science)

Dr. Dennis Hore, Committee Member (Department of Computer Science)


ABSTRACT

Potent opioids (fentanyl) are entering recreational drug manufacturing processes, sometimes without the knowledge of people who use drugs. This is contributing to tens of thousands of accidental overdose deaths each year. Recreational drug checking services during the opioid crisis face unique challenges in delivering test results to people who use drugs. These challenges are caused by uncertainties in drug composition, the chemical analysis processes used, and the complex contextual considerations of drug checking services themselves. In this thesis I describe a design study in collaboration with a local drug checking service to explore visualizing uncertainty in drug checking test result reports. From this research we generate a number of research contributions. I have identified the new and impactful application domain of visualizing uncertain drug checking test results. Within this application domain I conducted a design study to generate a test result report that suits the problem context and accomplishes the design goals described by the drug checking service stakeholders. This design study generates reflective considerations on conducting design studies in this context, intermediate design artifacts, and finally a test result report software application. The design study also led to the identification of a new uncertainty visualization design space for proportional charts. I apply that design space in the generation of some intermediate design artifacts. I position these research contributions within the drug checking and uncertainty visualization research fields, and describe our planned future work in the hopes that future research will positively impact the application domain.


Contents

Supervisory Committee ii

Abstract iii

Table of Contents iv

List of Tables viii

List of Figures x
Acknowledgements xiv
Dedication xv
1 Introduction 1
1.1 Motivation . . . 1
1.2 Contributions . . . 3
1.3 Thesis Layout . . . 4
1.3.1 Part One . . . 4
1.3.2 Part Two . . . 5
1.3.3 Part Three . . . 5
2 Methodology 6
2.1 The Design Science Paradigm . . . 6

2.2 Information Location and Task Clarity in Design Studies . . . 7

2.3 The Relevance, Rigour, and Visual Cycles . . . 9

2.4 Presenting Contributions using Technological Rules, Novelty and a Visual Abstract . . . 9

2.5 Design Study Structure . . . 10


2.6.1 Five Design Sheet Methodology . . . 12

2.6.2 Design Space Exploration Process . . . 14

3 Drug Checking Service Context 16
3.1 Stakeholder Types . . . 16

3.1.1 Clients . . . 18

3.1.2 Harm Reduction Workers . . . 19

3.1.3 Chemical Analysts . . . 21

3.2 Drug Testing Systems and Test Result Formats . . . 22

3.2.1 Component and Percent Composition Test Results . . . 23

3.3 Uncertainty in the Drug Checking Service . . . 24

3.4 Existing Test Result Delivery Methods . . . 27

3.4.1 Literature Concerning the Communication of Drug Checking Test Results . . . 29

4 Requirements Analysis 32
4.1 Requirements and Acceptance Criteria Gathering Processes . . . 32

4.1.1 Semi-Structured Interviews . . . 33

4.1.2 Interview Protocol . . . 34

4.1.3 Design Feedback Meetings . . . 34

4.1.4 Design Feedback Survey . . . 35

4.2 Requirements and Acceptance Criteria . . . 37

5 Design Goal One: Visualizing Percent Composition and Component Composition 40
5.1 Improving Drug Checking Test Results Delivery Using Charts . . . 40

5.2 Selecting Appropriate Charts . . . 41

5.2.1 Percent Composition Chart . . . 43

5.2.2 Component Composition Chart . . . 44

6 Design Goal Two: Visualizing Uncertainty in Percent Composition 46
6.1 Empowering Clients with Uncertainty and Confidence Data in Test Results . . . 46

6.2 Uncertainty in Percent Composition . . . 48

6.2.1 Characterizing Uncertainty in Percent Composition Test Results . . . 50
6.2.2 Uncertainty Visualizations for the Public . . . 52


6.2.3 Design Guidance for Visualizing Uncertainty . . . 52

6.3 Unquantified Uncertainty Design Space . . . 55

6.3.1 Preliminaries . . . 56

6.3.2 Step 1 - Breakdown . . . 56

6.3.3 Step 2 - Dimensions . . . 57

6.3.4 Step 3 - Systematic Exploration . . . 58

6.3.5 Step 4 - Application . . . 59

7 Design Goal Three: Visualizing Confidence in Component Composition 62
7.1 Confidence in Component Composition . . . 62

7.1.1 Characterizing Confidence in Component Composition Test Results . . . 64

7.1.2 Design Guidance for Visualizing Confidence in Component Composition . . . 65

7.1.3 Generating Confidence Indicator Design Alternatives . . . 66

8 Additional Report Design Goals 69
8.1 Design Goal Four: Digital and Handout Reports Must be the Same . . . 69
8.2 Design Goal Five: The Visual Report Must Present Basic Drug Checking Service Information . . . 71

8.3 Design Goal Six: The Visual Report Must Present Descriptors of the Drug Sample . . . 72

8.4 Design Goal Seven: The Visual Report Must Highlight Fentanyl In the Test Results . . . 74

8.5 Design Goal Eight: Chemical Analysts Must be Able to Interpret the Test Results . . . 76

8.6 Design Goal Nine: The Visual Report Must Explicitly Disclaim Itself . . . 77
8.7 Final Report Design . . . 78

9 Implementation 82
9.1 Artifact Deployment Iteration: Make Stage . . . 82

9.2 Artifact Deployment Iteration: Deploy Stage . . . 84

9.3 Using the Application . . . 85


10.1 Contribution 1: Design Study . . . 91

10.2 Contribution 2: Design Space . . . 97

10.2.1 Evaluating the Unquantified Uncertainty Design Space . . . . 98

10.2.2 Design Space Consistency . . . 99

10.2.3 Design Space Completeness . . . 100

10.2.4 Using the Unquantified Uncertainty Design Space as a Visualization Researcher . . . 102

10.2.5 Understanding Designs Produced by the Unquantified Uncertainty Design Space as an End User . . . 103

10.3 Contribution 3: Visual Report . . . 103

10.3.1 Dominant Design Trade-offs . . . 103

10.3.2 Transferability . . . 104

10.3.3 Limitations . . . 105

11 Future Work & Conclusions 107
11.1 Future Work . . . 107

11.1.1 Evaluation of the Drug Checking Test Result Digital Report and Handout Report . . . 107

11.1.2 Unquantified Uncertainty in Proportional Charts Design Space . . . 108
11.2 Conclusions . . . 109

Bibliography 111

A Drug Checking Service Flow 117


List of Tables

Table 2.1 Design iterations and deployment iteration with primary activities. . . . 13
Table 3.1 Chemical analysis methods and type of drug checking test results produced. . . . 23
Table 3.2 A depiction of components and their representative hit-scores. Hit-scores go from low to high confidence. Ratios of 1 represent a complete identification. The SERS scale is qualitative, and based on an unexposed internal set of thresholds; chemical analysts only get to see the colours. . . . 24
Table 3.3 This is an example of an FTIR test output. It shows a list of components and the percent of the sample's composition they each comprise. . . . 24
Table 3.4 In-person communication of results directly to clients across 31 services. Note that some services deliver results in multiple ways. [4] . . . 27
Table 4.1 This table presents the connections between my stakeholders, information gathering processes, design constraints, and design goals. The Requirements and Acceptance Criteria column includes short descriptions of design guidance gathered at a high, medium and low level of abstraction. The Design Goals column indicates which design goals satisfy the requirements and acceptance criteria. The Source column indicates where requirements and acceptance criteria were collected from. The Process column indicates which technique was used to collect the requirements and acceptance criteria. I describe the codes below the data. . . . 38


Table 5.1 A prioritization of common visualization tasks to select charts for each data type [45]. The original set of tasks is in the left column, rank numbers for my orderings are in the middle-left column, task ordering for percent composition is in the middle-right column, and task ordering for component composition is in the right column. 42


List of Figures

Figure 2.1 Sedlmair et al.'s [47] diagram depicting information location and task clarity. Design projects can be placed within these axes to understand whether or not a project could be conducted using a design study. . . . 8
Figure 2.2 The complete research timeline composed of design and deployment iterations, and five design activity stage types. . . . 12
Figure 3.1 A diagram of service flow within Substance with stakeholders and stages. . . . 17
Figure 3.2 A row from the EcstasyData.org database. Accessed: 01/06/2019 . . . 28
Figure 3.3 A sample test result report from the EcstasyData.org database. Accessed: 01/06/2019 . . . 29
Figure 4.1 The complete research timeline with stakeholder feedback and requirements analysis processes highlighted. . . . 33
Figure 5.1 The percent composition pie vs cake charts (left), and the component composition table chart (right). . . . 43
Figure 6.1 The percent composition pie and cake charts with 100% axis. . . . 47
Figure 6.2 Decomposition of the pie and cake chart into visual marks. . . . 57
Figure 6.3 Examples of low, medium and high manipulations to individual visual variables of individual visual marks. . . . 59
Figure 6.4 An example of combining manipulations to individual visual variables to create new, compound manipulations. . . . 61
Figure 6.5 Applying zig-zag line and dotted line modifications to the pie and cake charts to generate design alternatives. . . . 61
Figure 7.1 The component composition chart, with confidence data cells
Figure 7.2 Component composition confidence indication using a linear scale which balances between green confidence and red uncertainty. . . . 67
Figure 7.3 Component composition confidence indication using a multi-state iconographic black and white icon scale. . . . 68
Figure 8.1 Early visual report design with color eventually became a black and white final report design. . . . 71
Figure 8.2 Design elements that display the service information in the report. . . . 72
Figure 8.3 Photographic, iconographic, and textual design iterations for displaying sample identification. . . . 74
Figure 8.4 Highlighting fentanyl results throughout the visual report. Red highlights the locations in the report that the indicator was placed. . . . 76
Figure 8.5 Designs for presenting qualitative interpretations of the test results as a whole. . . . 77
Figure 8.6 An example of the report disclaimer. The content will surely change as the service evolves. . . . 78
Figure 8.7 Final Report Design Page 1 . . . 79
Figure 8.8 Final Report Design Page 2 . . . 80
Figure 9.1 The complete research timeline with the deployment iteration highlighted. . . . 82
Figure 9.2 The software application visual report with sections containing drug identifiers, fentanyl test results, component composition and percent composition charts. This is a work in progress. . . . 84
Figure 9.3 The software application visual report with sections containing qualitative interpretations, and service information. Hours and locations are still to be added. This is a work in progress. . . . 84
Figure 9.4 The visual report software when the chemical analyst is searching for a clientID within the database. . . . 85
Figure 9.5 The visual report software with preloaded data and areas the chemical analyst must enter data into in order to finalize the report highlighted in red. . . . 86
Figure 9.6 The visual report with the three fields that must be manually filled out partly complete. Icons indicate how these fields are to be filled out. . . . 87
Figure 9.7 The visual report software with a finalized report ready to be saved to the database and printed onto paper for use in the harm reduction conversation. . . . 89
Figure 10.1 This visual representation of the design study captures the relevance, rigour, and design cycles from Hevner [18], and presents the outcome as a technological rule from van Aken [52]. This presentation style was inspired by Storey et al.'s visual abstract [51]. . . . 97
Figure 10.2 An example of the desired overlap between design space dimensions. Note how changes to the nature of the boundary edge mark's width cause changes in the area mark's area. This overlap is a positive overlap in this context as the boundary edge angle, and the segment area are both becoming ambiguous. The more the boundary edge mark's width increases, the more the area is obscured. . . . 100
Figure A.1 A diagram depicting the stakeholders and processes within the Substance drug checking service. . . . 118

Figure B.1 The design feedback survey. . . 120

Figure B.2 The design feedback survey. . . 121

Figure B.3 The design feedback survey. . . 122

Figure B.4 The design feedback survey. . . 123

Figure B.5 The design feedback survey. . . 124

Figure B.6 The design feedback survey. . . 125

Figure B.7 The design feedback survey. . . 126

Figure B.8 The design feedback survey. . . 127

Figure B.9 The design feedback survey. . . 128

Figure B.10 The design feedback survey. . . . 129
Figure B.11 The design feedback survey. . . . 130
Figure B.12 The design feedback survey. . . . 131
Figure B.13 The design feedback survey. . . . 132
Figure B.14 The design feedback survey. . . . 133
Figure B.15 The design feedback survey. . . . 134
Figure B.16 The design feedback survey. . . . 135
Figure B.17 The design feedback survey. . . . 136
Figure B.18 The design feedback survey. . . . 137
Figure B.19 The design feedback survey. . . . 138
Figure B.20 The design feedback survey. . . . 139
Figure B.21 The design feedback survey. . . . 140
Figure B.22 The design feedback survey. . . . 141
Figure B.23 The design feedback survey. . . . 142
Figure B.24 The design feedback survey. . . . 142
Figure B.25 The design feedback survey. . . . 143
Figure B.26 The design feedback survey. . . . 144
Figure B.27 The design feedback survey. . . . 145
Figure B.28 The design feedback survey. . . . 146
Figure B.29 The design feedback survey. . . . 147
Figure B.30 The design feedback survey. . . . 148
Figure B.31 The design feedback survey. . . . 149
Figure B.32 The design feedback survey. . . . 150
Figure B.33 The design feedback survey. . . . 151
Figure B.34 The design feedback survey. . . . 152
Figure B.35 The design feedback survey. . . . 153
Figure B.36 The design feedback survey. . . . 154
Figure B.37 The design feedback survey. . . . 155
Figure B.38 The design feedback survey. . . . 156


ACKNOWLEDGEMENTS

I would like to thank:

All of the people who have supported this research and this writing process. Friends, family, and colleagues all helped me navigate this transformative learning process successfully, and you have each had an immeasurably positive impact on me and this work. Of particular importance are my supervisor Dr. Margaret-Anne Storey, committee members Dr. Charles Perin and Dr. Dennis Hore, and Cassandra Petrachenko. You led me, personally, professionally, and scientifically throughout this academic chapter, and helped bring this work to fruition. Thank you for your incredibly important contributions to all aspects of this work and this chapter in my life. To my friend Dr. Eirini Kalliamvakou, I would like to say thank you for reviewing and assisting me in all aspects of creating this thesis, but also for being so generous with your time and thoughtfulness in our conversations. To the members of the Computer Human Interaction and Software Engineering (CHISEL) Lab, you are an amazing team and a wonderful group of people to spend three years with. I care about each of you, and look forward to connecting wherever we are around the world.

Of critical importance to my success in this endeavour were the Bell ladies. Pamela, Jessica, and Cheryl, and especially my beautiful girlfriend Vanessa, you all supported me as I climbed the heights in the immediate ways that someone who is working hard cannot seem to remember to do. Thank you for caring for me so well and for being the buoy to my anchor; you are the best kind of people and deserve all the love in the world.

To Cam, Bet and Connor (collectively the Weatherstons) I give you credit for raising me to push myself, having numerous characteristics I aspire to embody, and for loving and supporting me at every step throughout my entire life. You are my counsel, my admired friends, and my cherished family. I selfishly wish for your long and happy lives, and for more amazing adventures together from here on out.

This was impossible to do alone, and amazing to do together. To all my friends and family, my colleagues and collaborators, thank you deeply and sincerely.


DEDICATION


Chapter 1

Introduction

Recreational drugs (drugs) are not subject to manufacturing monitoring or regulated labelling. People who use drugs depend on the honesty and knowledge of drug dealers for access to a safe drug supply. Drug overdoses causing injury and death can and do occur. For example, opioids, and particularly fentanyl and its potent chemical analogues, have been responsible for more than 10,300 deaths in Canada between January 2016 and September 2018^1. The same dataset indicates that between January 2018 and September 2018, there were 3,286 opioid overdose deaths, of which 93% were accidental^1.

Fentanyl is 100 times stronger than morphine^2, and carfentanil is 10,000 times stronger than morphine^3; that is, carfentanil is roughly 100 times more potent than fentanyl. Drug checking services are finding these opioids and their chemical analogues as adulterants throughout the illicit drug market. For example, heroin, marijuana, meth, cocaine, and even counterfeit prescription drugs have been found to contain fentanyl [14]. The unmonitored spread of powerful opioids throughout the drug supply is known in 2020 as the "global opioid crisis".

1.1 Motivation

Drug checking services are one harm reduction response to the global opioid crisis. I collaborated with one such drug checking service, called Substance^4, to conduct the research within this thesis. Substance is a drug checking service run by a team of chemists and social workers in collaboration with partner sites offering social services in Victoria, BC. They offer a free, onsite drug checking service to any member of the public who wishes to know more about their drug's contents.

^1 https://infobase.phac-aspc.gc.ca/datalab/national-surveillance-opioid-mortality.html
^2 https://www.cdc.gov/drugoverdose/opioids/fentanyl.html
^3 https://pubchem.ncbi.nlm.nih.gov/compound/carfentanil
^4 substance.uvic.ca

Substance faces unique challenges and conflicting constraints in delivering its services. For example, this service is delivered onsite at multiple sites to serve the communities struck hardest by the opioid crisis. However, the service's movement between partner sites means drug checking systems must be mobile, which limits both the error-reporting features the mobile chemical analysis machines possess and the error-reducing processes service staff can conduct. Larger machines intended for stationary use can employ hardware-dependent measurement checks and even conduct cross-analyses to corroborate findings within a single system. Additionally, the service must return test results within 30 minutes. The short timeframe further reduces the possibility of performing accurate error measurements on each drug sample. These constraints are just some of the trade-offs faced by this mobile onsite drug checking service.

Substance utilizes five chemical analysis technologies to test drug samples. Each technology introduces uncertainty into test results in the form of measurement and procedural error. Each technology has different levels of sensitivity and accuracy. Each technology has a different chemical analysis process. Each technology has fallible human beings operating it.

As a result, instruments do not always agree with one another on the components they see. Disagreements between the outputs of different technologies are frequent. Some machines are entirely blind to components that others can see easily. Quantifying this complex error to the point of complete safety within short timeframes is not something that an onsite drug checking service can do. Consequently, Substance can never guarantee the safety of a drug sample or provide a safe dosage of a drug sample. Drug consumption decisions must always lie with the people who use drugs, also called clients of the service in this thesis.

Therefore, for clients to make informed drug use decisions, it is necessary to provide them with actionable test results. Providing actionable test results involves describing the chemical analysis processes that generate test results. It also involves describing the shortcomings of the drug checking process and relevant forms of uncertainty in test results. A tool that captures all the necessary test result data and standardizes the delivery of drug checking test results is therefore needed.

At Substance, harm-reduction workers present test results during a harm-reduction conversation. During this conversation, clients ask questions about drug use and drug content, and receive harm reduction resources. This harm-reduction conversation is also where clients learn about uncertainty in the test results.

Ensuring clients understand test results can be challenging. Clients sometimes come into the service in altered mental states [31], sometimes carrying multiple samples, and sometimes bringing samples on behalf of others to whom they deliver results later. Substance staff have indicated that, due to these complicated circumstances, clients struggle to keep harm-reduction conversation outcomes matched with test results, and test results matched with physical drug samples.

Substance described situations where misunderstandings of test results could cause harm. A client who receives a correct test result indicating 91% confidence that their drug sample contained caffeine could go on to confront and change their drug dealer. The client might do this because they misunderstood the result as meaning the drug sample consisted of 91% caffeine, indicating a low-quality purchase. A misunderstanding like this could have a negative impact on the client.
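
To make the distinction concrete, the following is a minimal illustrative sketch (hypothetical field names and values, not the service's actual data model) of how a confidence score attached to an identification differs from a percent-composition value:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ComponentResult:
    """One identified component in a drug checking test result (hypothetical model)."""
    name: str
    identification_confidence: float           # how sure the analysis is that the component is present (0-1)
    percent_of_sample: Optional[float] = None  # how much of the sample the component makes up, if quantified


# The misunderstanding described above: a 0.91 identification confidence for caffeine
# is not the same thing as caffeine making up 91% of the sample.
caffeine = ComponentResult(name="caffeine", identification_confidence=0.91)

print(f"Confidence that caffeine is present: {caffeine.identification_confidence:.0%}")
print(f"Measured share of the sample: {caffeine.percent_of_sample}")  # None: not quantified here
```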

Thus, the Substance team has called for the creation of a visual drug checking report, in the hope of solving some of these problems. This report would ideally visually present test results, highlight uncertainty and dangerous components, and facilitate safer drug use decisions and better harm reduction conversations.

Including uncertainty in test results is likely to make the data and visualizations more complex, and visualization researchers face non-trivial barriers to including uncertainty within representations [20]. However, Roth et al.[44] indicate that it is the visualization researcher’s responsibility to reveal uncertainty to end-users of visualizations so they can make informed decisions, even when end-users are the general public. Recent work by Correll et al.[11] has indicated that as visualization researchers, we have the ethical responsibility to reveal hidden uncertainty in our visualizations.

I joined Substance as a visualization researcher to create a visual test result report (visual report) that captures test result data, its uncertainty, and qualitative interpretations. The rest of this thesis describes a design study I conducted to create such a visual report for the onsite Substance drug checking service during the opioid crisis.

1.2 Contributions

The following are the primary contributions made by this research.

Contribution 1: Design Study My first contribution is a design study in the unexplored and important drug checking visualization research domain. This design study allows me to characterize the problem space, identify critical design goals, and gather and analyze stakeholder feedback. Through the design study, I generate a final design and I implement and deploy a solution to the problem context. I situate the design study as interdisciplinary research relying on and contributing to both the drug checking literature and visualization design literature.

Contribution 2: Design Space A new design space comprises my second contribution. It is a design space for visualizing uncertainty in percent composition test results. I apply this design space to generate design alternatives within the drug checking visualization problem domain. I analyze the design space in terms of consistency and completeness and discuss its effectiveness.

Contribution 3: Visual Report My third contribution is the visual report itself. This contribution primarily benefits the drug checking research domain. The visual report design and the visual report artifact are both included in this contribution. I discuss the transferability and generalizability of the visual report within the drug checking domain, and also discuss the effectiveness of the visual report once deployed within a real drug checking context.

1.3 Thesis Layout

This thesis is structured as follows.

1.3.1 Part One

Chapter 2 Methodology; introduces the design science paradigm, outlines the design study structure I adopted and the additional methodologies I used within design processes. This chapter also introduces the design science concepts necessary to contextualize the rest of the thesis.

Chapter 3 Drug Checking Service Context; describes contextual information needed to perform a design study in this visualization application domain. This chapter includes descriptions of stakeholders, the application domain and problem characterization. It also includes drug checking test results and forms of uncertainty and confidence.

Chapter 4 Requirements Analysis; describes stakeholder input and the feedback gathering processes I used. I present nine design goals distilled from the requirements and acceptance criteria gathered from stakeholder feedback.

1.3.2 Part Two

Chapter 5 Design Goal One: Visualizing Percent Composition and Component Composition; describes how I satisfied design goal one, that of visualizing percent composition and component composition with charts. I select baseline charts and design them to present the percent composition and component composition data formats.

Chapter 6 Design Goal Two: Visualizing Uncertainty in Percent Composition; describes how I satisfied design goal two, that of visually representing uncertainty in percent composition test results. This includes the identification of a new design space I used in generating design alternatives which present uncertainty in percent compositions.

Chapter 7 Design Goal Three: Visualizing Confidence in Component Composition; describes how I satisfied design goal three, that of representing confidence in component composition test results. This includes the creation of an iconographic scale to indicate confidence in component composition.

Chapter 8 Additional Report Design Goals; describes how the visual report satisfied the remaining six design goals. I describe how these design goals are satisfied, and I explain transitions between design alternatives using stakeholder feedback and requirements.

Chapter 9 Implementation; lists the technologies used to implement the report and describes the choice of each technology. I describe how the report visualization system fits into the drug checking context.

1.3.3 Part Three

Chapter 10 Discussion; discusses the resulting report in terms of meeting design goals, and the primary trade-offs present in the final design. Also included is a qualitative analysis I conducted of the newly identified design spaces, described in terms of consistency and completeness. I also discuss generalizability and transferability of contributions within and outside the drug checking research field.

Chapter 11 Future Work and Conclusion; describes the plans for conducting a user study given the unique restrictions on stakeholder access. I also describe areas of research that may benefit from this work and conclude the thesis.


Chapter 2

Methodology

In this chapter I present the overarching design science research paradigm this research falls within. I also describe in detail the design study methodology that I used to structure the research as well as additional methodologies that I used within the design study.

2.1 The Design Science Paradigm

Design science is a paradigm for generating knowledge about designing "a better world" [52, p.4]. Additions to the body of design science knowledge come in the form of design study contributions. Each design study's goal is to solve a real design problem with one or more specific design solutions and generate design knowledge. These design problems and solutions must be characterized and validated using accepted design science techniques. With the quality of the pairing of design solutions with design problems verified, it is then possible to gain insight into solving a class of design problems. According to van Aken, the general design insights in combination with the problem-solution pairing form a design science knowledge contribution [52]. The quality of design science contributions is dependent on the quality of the resulting solution design, the design methodologies used, and the evaluations performed. van Aken [52] notes that the relevance of a solution to its problem is directly related to the quality of the designer and design process inputs. The rigour of design science research is similarly proportional to the quality of the evaluation of the solution in its context. Design science researchers can enhance the rigour and relevance of their research through triangulation between information sources, controlled observations, and cross-case analyses [52].

Design science research can be complicated because it typically solves design problems within socio-technical contexts. Socio-technical contexts are arrangements of information technology, people, and related processes [36]. van Aken [52] describes how solving design problems within socio-technical contexts is challenging partly due to the uniqueness and unpredictability of human agency. Additionally, designing within socio-technical systems inherently involves political and ethical design considerations. Human agency, politics, and ethics are all factors that impact design outcomes during a design study.

2.2 Information Location and Task Clarity in Design Studies

This research is a design study focused on designing a visual report for the Substance drug checking service during the opioid crisis. In Sedlmair et al.'s [47] touchstone research on conducting design science within the domain of information visualization, the authors indicate that the primary objective of design studies is to solve open-ended visual design problems. Sedlmair et al. state that design studies are problem-driven research projects wherein design researchers create solutions for real people and real problems with creatively derived design solutions. According to Sedlmair et al., the design study process is flexible and iterative to identify design objectives, generate design alternatives, and improve problem-solution fit. This flexibility enables design studies to collect emergent requirements and generate designs that suit contextual nuances.

Sedlmair et al.[47] describe in their work the type of visualization research which design studies are best suited for by defining two descriptive axes as shown in Figure 2.1. The first axis is an information location axis with a range from inside the expert’s head to inside the computer. The second axis is a task clarity axis with a range from wholly described to completely undefined. Research projects fall into this two-dimensional problem space based on the location of information and whether critical tasks are fuzzy or crisp in their definition.

In this design study I have access to test results within a database; however, my stakeholders within the socio-technical context possess critical knowledge about drug checking test results. This locates information within the desirable middle area of the information location axis.

Figure 2.1: Sedlmair et al.'s [47] diagram depicting information location and task clarity. Design projects can be placed within these axes to understand whether or not a project could be conducted using a design study.

As for task clarity, I have relatively clear descriptions of the tasks that solutions generated by the design study are expected to support. However, as within most socio-technical settings, emergent behaviour and situations must be adapted to by stakeholders to ensure client safety while making drug use decisions. This research is thus within the desirable middle area of the task clarity axis.


2.3 The Relevance, Rigour, and Visual Cycles

I also rely upon Hevner’s [18] three design science cycles to describe how the ac-tions taken during the design science research relate to accepted processes within the design science literature. Hevner describes three cycles that connect three con-texts to describe the design science paradigm. The three cycles are relevance, design and rigour, and the three contexts are the problem environment, the design science research process, and the design science knowledge base.

The relevance cycle connects the environment containing the problem to be solved with the design science research process. The relevance cycle does this in two ways: first, through the elicitation of requirements to guide design and acceptance criteria to guide evaluation; and second, through field testing of design alternatives in the problem context using acceptance criteria. In turn, this cycle generates more requirements and acceptance criteria and the cycle repeats.

The design cycle connects the problem context requirements and grounding design science literature as the necessary inputs to creatively generate designs. It also connects the problem context acceptance criteria and evaluation processes from literature as the inputs to evaluate the generated designs.

The rigour cycle connects the design science research process with the existing design science knowledge base in literature by grounding design and evaluation in literature, and by contributing back new knowledge to be added to the design science knowledge base. This thesis refers to these three design cycles to connect research actions to design science processes.

Hevner’s rigour, relevance, and design cycles capture the connections between knowledge, design and context as processes performed during design science. This literature contextualizes the design science research presented within this thesis and also helped structure my iterative research process.

2.4 Presenting Contributions using Technological Rules, Novelty and a Visual Abstract

In addition to Hevner’s three cycle design science paradigm, van Aken’s concept of technological rules [52] and Storey et al.’s design science visual abstract [51] help guide the presentation of the contributions of this design study.


Van Aken’s concept of technological rules captures general knowledge about the mappings between design problems and proposed design solutions. Technological rules capture how proposed interventions may create desired effects in a given context. Technological rules describe a specific solution for a specific problem, but they also expose more general classes of problem-solution pairings. These general classes of pairings enable the transfer of design knowledge from one specific context into another to solve a similar problem. The visual report contribution is a potential example of an intervention and is discussed later within the thesis.

Storey et al. [51] bring Hevner's and van Aken's work together through the creation of a visual abstract for presenting design science. Storey et al. combine Hevner's three cycles and van Aken's technological rules concepts to create a visual abstract which assists design scientists in presenting and reflecting on design science contributions. The authors additionally signal that novelty is a critical aspect of contributions to design science. Storey et al.'s visual abstract concept has places to capture all of these aspects of design science contributions, and I rely on the visual abstract later in this thesis to present the contributions of this research effectively.

In this section I integrate design efforts with the rigour, relevance and design cycles described by Hevner [18]. I explain the design study processes used to create design proposals for sections of the visual report.

Additional methodologies which I used as part of my design study include the Five Design Sheet methodology described by Roberts [43] in combination with a design space exploration process outlined by Schulz et al. [46]. These were useful when generating designs and employing user feedback gathering processes to evaluate the effectiveness of design candidates with stakeholders.

By adopting formal procedures into an iterative design process, I strengthen the connection to the rigour, relevance and design cycles.

2.5 Design Study Structure

It was vital to follow a structured process while exploring design ideas during design cycles, and I describe these processes here.

The first process I used is requirements analysis, where I take the knowledge I gain about my problem space and consolidate it with the existing requirements and problem knowledge to generate design goals.

The second process is visual design, where I take the design goals and related literature as inputs to a visual design process.

The third process is stakeholder feedback, where I take newly generated design artifacts and assess their effectiveness using design feedback processes.

Together, these three processes form a design iteration.

Emergent requirements gathering and visual-design-fit-to-context are strong motivations to choose the design study methodology as described by design study researchers [47, 55]. By using this methodology, I bring in stakeholder perspectives and responses to design alternatives early and throughout the design process.

I performed multiple design iterations to generate and refine designs without putting vulnerable stakeholder groups at risk. This risk to stakeholders originates from the combination of two design considerations: the inherently unrefined concepts that arise early in design study research, and the safety-critical nature of the information and decision-making surrounding drug checking test results. I performed three design iterations before satisfying the acceptance criteria of my stakeholder groups. The stakeholder groups include people who use the service (clients), chemists performing drug checking (chemical analysts), and social workers providing harm reduction resources (harm-reduction workers).

After the three design iterations were complete, I began implementing and deploying the design as a usable artifact during a deployment iteration. During the deployment iteration I first polished the design after gathering final design feedback. I then moved into a making stage and a deployment stage. The making stage involved choosing technologies and implementing the drug checking report as a software artifact. The deployment stage involved packaging the report and installing it onto the laptops within the service for use.

Following this process led to significant changes to the report's features in intermediate design iterations, to the acceptance criteria and requirements, and to the methodologies used within design iterations.

It has not yet been possible for me to perform a direct user study with people who use drugs due to the existing anonymity and privacy considerations of the partner sites. Instead, I have planned a user study for future work, which collects feedback indirectly from other groups of stakeholders. A full timeline of the research process is shown in Figure 2.2.

This graphical research timeline presents the temporal nature of the research while avoiding an exhaustive literal description. To complement the written description, the reader can get a sense of the design activities executed within the design iteration stages in Table 2.1. Table 2.1 shows the three design iterations broken down into the three stages within design iterations, as well as the deployment iteration and its make and deploy stages. Inside each cell I briefly describe the activities that I performed during that stage.

Figure 2.2: The complete research timeline composed of design and deployment iterations, and five design activity stage types.

Having described the design study methodology that forms the overarching structure of the research within this thesis, I next describe two additional methodologies used within the three design iterations to generate design alternatives.

2.6 Methodologies used Within Design Iterations

I used the five design sheet methodology described by Roberts et al. [43] and a design space exploration process described by Schulz et al. [46] within my design iterations. These formalized design processes help bring knowledge from the design science body of knowledge into the process of designing the report intervention.

2.6.1 Five Design Sheet Methodology

Roberts et al. [43] conceptualize a hand-drawn process of divergent exploration and convergent solution-finding for visual designs using a set of five sheets. They note that unstructured hand-drawn brainstorming efforts help generate early prototype designs. They rethink that messy process as a structured prescriptive process encapsulated and supported by five specially designed sheets. Roberts et al.'s premise is that by constraining the early exploration of hand-drawn visual alternatives within a well thought out process, there should be fewer forgotten good ideas, better feedback gathering, and arrival at better solutions sooner.


Table 2.1: Design iterations and deployment iteration with primary activities.

Design Iteration #1 (January - February 2019)
  Requirements Analysis Stage: Introductory meetings and gathering high-level goals. Semi-structured interviews with stakeholders to collect initial stakeholder requirements.
  Visual Design Stage: Visualization literature search for existing off-the-shelf report builder solutions. Report builder visual design project.
  Stakeholder Feedback Stage: Stakeholder feedback meeting on report builder first concept.

Design Iteration #2 (March - May 2019)
  Requirements Analysis Stage: Analysis of feedback from stakeholder feedback meeting. Consolidated feedback with existing requirements to generate an updated list of requirements. Confirmatory meetings with stakeholders to verify requirements and design direction. Drug checking literature search to find examples of test results data-oriented reports.
  Visual Design Stage: Visualization design literature searches to support design choices. Data-oriented report visual design process and design space identification.
  Stakeholder Feedback Stage: Design feedback survey design and creation. Deployment of design feedback survey to stakeholders. Please reference Appendix B for the complete survey tool.

Design Iteration #3 (June - July 2019)
  Requirements Analysis Stage: Analysis of design feedback survey results. Consolidated feedback with existing requirements to generate an updated list of requirements. Identification of unresolved and/or remaining design decisions.
  Visual Design Stage: Visualization design literature searches to assist in final design decisions. A final set of alternative report designs generated.
  Stakeholder Feedback Stage: Stakeholder feedback meetings to resolve the final design choices. Final design complete.

Deployment Iteration (August - December 2019)
  Make Stage: Selection of technologies to implement a drug checking report. Implementation of design in those technologies. Testing data formatting and data access.
  Deploy Stage: Installation of application onto drug checking service machines completed.


The five sheets they propose are a task definition sheet, three principal design sheets, and a final solution sheet to help the visualization researcher formalize their design exploration process. This methodology suited my specific problem context due to the complexity and sensitivity of the research domain.

The five design sheet methodology was applied primarily during the second and third design iterations for creating visual designs in the visual design stages. These five sheets are available on Roberts et al.'s website^1, and I follow the processes they describe.

I first characterized the tasks the visualizations needed to satisfy on the first sheet. Then, design ideas were explored and generated on the second to fourth sheets. Finally, these designs were filtered down to a subset of solutions on the fifth.

Using this methodology, I generated visualization ideas during the visual design stages of design iterations. Design ideas were digitized and presented to stakeholders during stakeholder feedback stages. The five design sheet methodology helped my stakeholders understand that design concepts were not finalized implementations, even when digitized in presentations to the team.

2.6.2 Design Space Exploration Process

Schulz et al. [46] describe how visual design spaces rely upon the conceptual decomposition of a design whole into its essential parts. These parts and their modifiable attributes form the n dimensions of a design space. The researcher can then explore this n-dimensional design space by making modifications to positions on dimensions of the design space. Thus, every location in a design space represents an n-tuple of design choices for a given design concept.

With a design space outlined, it is possible to select and modify sub-portions of the design to explore the effects of dimensional changes on the design's properties. With modifications to some dimensions made, all of the individual parts are reconstituted into a new whole to generate a complete design alternative. By systematically exploring design spaces, Schulz et al. describe how it is possible to discover non-intuitive design solutions that may never have occurred to visualization researchers.
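
As a rough illustration of this systematic exploration, here is a minimal sketch under assumed dimension names (loosely drawn from the pie and cake chart decomposition used in this thesis; it is not the actual tooling used). It enumerates the cross-product of marks, visual variables, and manipulation levels:

```python
from itertools import product

# Hypothetical decomposition of a proportional chart into marks and modifiable
# visual variables, loosely following the pie/cake chart breakdown in this thesis.
marks = ["segment area", "boundary edge", "100% axis"]
visual_variables = ["width", "texture", "blur", "colour value"]
manipulation_levels = ["low", "medium", "high"]

# Every location in the design space is an n-tuple of design choices.
design_space = list(product(marks, visual_variables, manipulation_levels))

print(f"{len(design_space)} candidate single-variable manipulations, for example:")
for mark, variable, level in design_space[:5]:
    print(f"  apply a {level} {variable} manipulation to the {mark} mark")
```

Compound manipulations, like those combining several individual visual variable changes, would correspond to selecting more than one entry from this same space.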

In this thesis, I formalize a unique design space based on carefully selected charts and the design goals to be accomplished. I decompose the selected charts into their parts, and systematically explore the design space to generate alternative visual representations of test result data in alignment with design goals. I then use these alternatives in subsequent stakeholder feedback stages within the design iterations to generate new requirements and acceptance criteria.

In summary, by following a methodological design cycle, the rigour and relevance of the design study are enhanced.


Chapter 3

Drug Checking Service Context

Many contextual factors impacted the design, process, and outcomes of my research. I present an overview of the most relevant contextual factors in this chapter as part of my efforts to ensure the relevance of the solution to the problem. Much of this context was gathered during relevance cycles and helped define requirements and acceptance criteria for design proposals. I verified this contextual information with the drug checking literature to support the rigour cycle. These efforts are in alignment with Hevner’s [18] relevance and rigour cycles and help generate the ‘thick’ descriptions, which van Aken [52] requires from high-quality design science.

3.1 Stakeholder Types

The context of my study has three primary stakeholders: clients, harm reduction workers, and chemical analysts. Each group has individual and shared requirements and goals, and each is affected by systemic forces and possesses different perspectives.

The visual report design must find a balance of the requirements and goals of these stakeholders. I include a service flow diagram in Figure 3.1 to show the stakeholders; their roles and primary concerns are described below.

I collected this stakeholder information through extensive interactions with service staff over an extended period, and in the case of clients, also from descriptions of client demographics in the related literature.


3.1.1 Clients

Clients are members of the public who wish to have drug samples tested by the drug checking service. They can bring in multiple samples and bring in samples on behalf of others.

Due to privacy and anonymity concerns I was never able to interview or survey client stakeholders (clients) directly. This lack of direct access was an essential factor in subsequent research decisions, results, limitations, and future work. I collected these client stakeholder descriptors from the other two stakeholder types and the literature.

Clients can belong to any demographic. However, some demographics outweigh others in the population of people who utilize the service. These include people who are homeless, people who are regular users of drugs, and people who use consumption sites. As the name suggests, consumption sites are locations where drug users can consume drugs in supervised settings. These facilities were created to reduce harm within the drug user community. These harm reduction goals include reducing needle sharing, which transmits diseases, reducing injuries and deaths due to overdoses, and providing access to related harm reduction resources.

Substance is a drug checking service offered as part of a community harm reduction resource. Substance has partnered with sites in Victoria specifically to target underserved demographics who may not have access to any other drug checking services. These partners include AIDS Vancouver Island^1 and Solid Outreach^2.

From the drug checking literature, Liang et al. [31] note that during the two decades that drug checking services have existed, users have typically been considered 'party drug' users. The authors note that in recent years, drug checking has extended into populations with a higher ratio of females, higher socio-economic status, and also marginalized populations. As drug checking services begin to serve marginalized populations, such as injection drug users, new challenges in offering effective services are likely to emerge. An example is described in Karamouzian et al.'s work [22]; after receiving positive fentanyl test results at safe injection facilities in Vancouver, BC, clients did not discard their drugs but reduced dosages instead. Van Aken notes that dynamic human behaviour like this is a typical challenge of socio-technical design science problem contexts, and this is a great example of where human agency must be accounted for.

Though the service is open to any members of the public, many clients come from the homeless demographic and do not have regular exposure to primary visualization usage or chart reading situations. Clients also may not have access to the internet, may not have followed normative education paths, or may not have much interest in learning how the service generates drug checking data.

^1 http://avi.org/our-services/victoria
^2 https://solidvictoria.org/outreach/

The clients who access Substance are concerned with maintaining their anonymity and privacy. Stigma and criminalization motivate client concerns surrounding possession and use. Non-client stakeholders indicated that clients must be protected from further stigmatization and criminalization when they access Substance. Therefore, the service must remain neutral in the value judgments that it is perceived to make through its messaging. For example, Substance should never directly or indirectly say that using drugs is wrong or judge clients for the drug use decisions they make.

Another client concern is the speed of service. According to the research partners, a large portion of the clients will leave the service if the results take longer than 30 minutes. This time limit is borne out in the drug checking literature as well. In a global review of drug checking services, Barratt et al. [3] collected data on service turnaround times and drug checking service format. Formats include mail-in, fixed-site, and on-site drug checking services. Turnaround times are typically less than 30 minutes, with only a few taking more than an hour for the on-site drug checking services surveyed.

3.1.2 Harm Reduction Workers

Harm reduction workers are immersed in the client context and are the first service staff clients encounter within the drug checking service. Harm reduction workers are service staff and researchers who have backgrounds in social work with vulnerable populations. They may have been formally educated in social work, or have gained their experience within social work settings. These experiences give them a unique and intimate perspective on the challenges that clients face, and they are primarily concerned with the welfare of clients.

Harm reduction workers are responsible for running demographic surveys, collecting samples from clients, and engaging the client with harm reduction resources during the chemical analysis. Harm reduction workers also participate in helping disseminate drug checking test results to the client. Dissemination occurs during the harm reduction conversation concerning the drug checking test results. In this conversation, harm reduction workers help translate the results into useful harm reduction actions the client could take. The harm reduction conversation or intervention is a common feature of drug checking services that return results to clients [4].

Within Substance, harm reduction workers face the challenge of delivering a critical service to a disenfranchised and extremely vulnerable population. Stigmatization, criminalization, and general societal segregation put the drug-using demographic at a disadvantage when it comes to accessing critical services. For people who use drugs, drug checking services can be their only way of gaining safety-critical information about the drugs they consume.

Harm reduction workers are also very interested in highlighting uncertainty within the drug checking test results for clients. According to the harm reduction workers, clients interpret test results originating from an official organization like Substance as being factual and accurate.

Harm reduction workers want clients to understand that the results contain uncertainty and want the report to represent that uncertainty. Beyond this, harm reduction workers indicate that the test results report must present data intuitively and ethically.

Additionally, having a test result report is useful in helping harm reduction workers prevent accidental overdose. A report would provide drug content information, qualitative interpretations, and depictions of uncertainty in the test results, and do so in a consistent and reproducible format. A report could also provide links to drug safety resources and service information.

Having a report also enables tracking improvements in messaging outcomes, as measured by changes in drug-usage behaviour in response to adulterant-positive test results. Changing drug use decisions in response to adulterant-positive test results is discussed in Kennedy et al.'s [24] work on the willingness to use drug checking services in supervised injection sites. The authors note the mixed results in changes to behaviour in response to test results, even when test results indicate potentially harmful drug contents. Being able to track such changes against a test result artifact that conveys test results could improve the outcomes of drug checking services and drug use decision making by clients.

As shown in Figure 3.1, once the harm reduction worker has collected and documented each drug sample from the client, the samples are passed on for chemical analysis.


3.1.3 Chemical Analysts

Chemical analyst stakeholders possess the training to operate and understand the specific chemical analysis systems used within the service. As shown in Figure 3.1, chemical analyst stakeholders are responsible for running chemical analyses on the drug samples to produce test results. Beyond understanding how to perform each chemical analysis, chemical analysts must have an understanding of the theoretical effectiveness of each chemical analysis process and be able to interpret the results for client stakeholders.

Chemical analysts are primarily concerned with operating their systems effectively and producing actionable and accurate test results. The chemical analysts on Substance have years of drug checking experience between them and have delivered hundreds to thousands of test results.

Chemical analysts face the challenge of interpreting, on behalf of clients, test results produced by imperfect chemical analysis systems. The chemical analysis processes used do not always find the same drug components or produce consistent results. Factors that impact test results include the accuracy of system measurements, the sensitivity of analysis processes to procedural mistakes, and the pairing of analysis techniques to drug sample types. Chemical analysts also interpret and resolve discrepancies in test results.

Once test results are summarized, chemical analysts present them in the harm reduction conversation with the harm reduction worker and client. They also describe their qualitative interpretations and answer any questions that arise from the test result data. To reduce these challenges, the chemical analyst stakeholders specifically requested a visual test result report, as it would allow them to present complex test result information in repeatable and straightforward ways.

There are also actions that service stakeholders perform outside their chemical analyst and harm reduction worker roles. Service stakeholders may use their experience in drug checking to help clients understand the harms that different drugs may have or describe harm-reducing actions that could be taken, even when those actions are outside their primary role.

The above stakeholder data describe dominant perspectives, qualitative acceptance criteria, and design requirements in the Substance context. These stakeholder descriptions are a critical part of Hevner's relevance and rigour cycles. I increase the relevance of designed solutions to the problem context and increase the rigour of this research by collecting these descriptors over time and from distinct sources.

3.2 Drug Testing Systems and Test Result Formats

In this section, I present the drug testing technologies and test result formats and describe sources of uncertainty within the test results.

Drug checking test results fall into three data categories: component composition, percent composition, and qualitative interpretation.

A chemical composition breakdown of a drug sample generates a component composition, which is similar to the list of ingredients in a recipe without the quantity of each ingredient included. Mobile drug checking services have been providing crude component composition test results for decades through reagent testing. Reagent testing mixes reagents with samples to produce indicator colours and is something a drug checking service can do anywhere on a small budget. Component composition test results provide critical information that people who use drugs need to make decisions about recreational drug use. For example, if cocaine or MDMA (up, uppers) tests positive for unwanted components, a client can discard the sample and buy from someone different without much cost to themselves.

However, for those who use heroin or methamphetamine (down, downers), the stakes are higher. A dominant portion of opioid samples tests positive for the presence of fentanyl. Fentanyl is added to, or fully replaces, down in samples because fentanyl greatly enhances the desired narcotic effects of those drugs. Down enhanced with fentanyl is becoming cheaper and commonplace, but fentanyl has also begun to spread throughout drug supplies. According to my stakeholders, simply identifying component composition is now insufficient for safe drug use decisions. Instead, identifying and quantifying the amount of fentanyl and other chemical components as percents of composition in drug samples has become critical in making safe drug use decisions for opioids.

Most onsite drug checking services do not offer percent composition test results [3]. Until recently, the technology necessary to produce percent composition test results was physically large, expensive, and complex to operate. Miniaturized technology capable of producing percent composition results for onsite drug checking services has only become available in the past couple of years. This technological advancement represents a considerable step forward for drug checking operations. However, even with percent compositions generated, it is crucial for chemical analysts to qualitatively interpret test results and present a summary for clients of the service.

3.2.1 Component and Percent Composition Test Results

Substance utilizes five drug checking systems. Each system uses a different method for determining drug composition, with test results produced by these methods falling into two categories, as seen in Table 3.1. The general test result data formats abstracted out of the system outputs are component composition results and percent composition results.

Chemical analysis method                          Test result category
Fentanyl test strips                              Component Composition
Gas Chromatography-Mass Spectrometry (GC-MS)      Component Composition
Raman Spectroscopy (Raman)                        Component Composition
Surface-Enhanced Raman Spectroscopy (SERS)        Component Composition
Fourier-Transform Infrared Spectroscopy (FTIR)    Percent Composition

Table 3.1: Chemical analysis methods and the type of drug checking test result each produces.

Component Composition

Machines that output component composition lists indicate the “what” of composition. Each machine outputs a list of ingredients and also includes a hit-score for each ingredient. A hit-score is a numerical or symbolic representation of the confidence of identification for each component. Fentanyl test strips test only for a single component and are either positive or negative, with almost no limit to sensitivity. Table 3.2 shows examples of the output data for each system.

Percent Composition

Percent composition machines decompose the ratios of ingredients within a drug sample. They answer the “how much” for each component they identify in drug samples. Percent composition results are lists of components with a percentage of the sample’s composition attributed to each component in the list. Only FTIR is capable of producing these results at this time. Table 3.3 shows an example FTIR output for a drug sample.


Machine               Single Component Representation    Hit-Score
Fentanyl Test Strip   Fentanyl                           positive/negative
GC-MS                 Component name                     0-1000/1000
Raman                 Component name                     0-100/100
SERS                  Component name                     green/orange/red
FTIR                  Component name                     0-1000/1000

Table 3.2: A depiction of components and their representative hit-scores. Hit-scores go from low to high confidence; ratios of 1 represent a complete identification. The SERS scale is qualitative and based on an unexposed internal set of thresholds; chemical analysts only see the colours.
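Building on the hit-score scales in Table 3.2, the following minimal Python sketch shows one way component composition results from different systems could be captured in a single record and mapped onto a rough common confidence scale for a report. The class, field names, and the SERS placeholder numbers are invented for illustration; they are assumptions, not Substance's actual data model or thresholds.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ComponentHit:
    system: str                   # e.g. "GC-MS", "Raman", "SERS", "Fentanyl strip"
    component: str                # e.g. "Fentanyl"
    raw_score: str                # the hit-score exactly as the machine reports it
    confidence: Optional[float]   # rough 0-1 confidence, or None if unknown

def normalize(system: str, raw_score: str) -> Optional[float]:
    """Map each system's native hit-score scale onto an approximate 0-1 confidence."""
    if system in ("GC-MS", "FTIR"):
        return float(raw_score) / 1000.0                     # 0-1000 scale
    if system == "Raman":
        return float(raw_score) / 100.0                      # 0-100 scale
    if system == "SERS":
        # Qualitative colour scale; the true thresholds are not exposed,
        # so these numbers are placeholders only.
        return {"green": 0.9, "orange": 0.5, "red": 0.1}.get(raw_score)
    if system == "Fentanyl strip":
        return 1.0 if raw_score == "positive" else 0.0
    return None

hit = ComponentHit("Raman", "Fentanyl", "87", normalize("Raman", "87"))
print(hit)   # ComponentHit(system='Raman', component='Fentanyl', raw_score='87', confidence=0.87)

Keeping the raw score alongside the normalized confidence preserves each machine's native output for chemical analysts while giving the report a single scale to visualize.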

FTIR Component List    Percentage
Heroin                 45%
Caffeine               27%
Sugar (mannitol)       20%
Fentanyl               8%

Table 3.3: An example of an FTIR test output, showing a list of components and the percent of the sample's composition each comprises.

If clients know the ratios of components that make up a drug sample, they can estimate the strength of the sample and its potential effect on them, and make harm-reducing choices that are not possible with component composition information alone. Because FTIR results also include a version of component composition information, percent compositions are the most valuable single test result within the service.
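As a small illustration of how a percent composition result such as Table 3.3 could be represented and sanity-checked before it reaches a report, the sketch below sums the reported percentages and flags components near the instrument's sensitivity limit. The function name, the caveat wording, and the second example result are hypothetical; the roughly 2% figure corresponds to the FTIR limit of sensitivity discussed in Section 3.3.

FTIR_SENSITIVITY_LIMIT = 2.0  # percent; see the limit of sensitivity discussion in Section 3.3

def check_percent_composition(result):
    """Return human-readable caveats to attach to a percent composition report."""
    caveats = []
    unidentified = 100.0 - sum(result.values())
    if unidentified > 0:
        # The remainder could be noise or components the instrument cannot resolve.
        caveats.append(f"{unidentified:.1f}% of the sample is unidentified.")
    for name, pct in result.items():
        if pct <= FTIR_SENSITIVITY_LIMIT:
            caveats.append(f"{name} is at or below the detection limit; its percentage is unreliable.")
    return caveats

# The Table 3.3 example sums to 100% with no component near the limit, so no caveats:
print(check_percent_composition({"Heroin": 45, "Caffeine": 27, "Sugar (mannitol)": 20, "Fentanyl": 8}))
# A hypothetical result with trace fentanyl and an unidentified remainder:
print(check_percent_composition({"Heroin": 62.0, "Caffeine": 30.5, "Fentanyl": 1.5}))

For the first call the function returns an empty list; for the second it returns caveats about the 6.0% unidentified remainder and the unreliable fentanyl percentage, which are exactly the kinds of uncertainty a report would need to surface.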

3.3 Uncertainty in the Drug Checking Service

Substance faces several uncertainties in delivering its services. Some uncertainties pertain to the systems used to generate test results, and others arise from the nature of offering a drug checking service to members of the public. Test result data contains some of the following uncertainties, whose definitions I collected and verified with chemical analyst stakeholders during early design meetings.

Drug checking system uncertainties are:

• Error: Every system has a degree of error in its results. This type of error is usually described mathematically for each system and presented as bands of error in test results. None of the drug checking systems have error bands, but all systems have indications of confidence of identification, as shown in Table 3.2.

• Identifiable components: The systems differ in the sets of components that they can identify. For example, GC-MS can never detect sugars because they combust during the analysis process.

• Limit of sensitivity: All systems are limited to identifying components above a minimum concentration threshold. For example, FTIR cannot accurately produce results for components below 2% concentration.

• Drug homogeneity: Drug samples can be poorly mixed, resulting in an inhomogeneous mixture; different parts of such samples will produce different analysis results. Additionally, the drug checking systems used by Substance are intentionally non-destructive.

• Drug analysis processes: Each chemical analysis system has a standard operating procedure that is followed to generate test results. These standard operating procedures vary in how many decisions the chemical analyst must make that determine the chemical profile produced.

• Inter-system discrepancies: Because each system possesses a different uncertainty profile, there are always inter-system discrepancies. One technique may produce a component composition list that is missing components from another technique's result.

The FTIR technique, which produces percent composition test results, requires the most decisions from chemical analysts because it uses a manual subtractive signal analysis process. In this process, the machine scans the sample, which produces a total sample signal. The system then tries to match pure component signals from a library to portions of the sample signal based on fitting the signal lines together. Once the chemical analysts find the best fit for a component in the library, they subtract that signal out of the sample signal, leaving behind a residual signal. Chemical analysts continue this process of fitting and subtracting pure components out of the sample signal until the residual signal is just noise. The order in which signals are subtracted is critical and can produce different sets of components being matched and reported as drug sample contents. This non-determinism, arising from the analysis process itself, can have a dramatic effect on the components identified within a drug sample.
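To make this order sensitivity concrete, the following toy Python sketch mimics the fit-and-subtract loop with invented Gaussian "spectra". It is a deliberately simplified illustration, not the FTIR vendor software: the library shapes, mixing weights, and reporting threshold are all assumptions, chosen so that the overlap between the heroin and fentanyl signals matters.

import numpy as np

# Invented reference "spectra" for pure components (not real FTIR library entries).
x = np.linspace(0, 10, 400)
LIBRARY = {
    "heroin":   np.exp(-(x - 3.0) ** 2),
    "fentanyl": np.exp(-(x - 3.4) ** 2),   # strongly overlaps the heroin peak
    "caffeine": np.exp(-(x - 6.0) ** 2),
}

def subtract_in_order(sample, order, report_threshold=0.05):
    """Fit and subtract pure component signals in the analyst-chosen order.

    A component is reported only if its fitted contribution exceeds the
    threshold; whatever remains after all subtractions is treated as noise.
    """
    residual = sample.copy()
    reported = []
    for name in order:
        ref = LIBRARY[name]
        # Least-squares scale of the pure signal against the current residual.
        scale = max(float(residual @ ref) / float(ref @ ref), 0.0)
        if scale >= report_threshold:
            reported.append(name)
            residual = residual - scale * ref   # subtract the fitted signal
    return reported

# A mixed sample: mostly heroin, a little fentanyl, some caffeine.
sample = 1.0 * LIBRARY["heroin"] + 0.1 * LIBRARY["fentanyl"] + 0.3 * LIBRARY["caffeine"]

print(subtract_in_order(sample, ["heroin", "fentanyl", "caffeine"]))
# -> ['heroin', 'caffeine']: subtracting heroin first absorbs the overlapping
#    fentanyl signal, which then falls below the reporting threshold.
print(subtract_in_order(sample, ["fentanyl", "heroin", "caffeine"]))
# -> ['fentanyl', 'heroin', 'caffeine']

With these made-up numbers, the two subtraction orders report different component sets, which is the same kind of process-driven non-determinism that the chemical analysts described.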

There are uncertainties related to the nature of offering any drug checking service. These uncertainties are due to service performance factors or practical reasons.

Service uncertainties are:

• Multi-sample clients: Clients regularly bring multiple samples into the service. The pairing between physical samples and test results can be mixed up in the current handwritten sticky-note and verbal method of results delivery. This delivery process also reduces Substance's ability to track its impact.

• For-a-friend clients: The service also serves clients who bring in drug samples on behalf of others. These clients convey test results to the third parties outside the service. Confusion in pairing physical samples with test results is compounded by the delayed, out-of-service recounting of harm reduction advice to third parties.

• Client psychological state: Clients may enter the service with incapacitated faculties, which can make it difficult to impart harm reduction information effectively.

• Service staff knowledge and experience: Service staff in both the chemical analysis and harm reduction roles have a wide variety of experience. Therefore, clients may have a considerably different harm reduction conversation depending on the service staff present.

• Client stay duration: Clients would like to receive their drug checking results as quickly as possible. A turnaround time of 30 minutes from when a client enters the service to when they leave with results for all drug samples is the maximum acceptable time frame. Beyond this, clients may leave the service with a subset of test results or no results at all.

• Public-facing service: The drug checking service aims to serve any member of the public. The service must be designed to accommodate the full spectrum of experience with visualizations and chemical analysis data. Keeping report designs intuitive and informative is critical.


The needs of the stakeholders, the degree of uncertainty in test results, and the challenges of offering an onsite drug checking service all motivate this design study to generate an effective visual report. The desired outcome is a visual report that mitigates the negative effects of some of these uncertainties and facilitates some of the positive outcomes the service hopes to create. I searched the literature for existing report solutions suitable for Substance's context and describe them next.

3.4 Existing Test Result Delivery Methods

The drug checking body of literature is extensive and detailed. Nevertheless, I could find only one report that mentions results delivery mechanisms directly: a comprehensive global review of drug checking services conducted in 2017 by the National Drug and Alcohol Research Centre in Sydney, Australia [3, 4]. In this report and its accompanying bulletin, the authors summarize the service architectures of drug checking services around the world, including their results delivery methods. Results communication occurs in different ways to different stakeholders across the 31 services surveyed in Barratt's article [4]. Stakeholders in this report include law-enforcement bodies, medical organizations, NGOs, and clients. Of particular interest in this survey are descriptions of how drug checking services deliver results to clients directly. Table 3.4 presents the methods from services that do so.

Communication Method      Number of services
App                       1
Text message              2
Aggregate report          4
Website (with code)       4
Website (public)          6
Email                     10
Phone call                11

Table 3.4: In-person communication of results directly to clients across 31 services. Note that some services deliver results in multiple ways [4].

I visited each website in the global review survey. Public websites with open access databases had reports accessible without requesting examples from the services. I reviewed these websites to see how reports presented results.
