University of Groningen
Does it work?
Van't Hul, Manon; van der Sluis, Ielka
Document Version
Final author's version (accepted by publisher, after peer review)
Publication date: 2017
Citation for published version (APA):
Van't Hul, M., & van der Sluis, I. (2017). Does it work? Applying formative evaluation methods to test the usability of the PAT Workbench. Poster session presented at TABU Dag, Groningen, Netherlands.
Research question
How do different usability evaluation methods, focused on experts and users, contribute to the evaluation of a system during an iterative design process?
The PAT Workbench was used as a case to test usability methods. This tool was developed at the Department of Communication and Information Sciences of the University of Groningen to store, annotate, retrieve and view multimodal instructions (MIs). MIs are instructions that consist of text and pictures. The goal of the PAT Workbench is to create a corpus of annotated MIs for further research.
The framework used for this process was the design research cycle (Hevner, 2007).
Design research cycle
Expert review
• Functional analysis
• Heuristic inspection
• Cognitive walkthrough w.r.t. PAT’s top tasks (upload MI, search MI and annotate MI)
User evaluation
Two tests with real users, students enrolled in the master course on multimodal instructions in Communication and Information Sciences.
• Test 1 at beginning of course, with 9 users
• Test 2 after seven weeks, with 4 users
• Three top tasks while thinking aloud
• A questionnaire after each task
• Two general usability questionnaires
• Interview with the participants
Expert review: Heuristic inspection
This inspection was based on the 10 heuristic principles by Nielsen (1995). Results showed that, given the three top tasks, the two most severe usability problems were ‘visibility of system status’ (e.g. lack of feedback to the user; available information and functions were not clear) and ‘consistency and web standards’ (e.g. clickable links styled as text; the site did not behave as expected).
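Findings like these are typically tallied per heuristic before ranking them by severity. A minimal sketch of such a tally, assuming Nielsen's 0-4 severity scale; the findings listed are hypothetical, not the poster's actual data:

```python
from collections import defaultdict

# Each finding is a (heuristic, severity) pair, severity on Nielsen's 0-4 scale.
# Hypothetical findings for illustration only.
findings = [
    ("visibility of system status", 4),
    ("visibility of system status", 3),
    ("consistency and standards", 3),
    ("consistency and standards", 2),
    ("aesthetic and minimalist design", 1),
]

# Aggregate issue count and total severity per heuristic.
totals = defaultdict(lambda: {"count": 0, "severity": 0})
for heuristic, severity in findings:
    totals[heuristic]["count"] += 1
    totals[heuristic]["severity"] += severity

# Rank heuristics by total severity, as plotted in Figure 1.
ranked = sorted(totals.items(), key=lambda kv: kv[1]["severity"], reverse=True)
```

Ranking by total severity rather than issue count keeps one severe problem from being outweighed by many cosmetic ones.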
An iterative design process allows for the use of various evaluation methods, which contribute to the evaluation of a system in different ways:
Expert evaluation
• is cheap,
• offers a detailed system description,
• helps to overcome obvious issues in a more expensive user evaluation.
User evaluation
• displays multiple aspects of the system, which an expert may overlook,
• provides useful insights into time-based efficiency, error count and task completion,
• benefits from Think Aloud Protocols (although with concurrent TAP participants continuously need reminders to verbalise their thoughts).
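Time-based efficiency can be computed in several ways; one common formulation (assumed here, and consistent with the goals/sec. unit used in the results) averages goal completion divided by time spent over all participant-task attempts. The data below is hypothetical:

```python
# Sketch: time-based efficiency (TBE) for one task, in goals per second.
# Assumed formulation: average over attempts of (completed ? 1 : 0) / seconds.

def time_based_efficiency(observations):
    """observations: list of (completed: bool, seconds: float) pairs."""
    if not observations:
        raise ValueError("no observations")
    return sum((1 if done else 0) / seconds
               for done, seconds in observations) / len(observations)

# Hypothetical attempts: three participants succeeded, one failed.
obs = [(True, 30.0), (True, 45.0), (False, 60.0), (True, 25.0)]
tbe = time_based_efficiency(obs)
```

Failed attempts contribute zero goals but still count in the denominator, so slow failures pull the efficiency down.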
In longitudinal studies, added value in iterative tests may be gained from fresh participants in addition to the original ones.
Between tests, it is advised not to let participants use a beta version of the product for their own work.
In both types of evaluation, good communication and collaboration between developer and tester are crucial.
Does it work?
Applying formative evaluation methods to test the usability of the PAT Workbench
Faculty of Arts Department of Communication & Information Sciences
User evaluation
Results of the first test show that participants were able to upload and search MIs. However, participants experienced problems during Task 3 (annotate an MI) with finding the annotation page (3a), as well as with saving and viewing their annotations (3b).
Manon van ‘t Hul (h.c.m.van.t.hul@student.rug.nl)
Ielka van der Sluis (i.f.van.der.sluis@rug.nl)
Results
Expert review: Cognitive walkthrough
The cognitive walkthrough did not pose as many problems as the heuristic inspection. The top tasks were easy for the user to accomplish. The main problem was a lack of feedback, or poor visibility of feedback and guidance for the user.
During Test 2, participants encountered only a few problems while executing the tasks. However, overall satisfaction had decreased. From the interviews it became apparent that working with a system while it was being developed influenced the participants’ workload and quality of work.
References
Hevner, A. R. (2007). A three cycle view of design science research. Scandinavian Journal of Information Systems, 19(2), 4.
Nielsen, J. (1995, January 1). 10 Usability Heuristics for User Interface Design. Retrieved from https://www.nngroup.com/articles/ten-usability-heuristics/
Figure 1: Key usability problems from the heuristic inspection (number of issues and total severity score per heuristic, including visibility of system status, user control and freedom, and aesthetic and minimalist design).
Figure 2: Problems during tasks from the cognitive walkthrough.
Figure 3: Share of errors in Test 1 (Task 1: Upload MI 32%; Task 2: Search MI 21%; Task 3: Annotate MI 47%).
Figure 4: Time-based efficiency in Test 1 (TBE in goals/sec. for Task 1: Upload MI, Task 2: Search MI, Task 3a: Find annotation page, Task 3b: Save annotation; bar values 0.33, 0.73, 3.57, 1.95).
Figure 5: Satisfaction from Test 1 to Test 2 (SUS: 70 to 58.8; UMUX: 70.4 to 56.3).
Figure 6: Reactions from participants during the Test 2 interview, e.g. “A big problem was that the system did not work properly”, “Too many updates and waiting for functionalities”, “I still had to do the annotation in Excel, this was double work”.
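The SUS values reported in Figure 5 follow the standard System Usability Scale scoring procedure: odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is scaled to 0-100. A sketch, with hypothetical participant responses:

```python
def sus_score(responses):
    """Standard SUS score (0-100) from ten Likert responses (1-5).
    Odd items are positively worded, even items negatively worded."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale 0-40 raw sum to 0-100

# Hypothetical participant: all-neutral answers yield the midpoint.
print(sus_score([3] * 10))  # 50.0
```

Per-participant scores are then averaged to produce a single SUS value per test round.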
Task 1: Upload MI
Step 7: Fill out metadata on form.
• Will the user know what to do? No, unclarities about compulsory fields and input.
• Does the user know he did the right thing? No, there is no feedback when filling in the fields.
Step 8: Click on button ‘Save’.
• Will the user know what to do? Yes, clearly visible button.
• Does the user know he did the right thing? No, successfulness of the action is not indicated.

Task 2: Search MI
Step 6: Click on button ‘Search’.
• Will the user know what to do? Yes, there is a clearly visible button.
• Does the user know he did the right thing? No, successfulness of the action is unclear.
Step 7: Click on tab ‘Result’.
• Will the user know what to do? No, the results tab does not stand out enough.
• Does the user know he did the right thing? Yes, results opened in a new page.
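Walkthrough judgements like the ones above can be recorded as simple structured data, so that steps failing either question are flagged automatically. A sketch using the task and step names from the walkthrough (the field names are illustrative):

```python
# Each step records the answers to the two walkthrough questions:
# "Will the user know what to do?" and "Does the user know he did the right thing?"
steps = [
    {"task": "Upload MI", "step": "Fill out metadata on form",
     "knows_what_to_do": False, "knows_it_worked": False},
    {"task": "Upload MI", "step": "Click on button 'Save'",
     "knows_what_to_do": True, "knows_it_worked": False},
    {"task": "Search MI", "step": "Click on button 'Search'",
     "knows_what_to_do": True, "knows_it_worked": False},
    {"task": "Search MI", "step": "Click on tab 'Result'",
     "knows_what_to_do": False, "knows_it_worked": True},
]

# A step is a usability problem if either walkthrough question fails.
problems = [s for s in steps
            if not (s["knows_what_to_do"] and s["knows_it_worked"])]
```

Here every recorded step fails at least one question, which matches the walkthrough's main finding: a pervasive lack of feedback.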