International Journal of Bioelectromagnetism www.ijbem.org Vol. 13, No. 3, pp. 157 - 158, 2011

User Experience Evaluation in BCI: Bridge the Gap

Bram van de Laar, Femke Nijboer, Hayrettin Gürkök, Danny Plass-Oude Bos, Anton Nijholt

Human Media Interaction, Faculty of EEMCS, University of Twente, Enschede, The Netherlands

Correspondence: B.L.A. van de Laar, Human Media Interaction, Faculty of EEMCS, University of Twente, P.O. Box 217, 7500 AE, Enschede, The Netherlands. E-mail: b.l.a.vandelaar@utwente.nl, phone: +31 53 4894654, fax: +31 53 4893503.

Abstract. While there is a gap between user-centered human-computer interaction (HCI) research and the more technology-driven brain-computer interface (BCI) research, there are numerous possibilities and advantages for the two fields to help each other. Methods to evaluate the user experience of BCI systems include: 1) involvement of users in the design process, 2) administration of standardized questionnaires, 3) assessment of cognitive and physiological state.

Keywords: Brain-computer interfaces, user experience evaluation, user-centered design, questionnaires, human-computer interaction

1. Introduction

In user-centered design, proper user experience evaluation is one of the most important topics for improving a system. However, most Brain-Computer Interface (BCI) research systems are evaluated with speed and accuracy measures only. If we want to measure the real usability of a system, the speed and accuracy of the interface are only two of many relevant components. Measuring user experience on different dimensions and improving BCIs accordingly could boost user acceptance and enjoyment as well as BCI task performance [Plass-Oude Bos et al. 2010]. Classical user experience evaluation of a system is often done by administering standardized questionnaires, conducting qualitative in-depth interviews with users, or observing users' overt behaviour. More recently, (neuro)physiological sensor input has also been considered valuable for the evaluation of user experience, especially because it makes it possible to measure a user's covert behaviour [Mandryk et al. 2006], [Gürkök et al. 2010]. Various evaluation techniques from the field of Human-Computer Interaction are well suited to assess what the user is experiencing when using a BCI.

2. Case studies

This section elaborates on two case studies we conducted that exemplify the need for user experience evaluation.

The first is a BCI game called BrainBasher, which uses the ERD/ERS of imagined and actual movement as described in [Van de Laar et al. 2010]. Users are provided with direct, continuous feedback on the classifier's confidence levels for whether left, right or no movement is detected. The game was evaluated with a questionnaire that compared the user experience in the imagined and actual movement conditions. Imagined movement was perceived by users as more challenging, although also as mentally more demanding and more tiring. While users preferred imagined movement for short periods, they would prefer actual movement for prolonged periods of playing this game. Users considered the feedback valuable because it allowed them to experiment with their strategy.
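As a purely illustrative sketch (not taken from [Van de Laar et al. 2010]), continuous feedback of this kind can be thought of as a simple mapping from the classifier's per-class confidence values to a position on a feedback display; the class labels and the mapping below are assumptions, shown in Python:

    # Hypothetical sketch: map three-class confidence values for a
    # movement BCI to a continuous feedback position in [-1, 1].
    # -1 = full left, +1 = full right, 0 = no movement detected.
    def feedback_position(confidences):
        """confidences: dict with probabilities for 'left', 'right'
        and 'rest', assumed to sum to 1."""
        return confidences['right'] - confidences['left']

    # Example: the classifier is 70% confident of right-hand movement.
    print(feedback_position({'left': 0.1, 'right': 0.7, 'rest': 0.2}))  # ~0.6, leans right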

The second study, which examined user experience and user-centered design, is IntuiWoW [Plass-Oude Bos et al. 2011], which is based on the popular game World of Warcraft. Users were involved in the design process to choose which mental tasks should be used for certain actions in the game. In the subsequent evaluation of the prototype that incorporated those actions, users found the recognition accuracy of the mental tasks to be most important. In addition, user experience improved with ease of use, fun, intuitiveness and suitability. Thus, focusing solely on recognition accuracy is not sufficient to provide an optimal user experience.


3. Recommendations

Structured, standardized questionnaires can provide valuable quantitative information about an application. One possible cause for the lack of structured questionnaires for BCI applications is that the way of interacting with such an application is inherently different from classical human-computer interaction. It is also difficult to compare different BCIs because they use different mental tasks and require the user to employ them in different ways. To compare BCIs and to pinpoint on which dimensions user experience can be improved, standardized BCI-specific questionnaires are required. Current user experience questionnaires such as the Game Experience Questionnaire [IJsselsteijn et al. 2009] and the Game Engagement Questionnaire [Brockmyer et al. 2009] are not sufficient, because they assume that only a traditional method of input (e.g. keyboard and mouse) is used.

As the case studies described in the previous section demonstrated, recognition accuracy, ease of use and applicability of the mental tasks used play an important role in the user experience. Because different mental tasks provide the user with different experiences, it might prove difficult to make every item in a questionnaire relevant. For example, evaluating how the user perceives a flickering stimulus can be very valuable for an SSVEP-based BCI, but is not applicable to an ERD/ERS BCI. However, [Zander et al. 2010] categorize the mental tasks used for BCIs into three groups: passive, active and reactive. Within these groups, mental tasks should, at least for the sake of user experience evaluation, be largely comparable.

Therefore, we propose to develop a questionnaire with modules for each category. Within each category, there can be specific questions about the way the user interacts with the system; a possible structure is sketched after the examples below.

For example, for passive BCIs, questionnaire items can ask the user whether the BCI hardware is comfortable and whether it distracts from the main task at hand.

For active BCIs, items on the applicability of the mental tasks and the perceived speed with which the BCI responds to the user's actions can give valuable information. The time needed to train the system and the ability to retain that trained model over time are also important for the user experience. Performing an active mental task requires a certain amount of concentration; over time this fatigues the user, and light headaches may develop. This, too, is clearly important to the user experience.

When developing and evaluating reactive BCIs, we are more interested in the obtrusiveness of the stimuli that are used. In the case of the aforementioned SSVEP BCI, the flickering of the stimulus is needed to make the BCI work, but variations in size, colour and texture can make a big difference in how the user perceives the obtrusiveness of the stimulus.
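As a minimal sketch of what such a modular questionnaire could look like in software, the Python structure below groups items by the passive/active/reactive categorization of [Zander et al. 2010]; the example items are hypothetical illustrations of the dimensions discussed above, not validated questionnaire items.

    # Hypothetical sketch of a modular BCI user experience questionnaire.
    # Module names follow [Zander et al. 2010]; items are illustrative only.
    QUESTIONNAIRE_MODULES = {
        "passive": [
            "The BCI hardware was comfortable to wear.",
            "The BCI did not distract me from my main task.",
        ],
        "active": [
            "The mental task fit the action I wanted to perform.",
            "The system responded quickly enough to my mental task.",
            "Training the system took an acceptable amount of time.",
            "Performing the mental task was tiring.",
        ],
        "reactive": [
            "The stimuli (e.g. flickering targets) were obtrusive.",
            "The size, colour and texture of the stimuli were pleasant.",
        ],
    }

    def items_for(category):
        """Return the questionnaire module for a given BCI category."""
        return QUESTIONNAIRE_MODULES[category]

    # Example: select the module for an SSVEP-based (reactive) BCI.
    for item in items_for("reactive"):
        print(item)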

To take BCI research a step further and to bridge the gap between the technology and the user, we need to develop and incorporate standardized measures inspired by HCI. While (neuro)physiological measures are still in early development, standardized questionnaires can already provide valuable information.

Acknowledgements

The authors acknowledge the support of the BrainGain Smart Mix Programme. This work was partially supported by the ITEA2 Metaverse1 (www.metaverse1.org) Project.

References

Brockmyer JH, Fox CM, Curtiss KA, McBroom E, Burkhart KM, Pidruzny JN. The development of the Game Engagement Questionnaire: A measure of engagement in video game-playing. J. Exp. Soc. Psychol., 45(4): 624-634, 2009.

Gürkök H, Plass-Oude Bos D, Van de Laar BLA, Nijboer F, Nijholt A. User Experience Evaluation in BCI: Filling the Gap. IJBEM, 2010. (to appear)

IJsselsteijn WA, de Kort YAW, Poels K. The Game Experience Questionnaire: Development of a self-report measure to assess the psychological impact of digital games. Manuscript in preparation.

Mandryk RL, Inkpen KM, Calvert TW. Using psychophysiological techniques to measure user experience with entertainment technologies. Behav. Inform. Technol., 25(2): 141-158, 2006.

Plass-Oude Bos D, Gürkök H, Van de Laar BLA, Nijboer F, Nijholt A. User Experience Evaluation in BCI: Mind the Gap! IJBEM, 2010. (to appear)

Plass-Oude Bos D, Poel M, Nijholt A. A Study in User-Centered Design and Evaluation of Mental Tasks for BCI. In Proceedings of the 17th Int. Conf. on MultiMedia Modeling, 2011. (accepted)

Van de Laar BLA, Reuderink B, Plass-Oude Bos D, Heylen DKJ. Evaluating User Experience of Actual and Imagined Movement in BCI Gaming. Int. J. of Gaming and Computer-Mediated Simulations, 2010. (to appear)

Zander T, Kothe C, Jatzev S, Gaertner M. Enhancing Human-Computer Interaction with Input from Active and Passive Brain-Computer Interfaces. In: Tan D, Nijholt A (Eds.), Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction. Springer, London, UK, 2010, 149-178.
