
Multi-Modal and Multi-Brain-Computer Interfaces

A review

Anton Nijholt

Faculty EEMCS, Human Media Interaction

University of Twente

Enschede, the Netherlands

anijholt@cs.utwente.nl

Abstract—In this short paper we survey recent research views in non-traditional brain-computer interfaces. That is, interfaces that can process brain activity input, but that are designed for non-clinical purposes, that are meant to be used by ‘healthy’ users, that process other user input as well, and that even allow input, voluntarily or involuntarily, from multiple users. This is a rather new research and BCI application area. Control of applications can be made more robust by fusing brain activity information with other information, either explicitly provided by a user (such as commands) or extracted from the user by interpreting his or her behavior (movements, posture, gaze, facial expression, nonverbal speech) or sensing (neuro-)physiological characteristics (skin conductivity, heart rate, brain activity). We also see research where the brain activity of multiple users is fused to get a (sometimes critical) control task done. We also see the emergence of applications where BCI measurements help to improve group performance, for example, during meetings, or where information obtained from these measurements helps to learn about audience preferences. Multi-brain BCI has also become a paradigm in artistic and game applications. Artistic interactive BCI applications may require audience participation. In game environments the brain activity of various players can be used to control or adapt the game. Both competition and collaboration in serious games, entertainment games, and artistic installations require the fusion of EEG measurements from different subjects.

Keywords: Brain-Computer Interfaces (BCI); multi-modal user interfaces; passive BCI; multi-brain computing; hybrid BCIs; BCI games; multimodal interaction; human-computer interaction; artistic BCI applications

I. INTRODUCTION

Brain-Computer Interface (BCI) research used to focus on BCI applications for disabled users, addressing users for whom BCI control was the only way of communicating with the ‘external’ world. This has changed completely in recent years. BCI is now becoming integrated with multimodal interaction research, where BCI is only one of the possible modalities that provide information about what the user wants and experiences.

What the user ‘wants’ is usually about direct control of the application. How can we intentionally control the environment, and make changes to it, by multimodal interaction? That is, how can we have the environment understand our intentional interactions with it? This interaction can be done through traditional interfaces and devices, such as keyboards, joysticks, and remote controllers. But nowadays we can also use more human-like interaction modalities, including gestures, facial expressions, gaze behavior, body language, and physiological information, to control an environment or an application.

In human-human interaction we can distinguish social signals that we unconsciously generate, analyze, and interpret when interacting with our (conversational) interaction partners [1]. Natural interaction with our digitally enhanced physical environments requires that such environments, including tangibles, robotics, and wearables, are aware of, understand, and are able to provide natural feedback to this natural input. We can distinguish between (1) executing commands and making other intentionally demanded changes to properties of our interaction environment, and (2) changes to the environment, and to how we interact with it, that result from information that human interactants provide involuntarily, for example, social signals or (neuro-)physiological signals that provide information about our mental and emotional state.

There is no clear-cut distinction between these two viewpoints. Firstly, commands given by a user can be given a different interpretation because of the mental or emotional state of the user. In fact, this may lead to a situation where the environment knows better than the user. Secondly, a user can learn how a particular interaction environment interprets naturally occurring social and physiological signals. This makes it possible to attempt to manipulate these signals and turn them into commands that aim to control the environment. For games this can be a useful add-on to other, more explicit commands, usually given with more traditional input modalities (mouse, keyboard, and joystick). We can say that in a sense we are then displaying deceptive behavior. It can be interesting to integrate such behavior in (serious) game situations.

It becomes interesting when we look at applications that exploit these viewpoints. We can have applications that become more robust, or allow more natural interaction, when available brain activity information is added to other information provided to the interface in order to get a certain task done. Or, the other way around, we want to give a command that follows from the interpretation of consciously manipulated brain activity, but that is supported by information obtained from other modalities that provide user information. Somewhere in the processing track (from signal analysis to decision making) the information obtained from these different modalities of a single user needs to be integrated (fused).

In addition to single-user applications, nowadays we also see many BCI applications that assume multiple users. A game can have competing and collaborating BCI users. More generally, we can look at BCI applications where all users, or only some of them, are using BCI, and where, in decision-making, serious game, and entertainment game situations, users and teams of users can collaborate and compete. In task- and decision-oriented multi-participant situations, brain activity measurements provide information that can support task performance in real time, as well as off-line analysis that supports future task performance (e.g., analysis of the task performance of individual team members). In game, entertainment, and (serious game) simulation situations we can also think of naturally emerging and intended competition and collaboration behavior. In artistic applications we have the artist who designs the roles of performers and audience members. Rather than passively enjoying a performance, participants in an artistic BCI installation enjoy their active participation and contribute to a joint art performance.

In the next section (Section II) we make some observations on developments in BCI and BCI applications that have led to multi-modal and multi-brain-computer interfaces. These developments already started in the late sixties and early seventies of the previous century. In artistic explorations of EEG brain activity measurements, as early as the nineteen-seventies, we often see artists exploring the brain activity patterns of two or more users to create a playful application or a piece of art [2,3]. Since nowadays we have sensors and actuators everywhere, and wearable sensors that allow the measuring of behavioral and physiological information, we may also assume that wearable sensors will collect our brain activity and that the Internet of Things will interpret this activity in the context of our daily multi-modal activities. We end this short paper with some conclusions in Section III.

II. EXPLORING BCI IN NON-CLINICAL GAME AND HCI CONTEXTS

In 1968 Psychology Today published a study by Joe Kamiya on conscious control of brain waves [4]. The paper, based on earlier work from 1962 and submitted in 1966, is about EEG monitoring of alpha rhythms and the ability of subjects to consciously control their alpha waves. In the paper there are references to Zen meditators and LSD experiences, probably inspired by the 1968 ‘Zeitgeist’. At that time artists had already started to explore monitoring and controlling brain activity for playful and artistic applications. We mention some of this work in the following subsections.

More influential, especially in the clinical world, was a paper by Jacques J. Vidal, published in 1973 [5]. In this paper on collecting and distinguishing EEG signals from different brain areas, Vidal mentions spontaneous or ongoing electrical activity, buried in which waveforms can be found that are evoked, for example, by external stimuli. Vidal made the following observation: “Can these observable electrical brain signals be put to work as carriers of information in man-machine communication or for the purpose of controlling such external apparatus as prosthetic devices or spaceships?” And, in addition, “Even on the sole basis of the present states of the art of computer science and neurophysiology, one may suggest that such a feat is potentially around the corner.” In the years that followed, clinical BCI research focused on distinguishing and evoking internal stimuli and on how to use external stimuli for the benefit of patients and disabled persons. Patients can control prosthetic devices or a wheelchair by imagining movements, rather than performing these movements. External stimuli can help to determine a patient’s interest and his or her emotional state, and assist the patient in decision making. Research efforts have focused on providing ALS (Amyotrophic Lateral Sclerosis) patients with communication abilities.

A. Multimodal Interaction Revisited

In multimodal interaction research the focus is on human-computer interaction research and applications that aim at detecting, integrating, and interpreting information that a computer can obtain by perceiving the behavior and activity of its human interactants. The interface between human(s) and computer can be a keyboard, but also an interface that understands speech, gestures, facial expressions, eye gaze, body language, and (neuro-)physiological information. BCI can be integrated as one of the many modalities in this kind of research. Early applications of BCI research in HCI dealt with measuring mental load, for example, to evaluate alternative interface designs, not yet aiming at on-line adaptation of an interface to the user. In later years, in HCI research, we can also find applications of BCI research that aim at adapting interface properties and interactions to brain activity measurements in close to real time [6]. More recently the HCI community has embraced a user’s brain activity as an input modality that needs to be integrated with information obtained from other input modalities.

At this point it is useful to introduce some views on multimodal interaction as they were introduced many years ago and see how they provide insight into integrating BCI in multimodal human-computer interaction research. It should be mentioned that during these early years of multimodal interaction research the emphasis was on giving an interface commands that reflected human-human interaction commands or requests, for example “Tell me more about this one,” uttered while pointing at a map that shows the position of army divisions. Understanding this multimodal utterance requires the fusion of information made available by speech and gesture recognition devices. We will return to this issue later in this section. There are of course many more examples of human and human-computer interaction where the understanding of an utterance or an activity can profit from information provided by (nonverbal) speech, eye gaze, gestures, body language, facial expression, or (neuro-)physiological information.

Multimodality is about processing two or more input modes in a coordinated way. We can have multimodality in parallel, requiring simultaneous processing of signals, or sequential, with alternating modalities. An example of parallel modality is a situation where a facial expression supports, or is in contrast with, nonverbal speech or the contents of a message that is delivered in an utterance. This allows for more natural interaction, and it allows uncertain or missing information in one modality to be compensated for by information obtained from another modality. It helps to get a more complete picture of a computer’s conversational partner, or, more concretely, to disambiguate his or her commands. In a sequential or alternating processing of modalities we process the modality signals in sequence. We do not know in advance whether there will be signals coming from a next and maybe different modality that will modify the interpretation of the previous modality signals. And, when the aim is to have real-time natural interaction behavior, we should not wait for information that can be obtained from signals that are detected later in the human-computer interaction, unless it can be done in the way we expect from a human who re-interprets his or her previous judgements when new information arrives.

Oviatt [7] discussed ten myths of multimodal interaction. One of them is the assumption of simultaneity. According to Oviatt, simultaneity is more the exception than the rule in multimodal interaction. But clearly, in her views she mainly addresses speech and gestures, and she and many others who have been investigating multimodal interaction do not address the more nontraditional information detected by (neuro-)physiological sensors. This (neuro-)physiological information can help to adapt the interface to the user, to better interpret information coming from other modalities, or, in the case of a brain-computer interface, it can even be used to issue commands.

B. Multimodality and BCI Research

What about BCI and multimodality? One obvious observation is that brain activity is a continuous process. There is always brain activity; it can be measured and it can be related to our cognitive and emotional states, and to the tasks we are performing or intend to perform. Moreover, we can measure brain activity that arises from explicit external stimuli, and there is also the possibility that we can manipulate our brain activity, for example by trying to relax, simulating an aggressive mental state, imagining performing a complex task, or imagining a particular movement.

Hence, when investigating multimodality that includes the Brain Activity Modality, we can:

(1) Support the interpretation of traditional interaction modalities by taking into account brain activity information that tells us about the mental state of the user

(2) Add to the interpretation of traditional interaction modalities by taking into account brain activity information that appears because of an action that addresses one or more particular interaction modalities.

(3) Issue commands by consciously activating brain regions (and have this activity translated into commands) or by allowing external stimuli to change brain activity patterns (and use the detection of changes to issue commands). Information obtained from other modalities can of course help to find the right interpretation of the not necessarily perfect information obtained from brain signals.

Interestingly, already in the sixties and seventies of the previous century we can see artists exploring brain activity information for artistic expression [2]. Creative use can be made of brain activity measurements, embedded in a context of multiple modalities, to design creative, playful, and artistic applications. In these interactive art pieces we have people synchronizing their brain activity in order to obtain an aesthetically pleasing visualization, have their brain activity manipulate a soundscape, or turn brain activity into the playing of a new instrument, or of an electronically modified traditional instrument, during a music performance.

HCI applications came later. One obvious application is comparing different interface designs by EEG measurements of mental workload while using the interfaces. Task classification using EEG was done by Lee and Tan [8]. Chen and Vertegaal [9] introduced EEG measurements of mental workload to manage, in real time, interruptions in what they called physiologically attentive user interfaces. It is an example of ‘passive BCI’: brain activity is measured continuously, that is, it is monitored, and the interface decides how to use this information in future interactions with the user. Passive BCI is also discussed in Cutrell and Tan [10] and in Girouard [11]. A survey of early passive brain-computer interfaces can be found in George and Lécuyer [12]. Zander et al. [13] claimed the notion of passive BCI, but clearly we can already find passive BCI applications in the previous century and in the papers just mentioned.
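
As an illustration of this passive use of brain activity, the following minimal sketch shows a workload-gated notification loop in the spirit of physiologically attentive interfaces. It is only a sketch: the theta/alpha ratio as workload proxy, the band limits, and the threshold are assumptions made for the example, not values taken from the studies cited above.

```python
# Sketch of a 'passive BCI' loop: EEG is monitored continuously and a simple
# workload index decides whether a notification is delivered now or deferred.
# Band limits, the theta/alpha workload proxy, and the threshold are illustrative.
import numpy as np

FS = 256  # sampling rate (Hz), assumed for the example


def band_power(epoch, fs, lo, hi):
    """Average spectral power of a single-channel epoch in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()


def workload_index(epoch, fs=FS):
    """Illustrative workload proxy: theta power divided by alpha power."""
    theta = band_power(epoch, fs, 4, 7)
    alpha = band_power(epoch, fs, 8, 12)
    return theta / (alpha + 1e-12)


def deliver_or_defer(epoch, pending_notifications, threshold=1.5):
    """Passive use of brain activity: no commands, only adaptation of the interface."""
    if workload_index(epoch) > threshold:
        return []  # user seems busy: hold the notifications back
    return pending_notifications  # low workload: deliver them now


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal(2 * FS)  # 2 s of fake single-channel EEG
    print(deliver_or_defer(epoch, ["new e-mail"]))
```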

C. Multimodality and BCI Communities

It is useful to note that there are different BCI communities. Researchers in BCI applications with a clinical background turn out not to be interested in playful and game-like applications of BCI. Their ideal user is an ALS patient, and knowledge sources other than the BCI modality, such as behavior modelling, task modelling, and common-sense modelling, as well as other interaction modalities, are not yet considered and made part of their research. BCI researchers do not necessarily have a background in computer science or human-computer interaction. Human-computer interaction research, artificial intelligence research, and entertainment technology research are research contexts with which traditional and clinical BCI researchers are not familiar.

In clinical BCI research there have hardly been attempts to research BCI beyond one particular BCI paradigm. The assumption is that a BCI needs to be responsible for providing full and unique assistance to a user who has no other possibility to communicate with the world than through BCI. In more recent years, rather than embedding BCI research in a multimodal interaction context, traditional BCI researchers introduced the notion of hybrid BCI [14]. Rather than having their BCI research fully embedded in multimodal and human-computer interaction research, including research on artificial intelligence, we see only modest attempts to redefine BCI in such a way that a BCI interface allows the use of more than one BCI paradigm. In [14] both simultaneous and sequential use of different BCI paradigms is considered, in particular ERD (motor imagery) and SSVEP. An early review of hybrid BCIs is [15].

In computer science and human-computer interaction research BCI has been welcomed as an additional source of information that can help to have more robust and enjoyable interaction with a multimodal interaction device, a wearable, or an environment with embedded sensors and actuators. In our research group we experimented with many modalities and BCI paradigms. For example, rather than using imagined movement, we used brain activity that follows from real movement [16], had simultaneous use of keyboard and relaxation (alpha activity) in a BCI adaptation of the World of Warcraft game [17], integrated SSVEP and alpha activity in the Bacteria Hunt game [18], and had multiple players cooperating in a game where SSVEP, keyboard, speech, and alpha activity can be used to navigate and issue commands (Mind the Sheep!) [19].

Sequential and parallel (simultaneous) multimodal interaction was discussed by Nigay and Coutaz [20]. A roadmap for multimodal research was developed in 2001 at a Dagstuhl seminar [21]. In this roadmap there is no mention of BCI, but it does mention non-obtrusive multimodal sensors and biometrics as enabling technologies. Fusion of information coming from different modalities is needed in the case of simultaneous multimodality in order to get a correct interpretation of the user’s input, voluntary or involuntary. We can look at BCI as a means to process brain activity information (from EEG) that has to be integrated with information obtained simultaneously from other input modalities such as eye gaze, speech, facial expressions, gestures, physical body movements, or other physiological signals. Early integration (low-level processing) and late integration (at the decision level), and their advantages and disadvantages, are discussed in a review paper by Matthew Turk [22]. Late integration may miss possible cross-modal interactions. Usually three possible levels for fusion are distinguished: the signal level, the feature level, and the decision (or application) level.
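
The difference between feature-level and decision-level fusion can be sketched as follows for one EEG modality and one gaze modality. The placeholder features, the callable classifiers, and the voting weights are assumptions made for the illustration; they do not correspond to any particular system discussed here.

```python
# Illustrative sketch of two fusion levels for a single user with two modalities
# (EEG and eye gaze). Feature extractors and classifiers are placeholders.
import numpy as np


def eeg_features(eeg_epoch):
    # placeholder EEG features (band powers would be used in a real system)
    return np.array([eeg_epoch.mean(), eeg_epoch.var()])


def gaze_features(gaze_samples):
    # placeholder gaze features (e.g. mean fixation position)
    return gaze_samples.mean(axis=0)


def feature_level_fusion(eeg_epoch, gaze_samples, joint_classifier):
    """Feature level: concatenate per-modality features, classify once."""
    joint = np.concatenate([eeg_features(eeg_epoch), gaze_features(gaze_samples)])
    return joint_classifier(joint)


def decision_level_fusion(eeg_epoch, gaze_samples, eeg_clf, gaze_clf,
                          w_eeg=0.4, w_gaze=0.6):
    """Decision level: classify each modality separately, then combine the outputs."""
    return (w_eeg * eeg_clf(eeg_features(eeg_epoch)) +
            w_gaze * gaze_clf(gaze_features(gaze_samples)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(512)        # fake single-channel EEG epoch
    gaze = rng.standard_normal((30, 2))   # fake (x, y) gaze samples
    toy = lambda v: float(v.sum() > 0)    # toy 'classifier' returning 0 or 1
    print(feature_level_fusion(eeg, gaze, toy),
          decision_level_fusion(eeg, gaze, toy, toy))
```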

D. Hyper Scanning and Multi-Brain BCI

BCI research is not necessarily aimed at patients. We can look at domestic applications, challenging game applications, and applications that know about our preferences and take our emotional state into account. BCI has also been introduced in situations that assume or require brain activity input from more than one user. In fact, many of the first BCI explorations were done by artists and often incorporated more than one user [2]. As early as 1974, artist Nina Sobell [23] introduced a game where two subjects were required to synchronize their brain activity, measured by an EEG device.

In [24] simultaneous scanning of multiple brains is discussed. Causality patterns were studied while the subjects were performing a cooperative game. The authors introduced the name ‘hyperscanning’ for this way of EEG recording. The aim of this research is to learn about brain patterns that are related to social interaction (social neuroscience). A review paper on hyperscanning methodologies, studies, and devices was published online in 2012 [25]. In Stevens et al. [26] team cognition is studied using wireless EEG headsets, focusing on the attention, engagement, and mental workload of the members of a team in a cooperative (serious) game.

In [27,28] we introduced the name multi-brain computing for BCIs that require input from multiple subjects (often gamers) who are either competing or collaborating. This is different from hyperscanning, since patterns related to interaction are less relevant - or not relevant at all - compared with, for example, the commands that need to be given by the gamers’ voluntary control of brain activity. Take as an example a game or serious application with two players. Using motor imagery or relaxation they can compete, each one trying to have a virtual ball roll in the direction of the opponent’s goal. Hence, we have to ‘subtract’ their brain activity in order to see who wins. In a different situation we can, for example, use P300. Who is first to make a decision, that is, who is fastest in achieving the accuracy needed to make a particular decision about how a game should proceed, which weapon is chosen, or when a gun is fired? Players that collaborate can join their brain power in order to decide about movements and other actions. Teams of collaborating players can compete with each other. Many more examples and applications can be given. In many artistic BCI applications we see participants playing with visualizations or audifications that follow from their joint brain activity, or modifying an existing soundscape or animation. We refer to [28] for a recent comprehensive survey of the literature on multi-brain computing.
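
A minimal sketch of the two-player ‘ball’ example is given below: each player tries to relax (raise alpha power), and the difference between the players’ relative alpha levels pushes the ball towards one goal or the other. The band limits, the use of relative alpha power, and the gain are assumptions made for the illustration, not parameters of any of the games cited here.

```python
# Sketch of a competitive two-player relaxation game: the more relaxed player
# (higher relative alpha power) pushes the virtual ball towards the opponent's goal.
import numpy as np

FS = 256  # sampling rate (Hz), assumed for the example


def relative_alpha(epoch, fs=FS):
    """Alpha-band (8-12 Hz) power as a fraction of total power for one epoch."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    return alpha / (psd.sum() + 1e-12)


def update_ball(ball_x, epoch_p1, epoch_p2, gain=0.5):
    """'Subtract' the two players' brain activity: the more relaxed player wins ground."""
    drive = relative_alpha(epoch_p1) - relative_alpha(epoch_p2)
    return float(np.clip(ball_x + gain * drive, -1.0, 1.0))  # -1 and +1 are the goals


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ball = 0.0
    for _ in range(5):  # five fake 2-second epochs per player
        ball = update_ball(ball, rng.standard_normal(2 * FS), rng.standard_normal(2 * FS))
    print("ball position:", ball)
```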

Clearly, for competing or collaborating multi-brain applications we also need, similar to what we saw in multimodal applications, to take care that we fuse the multiple streams of brain activity. However, unlike in the multimodal case, where we integrate information originating from different modalities, here we have one modality, although different BCI paradigms can be involved. Nevertheless, we can again distinguish data fusion at the (brain) signal level, the feature level, and the decision level. In [29] a cooperative movement-planning task is discussed, using ERPs for directional cues. The authors considered three fusion methods: ERP averaging (signal level), feature concatenation, and voting (decision level), leading to different accuracies (feature concatenation 84%, ERP averaging 92%, voting 95%), where a single user scored 66%. Observations on different levels of fusion for multi-brain BCI can also be found in [30]. More research is needed in order to decide which method should be preferred for which kind of application and BCI paradigm.
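
The three fusion levels compared in [29] can be summarized in the following sketch. The feature extraction and classification steps are left as placeholders (assumptions for the illustration); the accuracies quoted above come from the cited study, not from this code.

```python
# Sketch of three multi-brain fusion levels: signal-level ERP averaging,
# feature-level concatenation, and decision-level majority voting.
import numpy as np


def fuse_signal_level(erps):
    """Average the time-locked ERPs of all users, then classify the grand average."""
    return np.mean(erps, axis=0)             # shape: (channels, samples)


def fuse_feature_level(feature_vectors):
    """Concatenate per-user feature vectors into one joint vector for a single classifier."""
    return np.concatenate(feature_vectors)   # shape: (n_users * n_features,)


def fuse_decision_level(per_user_labels):
    """Majority vote over the individual users' classifier decisions."""
    labels, counts = np.unique(per_user_labels, return_counts=True)
    return labels[np.argmax(counts)]


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    erps = [rng.standard_normal((8, 256)) for _ in range(3)]        # 3 users, 8 channels
    print(fuse_signal_level(erps).shape)                             # (8, 256)
    print(fuse_feature_level([e.mean(axis=1) for e in erps]).shape)  # (24,)
    print(fuse_decision_level(["left", "right", "left"]))            # 'left'
```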

III. CONCLUSIONS

In this short paper we discussed some recent developments in brain-computer interfacing. Many new developments appear because of the growing interest in BCI for non-clinical applications. There is interest in game, domestic, and artistic applications. Especially in the game and artistic communities we see experiments and BCI installations that involve multiple users and combinations of various modalities. The number of such applications will grow and will also stimulate the introduction of new and cheaper EEG devices.

IV. REFERENCES

[1] A. Vinciarelli, M. Pantic, and H. Bourlard, Social signal processing: Survey of an emerging domain. Image and Vision Computing 27(12), 2009, pp. 1743–1759.

[2] D. Rosenboom Ed., Biofeedback and the Arts: results of early experiments, A.R.C. Publications, Vancouver, 1976.

[3] H. Gürkök and A. Nijholt, “Brain-computer interfaces for arts,” in Proceedings of the 2013 Conference on Affective Computing and Intelligent Interaction (ACII 2013), 2013, Geneva, Switzerland. pp. 827-831. IEEE Computer Society.

[4] J. Kamiya. “Conscious control of brain waves,” Psychology Today, 1968, 1(11) pp. 56-60.

[5] J. Vidal, “Toward direct brain-computer communication”, in Annual Review of Biophysics and Bioengineering, L.J. Mullins, Ed., Annual Reviews, Inc., Palo Alto, Vol. 2, 1973, pp. 157-180.

[6] D.S. Tan and A. Nijholt, “Brain-computer interfaces and human-computer interaction,” Chapter in Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction. Human-Computer Interaction Series. Springer Verlag, London, 2010, pp. 3-19.

[7] S. Oviatt, “Ten myths of multimodal interaction,” Communications of the ACM, vol. 42, no. 11, November 1999, pp. 74-81.

[8] J.C. Lee and D. S. Tan, “Using a low-cost electroencephalograph for task classification in HCI research,” In Proceedings of the 19th annual ACM symposium on User interface software and technology (UIST '06). ACM, New York, NY, USA, 2006, pp. 81-90.

[9] D. Chen and R. Vertegaal, “Using mental load for managing interruptions in physiologically attentive user interfaces,” in CHI '04 Extended Abstracts on Human Factors in Computing Systems (CHI EA '04). ACM, New York, NY, USA, 2004, pp. 1513-1516.

[10] E. Cutrell and D. S. Tan, “BCI for passive input in HCI,” in Proc. Workshop on Brain-Computer Interfaces for HCI and Games, joint with ACM CHI Conference on Human Factors in Computing Systems, 2007.

[11] A. Girouard, “Adaptive brain-computer interface,” in CHI '09 Extended Abstracts on Human Factors in Computing Systems (CHI EA '09). ACM, New York, NY, USA, 2009, pp. 3097-3100.

[12] L. George and A. Lécuyer, “An overview of research on ‘passive’ brain-computer interfaces for implicit human-brain-computer interaction,” in International Conference on Applied Bionics and Biomechanics ICABB 2010-Workshop W1 'Brain-Computer Interfacing and Virtual Reality', Oct 2010, Venice, Italy.

[13] T. O. Zander, C. Kothe, S. Welke, and M. Roetting, “Utilizing secondary input from passive brain-computer interfaces for enhancing human-machine interaction,” in Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience. Lecture Notes in Computer Science 5638, Springer Berlin Heidelberg, 2009, pp. 759-771.

[14] G. Pfurtscheller et al., “The hybrid BCI,” Frontiers in Neuroscience, 21 April 2010.

[15] S. Amiri, R. Fazel-Rezai, and V. Asadpour, “A review of hybrid brain-computer interface systems,” Advances in Human-Computer Interaction, vol. 2013, ID 187024, 8 pages.

[16] B.L.A. van de Laar, D. Plass-Oude Bos, B. Reuderink, and D.K.J. Heylen, “Actual and imagined movement in BCI gaming,” in Proceedings of Artificial Intelligence and Simulation of Behaviour (AISB 2009), 06-09 Apr 2009, Edinburgh, Scotland. SSAISB, Brighton.

[17] D. Plass-Oude Bos, B. Reuderink, B.L.A. van de Laar, H. Gürkök, C. Mühl, M. Poel, A. Nijholt, and D.K.J. Heylen, “Brain-computer interfacing and games,” Chapter in Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction. Human-Computer Interaction Series. Springer Verlag, London, 2010, pp. 149-178.

[18] C. Mühl et al., “Bacteria Hunt: A multimodal, multiparadigm BCI game,” in Proceedings of the Fifth International Summer Workshop on Multimodal Interfaces, eNTERFACE'09, 2010, Genova, Italy, pp. 41-62.

[19] H. Gürkök, “Mind the Sheep! User experience evaluation & brain-computer interface games,” PhD thesis, University of Twente, the Netherlands, 2012.

[20] L. Nigay and J. Coutaz, “A design space for multimodal systems: concurrent processing and data fusion,” in Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems (CHI '93), ACM, New York, USA, 1993, pp. 172-178.

[21] H. Bunt, M. Kipp, M. Maybury, and W. Wahlster, “Fusion and coordination for multimodal interactive information presentation,” Chapter in O. Stock and M. Zancanaro, eds., Multimodal Intelligent Information Presentation, Springer, Berlin, 2005, pp. 325–339.

[22] M. Turk, “Multimodal interaction: A review,” Pattern Recognition Letters, vol. 36, January 2014, pp. 189-195.

[23] N. Sobell, “Streaming the brain,” IEEE Multimedia 9(3), 2002, pp. 4–8.

[24] F. Babiloni et al., “Hypermethods for EEG hyperscanning,” in Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), New York, USA, 2006, Vol. 1, pp. 3666–3669.

[25] F. Babiloni and L. Astolfi, “Social neuroscience and hyperscanning techniques: past, present and future,” Neuroscience & Biobehavioral Reviews, vol. 44, July 2014, pp. 76-93. doi: 10.1016/j.neubiorev.2012.07.006. Epub 13 Aug 2012.

[26] R. Stevens, T. Galloway, C. Berka, and A. Behneman, “A neurophysiologic approach for studying team cognition,” in Proceedings Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), 2010, pp. 1–8.

[27] A. Nijholt and H. Gürkök, “Multi-brain games: cooperation and competition,” in Universal Access in Human-Computer Interaction, UAHCI 2013, 21-26 July 2013, Las Vegas, USA, Lecture Notes in Computer Science 8009, Springer Verlag, Berlin, pp. 652-661.

[28] A. Nijholt, “Competing and collaborating brains: multi-brain computer interfacing,” Chapter 12 in Brain-Computer Interfaces: Current Trends and Applications, A.E. Hassanien and A.T. Azar, Eds. Springer International Publishing, Switzerland, 2015, pp. 313-335.

[29] Y. Wang and T.P. Jung, “A collaborative brain-computer interface for improving human performance,” PLoS ONE, vol. 6, no. 5, e20422, 2011, pp. 1–11.

[30] L. Bonnet, F. Lotte, A. Lécuyer, “Two brains, one game: design and evaluation of a multi-user BCI video game based on motor imagery,” IEEE Trans. Comput. Intell. AI Games 5 (2), 2013, pp. 185–198.
