BACHELOR THESIS
Improving the engagement and experience of museum visitors by means of EEG and interactive
screens of VidiNexus
Simone Luiten S2075407
Examination committee:
Dr. M. Poel
Drs. N. M. Bierhuizen
VidiNexus supervisor:
M. Markslag
Faculty of Electrical Engineering, Mathematics and Computer Science University of Twente
P.O. Box 217
7500 AE Enschede
The Netherlands
July 2021
Abstract
The main goal of this research is to explore options to improve the experience and engagement of museum visitors by developing a prototype that combines Brain-Computer Interfaces (BCIs) with the interactive screens of client VidiNexus. This is done by following a methodology focused on three aspects of the research: Museum & Art, BCI, and the prototype. The first two aspects are the focus of the background literature research, and the findings are used to guide the creative process of the prototype development in the right direction. A prototype of the system, including an interactive quiz, matches visitors to artworks based on their choices and on engagement levels measured by an EEG device.
This prototype was created during the ideation, specification, and realization phase of the research;
and tested in the evaluation phase.
In conclusion, the prototype that has been developed and tested shows that the concept of an art-visitor feedback system is feasible. The tests showed that it is technically possible to realise a working prototype of such a system. The positive feedback from the users showed that the prototype added value, as it improved their engagement and they learned more about several aspects of the art. Results of the user tests showed that the prototype is able to improve the experience and engagement of participants. The prototype was described as fun and intriguing, and participants were excited that their brainwaves were part of the matching process. Additionally, almost all participants would recommend the system to others if it were placed inside a real museum. Before the system can be placed inside a real museum, however, adjustments to the quiz, the equipment, and the art database have to be made.
Acknowledgments
First of all, I want to thank Mannes Poel and Nienke Bierhuizen for thoroughly assisting me throughout the whole process of the graduation project and for providing valuable feedback on my report.
Secondly, I would like to thank Maurice Markslag; your enthusiasm and good communication made you a pleasure to work with. Your feedback on my concepts was highly valued.
Next, I want to thank everyone who took part in the user tests. Without your participation, the research would not have been possible. Lastly, I would like to thank family and friends who helped me throughout the research by providing personal feedback and support.
Table of contents
Abstract ... 1
Acknowledgments ... 2
Table of Figures ... 6
Chapter 1: Introduction ... 7
1.1 Introduction and problem description ... 7
1.2 Research questions... 8
1.3 Scope of the research ... 9
1.4 Methodology ... 9
1.5 Structure of the report ... 10
Chapter 2: State of the art research ... 11
2.1 Museums & Art ... 11
2.1.1 Visitor experiences of art ... 11
2.1.2 Visitor interactions inside museums ... 13
2.2 Brain-Computer Interfaces (BCI) ... 15
2.2.1 BCI systems ... 15
2.2.2 BCI in the entertainment industry ... 16
2.2.3 BCI to improve the user engagement in museums ... 17
2.3 Electroencephalogram (EEG) ... 19
2.3.1 EEG measurements ... 19
2.3.2 EEG systems ... 19
2.4 Emotion recognition ... 23
2.4.1 Emotion classification models ... 23
2.4.2 EEG emotion recognition methods ... 24
2.5 Interviews ... 28
2.5.1 The MuseumFabriek ... 28
2.5.2 Concordia... 29
2.5.3 Philosopher of science; University of Twente ... 30
2.5.4 Artist (1) ... 31
2.5.5 Artist (2) ... 31
2.5.6 Museum visitors ... 32
2.6 Conclusion ... 34
Chapter 3: Ideation ... 36
3.1 Stakeholder analysis ... 36
3.2 Design requirements ... 38
3.3 Brainstorming ... 40
3.3.1 First concepts ... 40
3.3.2 Technical feasibility check ... 41
3.3.3 Continuation of the brainstorming process ... 42
3.4 Final concept ... 43
Chapter 4: Specification ... 44
4.1 User experience ... 44
4.1.1 Scenarios ... 44
4.1.2 User experience requirements ... 45
4.2 Technical design specification ... 45
4.2.1 Quiz ... 45
4.2.2 Database ... 46
4.2.3 EEG device ... 46
4.3 Visual design specification... 46
4.3.1 Start screen... 47
4.3.2 Baseline screen ... 47
4.3.3 Question screen ... 48
4.3.4 Result screen ... 49
4.4 Pilot test: Design specifications ... 49
4.4.1 Results pilot test ... 50
Chapter 5: Realisation ... 52
5.1 Technical design realisation ... 52
5.1.1 Functional architecture structure... 52
5.1.2 Logical flow chart ... 53
5.1.3 Art database ... 54
5.2 Visual design realisation ... 56
5.2.1 Infographic ... 56
5.2.2 Quiz interface ... 57
Chapter 6: Evaluation ... 59
6.1 User Testing 1 ... 59
6.2 Results user-test 1 ... 60
6.3 User Testing 2 ... 64
6.4 Results user-test 2 ... 65
6.5 Conclusion user-tests ... 67
Chapter 7: Conclusion ... 69
Chapter 8: Future work ... 71
References ... 73
Appendices ... 77
Appendix A: Expert interview protocol and questions ... 77
Appendix B: Visitor interview protocol and questions ... 78
Appendix C: Information brochure and informed consent; Interviews ... 79
Appendix D: Brainstorm table ... 81
Appendix E: Mind map: Interactive application ... 83
Appendix F: Pilot test questions ... 84
Appendix G: Results survey: pilot test ... 85
Appendix H: Python code link ... 90
Appendix I: Art Database ... 90
Appendix J: Information brochure and consent form; User testing ... 91
Appendix K: Survey questions user test 1 ... 93
Appendix L: Observation table user tests ... 94
Appendix M: Results survey: user test 1 ... 94
Appendix N: Survey questions user test 2 ... 100
Table of Figures
Figure 1: The EXPOmatic interactive screen of VidiNexus [6] ... 7
Figure 2: The methodology schematic of the entire research ... 10
Figure 3: A model of aesthetic experience by Leder et al. [8] ... 12
Figure 4: Different BCI measurement methods [3] ... 15
Figure 5: Overview of BCI device choices [15] ... 16
Figure 6: The 10-20 electrode EEG method [23] ... 19
Figure 7: The Emotiv headset [28] ... 20
Figure 8: The Emotiv headset electrode names [28] ... 20
Figure 9: Raw data of all electrodes from the Emotiv displayed in the Emotiv app [28] ... 20
Figure 10: The MUSE headband with specifications [30] ... 21
Figure 12: Basic emotions on the dimensional model [20] ... 24
Figure 13: Classic emotion recognition method steps [3] ... 24
Figure 14: Emotional states linked to their frequency ranges [12]. ... 25
Figure 15: Brain Computer Interfaces overview and order of method [3] ... 26
Figure 16: Stakeholder analysis ... 37
Figure 17: WOW-graph of the first four concepts ... 41
Figure 18: Schematic overview of the system ... 43
Figure 19: An overview of the different states of the quiz ... 47
Figure 20: The first iteration of the start screen (left) and the baseline screen (right) ... 48
Figure 21: First iteration of a question screen (left) and the result screen (right) ... 49
Figure 22: Simple sketch of the functional architecture structure ... 52
Figure 23: Logical flow chart of the code ... 53
Figure 24: Screenshot of part of the art database ... 55
Figure 25: Last iteration of the infographic ... 56
Figure 26: Last iterations of the start and menu screen ... 57
Figure 27: Last iterations of the question and result screen ... 58
Figure 28: The experimental set up ... 60
Figure 29: Participant with too long/thick hair for the Emotiv EPOC to receive accurate signals ... 62
Figure 30: Graph of the Engagement level results user test 1 ... 63
Figure 31: Graph of the average engagement levels - user test 1 ... 64
Figure 32: New implementations of the screens for user test 2 ... 65
Figure 33: Graph of the engagement levels results of user test 2 ... 66
Figure 34: Graph of the average engagement level results of user test 2 ... 66
Figure 35: The final system displayed on VidiNexus screen ... 70
Chapter 1: Introduction
1.1 Introduction and problem description
Most museums in the Netherlands have little or no interaction with their visitors. A traditional museum visit consists of looking at art and reading some information about it [1], which involves no interaction at all. According to Campos et al. [1], museum visitors nowadays expect to learn while also having fun. Additionally, Abdelrahman et al. [2] argue that the main experience a visitor has of a museum is shaped by their behavior, which in turn is based on their interests and engagement in the museum.
Therefore, interactive museum exhibits, where visitors can change a situation based on their own input and which support social interactions, spark a lot of interest from the visiting audience [3]. In conclusion, sources [1], [2], [3] agree that museums should offer more interactivity to their visitors.
Interaction is not the only aspect of a museum that could be improved. Another issue is that museums generally do not adapt to the times. With the rise of technology and the development of many technological applications in homes, schools, and public spaces, museums cannot lag behind if they want to stay relevant for visitors [4]. Technology provides many new possibilities to approach museum visitors [1], and museums should implement new technologies to enhance the visitor experience. Additionally, Baraldi et al. [5] argue that museums lack an instrument that provides entertainment, instructions, and visit personalization in an effective and natural way.
However, Campos et al. [1] argue that many art museums are struggling to implement innovative and technological approaches to engage visitors in their exhibitions, mainly because it is of great importance that the technological applications meld seamlessly into the museum environment [1]. In this way, visitors do not think of the application as an independent computer but as part of the museum exhibition.
Two smart technologies might support museums in their struggle for more interaction with their visitors: interactive screens and brain-computer interfaces (BCIs). These two technologies are further elaborated in the following paragraphs.
A company that creates interactive screens to engage users is VidiNexus. VidiNexus is the client of this research and is based in Hengelo [6].
The founders of VidiNexus are Jaap Reitsma and Maurice Markslag. The company provides interactive social screens for museums, events, offices, and shops [6]. The interactive screen is called the EXPOmatic. It can continuously show photos, videos, flyers, or social media. It supports almost all popular social media platforms and, additionally, any document can be shared with visitors through Dropbox [6]. The goal of VidiNexus is to enhance the visitors’ journey with the help of these interactive screens. VidiNexus has already accomplished this for many different clients, for example the World Trade Center, Robeco, the MuseumFabriek, and Historisch centrum Overijssel [6]. The same interactive screens are used in this research as a tool to provide interaction between the visitors and the museum.
Figure 1: The EXPOmatic interactive screen of VidiNexus [6]
Another possibility for implementing interaction technology inside a museum to improve visitor engagement is using Brain-Computer Interfaces (BCIs) to provide personal and real-time feedback.
BCIs are systems that can translate a user’s brain activity into commands for an interactive application, or use the brain activity to alter the application [3]. An example is a system that moves a ball to the left or right of the screen purely from the user thinking of left- or right-hand movements.
Furthermore, BCI systems are becoming more and more popular outside the medical sector. The scope is expanding, and nowadays many other BCI applications are being developed for use in the entertainment industry [3], [7]. BCI systems could be valuable for this project because they are able to measure engagement levels and give real-time personal feedback, which could improve the interaction between the user and the museum. Additionally, the data a BCI system obtains could be used to measure, analyze, and interpret visitors’ responses to art. Once these responses are known, improvements can be made to the exhibition. So, newly developed BCI systems may help improve the engagement and experience of users during their visit.
To conclude, with the growing interest in interaction technology between humans and computers and the rapid increase in the use of smart technologies, there is a great need for museums (amongst other things) to implement new technologies like BCI [3]. Therefore, this research focuses on implementing BCI together with the VidiNexus screen inside museums to improve the engagement and experience of visitors. The right balance between art, technology, and interaction needs to be researched.
1.2 Research questions
The purpose of this research is to find out what influences the experience of visitors in a museum and how this experience could be improved by using smart technologies, such as BCI systems and the interactive screens of VidiNexus. Therefore, the following main research question is defined:
How can a Brain-Computer Interface system in combination with interactive screens improve the engagement and experience of museum visitors?
This research question is answered by investigating multiple sub-questions. These sub-questions are divided into two topics: Museum & Art, and BCI. This is done to separate the museum part of the research from the more technical BCI part. Once these topics have been discussed separately, combinations of the main findings can be explored for the final part of the research, which consists of developing and testing a prototype system. The sub-questions can be found below.
Sub-questions about Museum & Art:
This part of the research focuses on the experiences, goals, and needs of the museum itself and the museum visitors. This provides insights about the aspects that could be improved, which are needed to answer the main research question.
- What are the main goals of an art museum and museum visitors?
- How do visitors experience an art museum and the art displayed?
- Which emotional responses do visitors have towards art?
- What features can positively enhance the visitors’ experience in a museum?
Sub-questions about BCI:
This part of the research reflects the capabilities of a BCI system that can be used to answer the main research question. Furthermore, it needs to be explored how BCI systems work and how they may improve the experience and engagement inside a museum.
- How can one measure engagement using a BCI?
- What are the different steps that need to be taken to transform raw sensor data into useful commands?
- How can a BCI system be used to improve the interaction inside a museum?
Sub-questions about the prototype:
At the end of the research, the aim is to have a prototype of an interactive system that improves the engagement and experience of visitors inside a museum. The prototype incorporates the best fitting BCI system and gives users real-time feedback. The prototype is tested, evaluated, and adjusted when deemed necessary. Experiences with the prototype are used to answer the main research question on improving the interaction between a museum and its visitors with smart technologies.
- What is the most promising concept using BCI and interactive screens to improve the user experience and engagement?
- What are the user experiences of the final concept?
1.3 Scope of the research
The research scope is defined by the smart technologies available from the client company and the University of Twente. Furthermore, the selection of museums and the timeline are outlined below.
As stated in the introduction, VidiNexus is a company in Hengelo that provides interactive screens for events, buildings, museums, and shops [6]. With their screens, VidiNexus wants to enhance the journey of the visitors [6]. During this research, a VidiNexus screen is available to implement in the final project.
Furthermore, there are limitations on the usage of BCI: it is limited to the sensors available in the BCI test bed at the University of Twente. Other sensors are not included in the research.
Additionally, the scope of this research is limited to public art museums in the Netherlands. The target group of this research is not yet determined, because it depends on the results of the state of the art research and the interviews with Dutch museums. Finally, this research needs to be completed within 10 weeks; therefore, the scope stays within what is feasible in that time.
1.4 Methodology
To answer the main research question, the sub-questions within the topics need to be answered first.
The schematic overview displayed in Figure 2 shows which subtopics are answered in which chapter.
As can be seen, the State of the Art chapter includes literature research and interviews. This
information is used to answer the first two subtopics. The chapters Ideation, Specification, Realisation, and Evaluation focus on the third topic: prototyping. The information gathered in the State of the Art is used to guide the creative process. Furthermore, the evaluation phase shifts the focus to the main research question and provides a proof of concept. Finally, the conclusion combines the results of the separate parts into an overall answer to the main research question.
Figure 2: The methodology schematic of the entire research
1.5 Structure of the report
This research starts with State of the Art research in Chapter 2. This chapter focuses on the first two topics in this research and reflects the present state of scientific or engineering development.
Additionally, interviews with stakeholders are covered. The information gathered during the State of the Art is used in the third chapter, ‘Ideation’, which focuses on brainstorming. The brainstorm ideas are further elaborated in the fourth chapter, ‘Specification’, and the final concept is realised in the fifth chapter, ‘Realisation’. After the realisation of the prototype, user experience testing and the evaluation of the results are discussed in Chapter 6. The conclusion of the research in Chapter 7 is based on the user testing results and the other findings of the research.
Finally, recommendations for further research are stated in Chapter 8. The appendices contain
intermediate results of the research.
Chapter 2: State of the art research
This chapter includes state of the art research, which reflects the present state of scientific or engineering development. Because this research has a lot of different aspects to it, the state of the art is divided into sections. Each section focuses on a more specific topic of the total research.
Insights for the state of the art research are collected in several ways. Firstly, insights are gathered from (literature) articles and books. The articles come from databases such as Scopus and Google Scholar; the books are borrowed from the library or accessed online via the same databases. Additionally, websites, journals, and videos are used to get a broad view of the research already done. Furthermore, interviews and surveys are conducted to obtain primary data about the needs of the users, the museums, and the client. These methods add valuable extra information about underlying thoughts and specific topics within this research. The interviews and surveys were conducted after the Ethical Committee of the University of Twente approved the protocol.
2.1 Museums & Art
This section covers the current status of museums and the perception of art. Museums can be classified into different categories; the three main ones are modern art museums, historical museums, and science museums (Interview, Section 2.5.1). The main goals of a museum are to inspire, entertain, and educate visitors (Interview, Section 2.5.1). The rest of this section discusses the way people experience art and the level of interaction implemented inside museums. These topics are relevant to this research because both have a great impact on the visitor experience inside a museum.
2.1.1 Visitor experiences of art
What is considered art differs over time and from person to person. According to Leder et al. [8], the borders between what was once considered an artwork and what is called art today are shifting due to the introduction of video and online art. Despite these changes, the factors that shape how people respond to art do not change [8]. Examples of these factors are the environment in which the art is presented and the personal affection for the artwork; they can have a huge impact on how people experience art [8]. These factors are explained in the following sections.
Environment
Brieber et al. [9] have investigated the effects of the environment on the perception of art.
They tested the same artworks in a laboratory environment and in a museum environment, and discovered that art inside a real museum environment elicited higher aesthetic appreciation: visitors liked the artwork more. After further experiments, they found that not only the environment matters but also the time spent looking at artworks: the longer visitors looked at an item, the better the art experience was. People in the museum environment spent, of their own accord, more time looking at artworks than people in the laboratory environment, which also contributed to a better experience of the art in the museum [9].
Another study, by Leder et al. [8], reports the same environmental effects. However, they found that if the research is done in a laboratory but it is explicitly stated that the visitors are participating in an experiment, the effects of the environment are smaller [8].
Aesthetics
The next factor that has an impact on the experiences of art is aesthetics. Leder et al. [8] have made an extensive model about the aesthetic experience of visitors inside a museum or gallery. This model is of relevance for this research because it shows different aspects that influence the experience of art inside a museum. These aspects need to be known, to be able to improve the museum experience.
This model can be seen in Figure 3.
Figure 3: A model of aesthetic experience by Leder et al. [8]
The main elements in the model are perceptual analyses, implicit memory integration, explicit classification, cognitive mastering, and evaluation. These elements can be found in the middle of Figure 3, from left to right. The first block is perceptual analyses. The perceptual analysis of an artwork happens quickly and without any effort on the visitor's part [8]. In this phase, visitors notice contrasts, complexity, symmetry, and order in the artwork.
The second stage, ‘implicit memory integration’, depends on memory effects of the visitor. Leder et al. [8] state that aesthetic preferences are affected by familiarity: a visitor is more attracted to an artwork if, for example, the visitor is familiar with the artist. Leder et al. [8] carried out an experiment in which they attributed paintings to the famous artist Van Gogh, which correlated positively with the aesthetic judgement of the visitors. However, as soon as Leder et al. [8] introduced the paintings as fakes, these positive correlations were strongly reduced. Canning [10] agrees with this effect; she states that the affective response visitors have to art depends on connections that support their engagement [10]. To conclude this second stage: the aesthetic experience depends on the visitor's familiarity with the work.
The third aspect of the model is explicit classification. At this stage, processing is particularly affected by the expertise and knowledge of the visitor [8]. The analyses of this stage are connected with content and style. Artworks have a more positive effect on the visitor as their expertise about the artwork, its historical importance, or the artist increases. This may be of importance for the graduation project because it means the experience can improve if more information about the artwork is available.
The fourth and fifth aspects of the model are ‘cognitive mastering’ and ‘evaluation’. These two processes are very closely connected. According to Leder et al. [8], it is very important to understand the features of art when the art calls for interpretation. If this is done successfully, it is experienced as emotionally positive. Additionally, if the artwork is understood by the visitor, the rewarding centers of the brain are activated, which also improves the experience [8]. During the evaluation process, personal taste plays an important role in affecting aesthetic experiences. Leder et al. [8] also point to ambiguity as a factor that influences the experience during the evaluation process.
The outputs of the model are aesthetic judgement and aesthetic emotion. According to Leder et al. [8], the aesthetic emotion depends on the success of information processing; for example, asking how pleasing an artwork was relates to the aesthetic emotion. Aesthetic judgement is based on what someone thinks of the artwork [8]. The two are closely related, but an example makes the difference clearer: when an experienced visitor decides that a painting is a poor example of a certain painter, the aesthetic judgement is low, yet the process of reaching that decision could still feel rewarding. The more experienced a visitor is in looking at art, the clearer the distinction between aesthetic judgement and aesthetic emotion becomes.
At the end of their paper, Leder et al. [8] give some recommendations and disclaimers. Firstly, the model has not yet been extensively tested; Leder et al. [8] therefore state that testing and refinement are needed to determine and improve its accuracy. Furthermore, they argue that the outcomes (aesthetic emotion and judgement) diverge in different situations, so the interdependence between pleasure, interest, and affective judgements needs to be explored in future research [8].
However, Leder et al. [8] are not the only ones who have researched this topic. A completely different approach to modelling aesthetic experiences is taken by Konston et al. [11]. They studied brain responses to art with EEG to discover the basis of aesthetic experiences; a 32-channel EEG headset was used to identify the parts of the brain that are active during an aesthetic experience caused by artworks. To analyse the aesthetics of an art image, the approach of Konston et al. uses four features: luminance, texture, gradient, and composite features [11]. The luminance is analogous to the brightness of the artwork. The texture feature examines the brushwork of the artist. The gradient maps the rate of change in intensities. The composite features are the grey levels of each segmented block inside the image. Unfortunately, the approach of Konston et al. [11] only reached an accuracy of 55%.
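Two of these image features can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not the implementation used by Konston et al. [11]: it treats luminance as the mean brightness of a greyscale image and the gradient as the average rate of change between neighbouring pixel intensities.

```python
import numpy as np

def luminance_feature(image):
    """Luminance proxy: mean brightness of a greyscale image (values 0-255)."""
    return float(image.mean())

def gradient_feature(image):
    """Gradient proxy: mean magnitude of the intensity gradient,
    i.e. the average rate of change between neighbouring pixels."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

# A flat grey image has zero gradient; a left-to-right brightness ramp does not.
flat = np.full((64, 64), 128, dtype=np.uint8)
ramp = np.tile(np.linspace(0, 255, 64), (64, 1))
print(luminance_feature(flat), gradient_feature(flat), gradient_feature(ramp))
```

Texture and composite features would require more elaborate processing (e.g. filter banks and block-wise grey-level statistics) and are omitted here.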
2.1.2 Visitor interactions inside museums
As mentioned in the introduction, many museums in the Netherlands have little or no interaction with their visitors. A traditional museum visit consists of looking at art and reading some information about it [1], which involves no interaction at all. This is at odds with the observation of Campos et al. [1] that museum visitors nowadays expect to learn while also having fun, which is rarely achieved when a museum offers little or no interaction.
Furthermore, Abdelrahman et al. [2] argue that the main experience a visitor has of a museum is shaped by their behavior, based on their interests and engagement in the museum. Therefore, interactive museum exhibits, where visitors can change a situation based on their own input and which support social interactions, spark a lot of interest from the visiting audience [3]. Additionally, Baraldi et al. [5] argue that museums lack an instrument that provides entertainment, instructions, and personalization effectively and naturally.
Campos et al. [1] have stated a possible reason for the lack of interactivity inside a museum. They argue
that many art museums are struggling to implement innovative and technological approaches to engage the visitors in their exhibitions [1], mainly because it is of great importance that the
technological applications meld seamlessly into the museum environment [1]. In this way, visitors do not think of the application as an independent computer but as part of the museum exhibition. If the system is part of the exhibition, visitors are more inclined to use and like the system. Because museums struggle to meld such installations into the museum environment properly, they often do not implement them at all.
Another difficulty, also stated by Campos et al. [1], is that interactive installations are difficult to
prototype or to test in the early prototype stages. This could be a difficulty because the lack of
prototyping and testing often results in a product of lesser quality [1]. So, when a museum does decide to install an interactive system but the system does not perform as it should (for example, due to a lack of prototyping and testing), the museum could decide to retire the installation. For a museum, one bad experience could even lead to postponing the implementation of interactive installations altogether (Interview The MuseumFabriek, Section 2.5.1). These two difficulties may be part of the reason why there is so little interaction inside museums.
2.2 Brain-Computer Interfaces (BCI)
In recent years, brain-computer interfaces (BCIs) have become more and more popular [7]. Through the combined efforts of scientists, communication between humans and computers via BCI systems is now possible. A BCI can be described as a system that translates a measure of a user’s brain activity into messages or commands for an interactive application [3]. This allows a BCI system to completely or partially replace devices such as a computer mouse or keyboard [12].
BCI systems could be valuable for this project because these systems are able to give real-time personal feedback which could improve the interaction between the user and the museum. Additionally, the data a BCI system obtains could be used to measure, analyze and interpret visitors’ responses toward art. If these responses are known, improvements can be made. Therefore, newly developed BCI systems may help improve the engagement and experience of users during their visit. However, BCI has more to offer for a museum. BCI is a new and, for many people, exciting technology. Having an interactive BCI system inside a museum could also attract more visitors.
2.2.1 BCI systems
BCIs can be invasive or non-invasive. Invasive means that surgical placement of electrodes on, or in, the brain is required; these electrodes record brain activity. Invasive BCI is (so far) only used for medical applications. Non-invasive BCI does not require surgical placement of the electrodes; the sensors can be placed on the skin itself, for example on the scalp. Non-invasive recordings can be divided into two methods: direct measurements and indirect measurements [3].
According to Nam et al. [3], direct measurements detect the electrical or magnetic activity of the brain, such as EEG. Indirect measurements reflect the metabolism or hemodynamics of the brain and do not directly characterize the neuronal activity [3].
Figure 4 gives an overview of BCI sensor methods. As can be seen in the figure, a distinction is made between direct and indirect methods. For this research, the focus is on direct methods, shown on the left side of the graph. On the right side of Figure 4, the hardware complexity and price of direct and indirect methods are shown; EEG has the lowest complexity and price. Therefore, the main focus is on direct, non-invasive EEG recording methods.
Figure 4: Different BCI measurement methods [3]
Even with the main focus on non-invasive EEG recording methods, many choices remain to be made. Prpa & Pasquier [13] have made a more extensive overview of a basic BCI device to display these choices. This overview shows, for example, the choices that need to be made about different types of electrodes and types of output data. The overview can be seen in Figure 5.
Figure 5: Overview of BCI device choices [13]
As can be seen in Figure 5, the output data of a BCI system can be divided into three groups: raw data, brainwave data, and hybrid data [13]. According to Nijholt [14], these three groups can be used in an interactive application in two ways: control by affective state and issuing commands by brain signals.
Control by affective state uses the output data to increase or decrease the task load or information flow [14]. According to Nijholt, this can provide pleasant and effective feedback that keeps the user interested. The second way, issuing commands by brain signals, uses the output data to control the interactive application itself. For example, the mental state or emotion of the user can be used as input for a game [14].
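The first mode, control by affective state, can be illustrated with a small sketch. The thresholds and the integer difficulty scale below are hypothetical, not taken from Nijholt's work; the function merely shows how a passively measured engagement score could steer the task load of an interactive application:

```python
def adjust_task_load(engagement, current_level, low=0.3, high=0.7):
    """Adapt the difficulty level from a normalized engagement score (0-1).

    The thresholds `low` and `high` are illustrative assumptions,
    not values taken from the literature.
    """
    if engagement < low:           # user seems disengaged: simplify the task
        return max(1, current_level - 1)
    if engagement > high:          # user is highly engaged: raise the challenge
        return current_level + 1
    return current_level           # comfortable zone: keep the current level
```

Called once per measurement window, such a rule keeps the task load oscillating around the user's comfortable engagement band, which is the kind of pleasant, effective feedback loop Nijholt describes.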
2.2.2 BCI in the entertainment industry
The entertainment industry has opened a way to use BCI applications in the non-medical sector. When BCI is used for entertainment, the robustness and efficiency required in the medical sector are not always needed [7]. This implies that efficiency is not the main goal or concern when implementing BCI in the entertainment industry. According to Nijholt [7], the goal has shifted more toward affect, comfort, community, and playfulness.
More and more games with implemented BCI are being developed. For example, Bonnet et al. [15] created a multi-user video game called BrainArea. This is a simple football game in which the users can play together or against each other and score goals using the EEG headsets attached to their heads. Furthermore, Nam et al. [3] have developed a Brainball game that aims to lower the stress level of users. The game can only be won if the users relax; in this way, the user learns how to control stress while being in an amusing situation [16]. Another application that Nam et al. [3] have developed is AlphaWow. This game uses EEG activity to detect the emotional state and stress levels of the user and adapts the avatar's form accordingly. In short, EEG measurements can be used for interactive systems and can reflect the stress level and emotional state of a person.
Next to creating new applications that implement BCI, researchers are also combining existing games with brain-control functions to provide a multi-brain entertainment experience [16]. For example, Scherer et al. have developed the Graz-BCI game controller. This controller is able to connect any BCI input (for example, the EEG Emotiv headset) to the online game World of Warcraft [15]. This is important information for this research because it shows that EEG methods can also be connected to already existing interactive systems. This means that existing interactive systems can also be used for the final system.
2.2.3 BCI to improve the user engagement in museums
Applications that improve the user experience and engagement inside museums based on BCI already exist. For this research, it is important to know what has already been done, which elements achieved positive effects, and how the applications improved the engagement of visitors. This section reviews existing BCI applications and shows their advantages and disadvantages.
The first application discussed tries to improve the engagement of visitors by adjusting specific features based on the visitors' preferences. Abbattista et al. [17] have used BCI to explore whether an exhibition piece is of interest to a visitor. This is done with EEG signals captured by a low-cost EEG headset developed by the company NeuroSky, which only uses one electrode. Although this limits the accuracy, the size and comfort of the low-cost EEG device allow for real-time assessment [17]. At the beginning, the user is asked to relax for 10 seconds to create a baseline measurement. Then, images of the artworks are shown for 10 seconds each. Afterward, the user fills in an evaluation form, choosing between three options: interested, neutral, or not interested. The evaluation form is used to measure the accuracy, which averaged 75%. Abbattista et al. [17] used the previously obtained data to study the feasibility of a real-time predicting system. The system chooses the next artwork based on the measurements during the previously shown artworks. When confirming the choices of the system against the opinions of the users, they matched in 70% of the cases. In the end, Abbattista et al. state that the system has high potential to improve user engagement in a museum [17].
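Abbattista et al. do not publish their classification algorithm, so the following is only an assumed sketch of the general idea: compare an engagement index recorded while viewing an artwork against the 10-second resting baseline and label the artwork accordingly. The single `engagement` index per second and the 15% margin are hypothetical choices, not values from the paper:

```python
import numpy as np

def classify_interest(baseline, viewing, margin=0.15):
    """Label a viewing period relative to a resting baseline.

    `baseline` and `viewing` are 1-D arrays of an engagement index
    (e.g. an attention value sampled once per second). The 15% margin
    is an assumed threshold, not taken from the cited study.
    """
    base = np.mean(baseline)
    change = (np.mean(viewing) - base) / base   # relative change vs rest
    if change > margin:
        return "interested"
    if change < -margin:
        return "not interested"
    return "neutral"
```

Comparing such labels with the users' self-reported evaluation forms is then what yields an accuracy figure like the 75% reported above.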
Similar research has been done by Abdelrahman et al. [2]. The system they developed gives real-time feedback and suggestions to the visitor to personalize the visitor's experience. At the entrance of the museum, visitors were presented with a selection of photographs showing exhibited items.
By means of BCI, the level of engagement was measured. This was done with the Emotiv EPOC that the visitors wore during the questions. At the beginning, the visitor was asked to relax for 60 seconds with their eyes closed. This was done to create a baseline EEG signal, which was used for comparison.
After relaxation, the different exhibited items were shown for 20 seconds each. Depending on the measured level of engagement per exhibited item, a starting point for a personalized museum tour was announced. Additionally, the visitor received extra feedback on their mobile phone recommending other exhibit items that may match the same engagement levels. To conclude, the main feature of the system of Abdelrahman et al. [2] was that visitors received a personal route through the museum based on engagement levels measured via the Emotiv EPOC at the entrance. A disadvantage of this system is that families or groups visiting the museum together can be separated, because members may have different starting points [2].
The difference between the first application of Abbattista et al. [17] and the second application of Abdelrahman et al. [2] is that the first application only adjusts to the visitors' preferences during the visit, while the second application already begins selecting preferences at the entrance via an interactive screen. No results are given about which procedure works best. There are, however, some aspects on which the two sources agree. They both state that the application does not only help the visitor but is also very useful for the museums themselves. By knowing the preferences of the visitors, museums get a better view of how to attract more visitors and are able to satisfy the needs of a diverse audience [2], [17].
Where Abdelrahman et al. [2] used the BCI system mainly at the entrance of the museum, Banzi and Fulgieri [18] used a BCI system at the end of a museum visit. They have researched the relation between the engagement level of visitors inside museums and previously received visual stimuli. The participants were divided into two groups: one group received specific visual stimuli before visiting the museum, while the other group did not. Both groups received the same questions afterward. While the participants answered the questions, their brainwaves were measured with the NeuroSky headset. Banzi and Fulgieri chose the NeuroSky headset over the Emotiv EPOC because of the ease of positioning the device [18]. For evaluating the brainwaves, multiple MATLAB functions were used. At the end of the research, Banzi and Fulgieri concluded that the visual stimuli worked: the EEG measurements of the group that received the visual stimuli showed higher attention and engagement levels for the questions that the stimuli were about, compared with the group that did not receive any visual stimuli before the visit [18].
2.3 Electroencephalogram (EEG)
As stated in a previous section (Section 2.2.1), EEG can be a direct non-invasive recording method [3].
However, more aspects of EEG need to be explored before it can be used for this research. For example, it is important to gain insight into the EEG measurements and the different available EEG systems. These two aspects are explained in this chapter.
2.3.1 EEG measurements
EEG measurements are objective measurements, meaning that the measurements are based on how people perform a task [19]. Fahrenfort [19] states that this is irrespective of what they experience while performing the task. So, evaluating an interaction by asking users about their experience is a subjective measurement, while measuring or observing the interaction is an objective measurement. Therefore, EEG devices deliver objective measurements.
There are multiple reasons why researchers argue that EEG is a good objective method for measuring emotions. To start, Liu & Fu [20] state that consciously manipulating the signals that EEG detects is almost impossible, and Yang et al. [21] and Bidgoly et al. [22] agree. Additionally, they all argue that in comparison to other recognition techniques, such as facial expression, speech recognition, or body posture, EEG is the most reliable. Consider, for example, a 'poker face', in which almost no emotion can be seen. EEG is able to recognize the real inner emotional state of the participant [23] and can thereby reveal whether the participant with a 'poker face' is, for example, actually sad. Furthermore, there are EEG systems that are inexpensive, portable, non-invasive, and easy to use [16], [24]. In addition, EEG can deliver a relatively high resolution, which makes it useful for measuring emotions [16]. Therefore, EEG systems are able to give a good and objective emotional observation.
Unfortunately, there are also some challenges when working with EEG. Understanding EEG measurements is not easy; the inspection of brain signals requires practice and expertise [24]. Furthermore, inaccuracies can occur due to artifacts. According to Kim & Andre [25], noise due to misplaced electrodes or not fully functional wires is one of the biggest artifacts when conducting EEG measurements. Other artifacts that can occur during EEG measurements are motion artifacts and muscle activity artifacts [26]. Bidgoly et al. [22] argue that wet electrodes, in contrast to dry electrodes, are harder to attach, but also less sensitive to motion and muscle artifacts. To conclude, the challenges of working with EEG should be taken into account, and solutions should be explored further when own measurements are done.
2.3.2 EEG systems
An EEG system is able to detect brain activity, but EEG systems differ in many ways. One of those differences is the way electrodes are placed on the human scalp, which is explained in the following paragraphs.
Several approaches to placing EEG electrodes on a human scalp when measuring neural activity have been proposed. The first option is a standardized, international system: the 10-20 system [23], [22]. Suhaimi et al. [23] and Bidgoly et al. [22] both state that these two numbers refer to the distances between the electrodes on the skull, placed at respectively 10% and 20% of the total longitude and latitude (see Figure 6). The second option is to expand the 10-20 system to a higher number of electrodes, up to a maximum of 256. Abdulkader et al. [16] argue that this is done to increase the resolution and the signal localization. Additionally, research by Suhaimi et al. [23] points out that a lower-performance EEG device with more electrodes can outperform a medical-grade EEG system with fewer electrodes. Therefore, the number of electrodes is connected to the performance and accuracy of the EEG measurements [23]. This is important information for this research because it can influence the choice between different EEG systems.
Figure 6: The 10-20 electrode EEG method [23]
Besides placing electrodes on the scalp, there are other ways of detecting brainwaves with EEG. One of those methods is in-ear recording via a device that fits in the ear like a music earpiece. These systems offer advantages like fixed electrode positions, user comfort, and ease of use [27], and transmit data wirelessly via Bluetooth [27]. Unfortunately, this sensor is not available at the University of Twente. EEG systems that are available at the University of Twente are the Emotiv EPOC and the Muse headband, which are further explained in the following paragraphs.
The Emotiv EPOC is a cost-effective mobile EEG device with 16 sensors, which means it can measure the user's brainwaves at 16 different places (see Figure 7) [28]. These places all have their own name, consisting of one or two letters and a number, as can be seen in Figure 8. The electrodes of the Emotiv EPOC need to be moistened to measure with better accuracy. After the electrodes are made wet, the headset is easily put on the user's head and thereby only needs, on average, five minutes of setup time [28].
Figure 7: The Emotiv headset [28] Figure 8: The Emotiv headset electrode names [28]
The Emotiv transmits its data via Bluetooth [28]. The output data can be viewed in the Emotiv Pro app on a laptop, which displays the raw data obtained by the electrodes. The data can be viewed individually, divided into different frequency bands, or as performance metrics (engagement level, focus, excitement, and stress). The output of raw individual data can be seen in Figure 9.
Figure 9: Raw data of all electrodes from the Emotiv displayed in the Emotiv app [28]
The next EEG headset is the Muse headband. The Muse is a multi-feedback EEG device built to support meditation [29]. The Muse headband transforms brain signals into real-time feedback, which can be seen in the Muse app on a smartphone. Real-time feedback is possible because the obtained data can be streamed directly via Bluetooth to other devices. The Muse has seven sensors in total: two on the forehead, two behind the ears, and three reference sensors [29].
According to Krigolson et al. [30], these sensors on the forehead and behind the ears are analogous to electrodes located at AF7, AF8, TP9, and TP10. The headband is easily put on the head and the electrodes do not need to be wet [29]. Figure 10 displays the first Muse headband. However, a newer version is already available: the Muse 2. The Muse 2 looks a lot like the first Muse headband but has extra sensors and features. The new properties provide more real-time feedback to the user, for example on body movement, heart rate, and breathing [29].
2.3.2.1 Comparison of Emotiv EPOC and Muse headband
To be able to answer the (sub) research questions, it is important to compare the features of the Emotiv EPOC and the Muse headband, which is done in this section. The first advantage of both the Emotiv EPOC and the Muse headset is that they have wireless data transmission, which makes the devices flexible and portable [23].
The Emotiv and the Muse are both able to measure brainwaves and classify emotions afterward. However, there are some significant differences between the two systems. First of all, the Emotiv has 16 sensors that measure brain activity [28], whereas the Muse headband only has seven [29]. Since the accuracy of the measured brain activity is related to the number of sensors [23], the Emotiv offers higher accuracy.
Another important factor is ease of use. The sensors of the Emotiv are very sensitive [28], which means that it requires precision to correctly set up the Emotiv, especially if the participant has longer or thicker hair. Additionally, the electrodes of the Emotiv must be made wet before usage [28]. This improves the ability to detect brain activity; however, it is not very practical inside a museum. The Muse, on the other hand, is very easy to set up and its electrodes do not need to be wet, which could make the Muse more practical inside a museum.
The next factor that may differ between the Muse and the Emotiv is the number of artifacts. This may differ because the two systems have a different number of sensors and a different way of placing electrodes on the head. Exact figures about this are not given in the literature, which makes it harder to compare the two. Luckily, as stated in the previous section, computer algorithms and filters are able to remove artifacts from the signal. However, whether these filters are sufficient to also remove the movement artifacts that occur when walking through a museum still needs to be explored. To conclude, to be able to make concrete statements about this factor, testing needs to be done.
The last factor is the connection with a computer. Both the Muse and the Emotiv are wireless and transfer data via Bluetooth. The Emotiv can easily be connected with the Emotiv Pro app on a laptop, where the acquired brainwaves can be visualised and the data can be saved and shared.
Figure 10: The MUSE headband with specifications [30]
The Muse is a little more difficult to connect. A separate app called BlueMuse [31] has to be downloaded in order to extract the raw data. This app does not display the collected brainwaves, but is able to send the data to a MATLAB or Python script, which can then display them.
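In practice, BlueMuse publishes the Muse data as a Lab Streaming Layer (LSL) stream, which a Python script can read with the `pylsl` package. The sketch below assumes `pylsl` is installed and a BlueMuse stream of type 'EEG' is running; the helper `chunk_to_array` simply reshapes the pulled samples for further processing:

```python
import numpy as np

def chunk_to_array(chunk):
    """Turn a list of LSL samples (one list of floats per sample)
    into a (samples x channels) numpy array for further processing."""
    return np.asarray(chunk, dtype=float)

def stream_muse(seconds=5):
    """Pull a chunk of Muse EEG samples from the BlueMuse LSL stream.

    Requires the `pylsl` package and a running BlueMuse stream.
    """
    from pylsl import StreamInlet, resolve_stream
    streams = resolve_stream('type', 'EEG')   # find the Muse EEG stream
    inlet = StreamInlet(streams[0])
    chunk, timestamps = inlet.pull_chunk(timeout=seconds)
    return chunk_to_array(chunk), timestamps
```

Once the samples are in a numpy array, they can be filtered and fed into the feature extraction steps described in Section 2.4.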
2.4 Emotion recognition
Emotions play an important role in the daily communication of human beings. Specifically, emotions affect decision-making and human interaction [23]. Additionally, recognizing and managing emotions is one of the most important skills one can have [23]. Human beings can recognize emotions with little or no effort; developing a computer with this same skill, however, is rather difficult [32]. With the growing interest in interaction technology between humans and computers and the quick increase in the use of smart technologies, there is a great need for emotion recognition software, according to Zhang et al. [33]. This software is most useful in particular sectors such as robotics, marketing, education, health care, and entertainment [32]. When machines integrated with emotion recognition software are used in society, productivity and experience could be improved and costs could be reduced in many ways [23]. For this research, it is interesting to explore emotion recognition software, as it could help improve the experience and engagement inside a museum. Thus, methods should be explored to develop emotion recognition software.
Therefore, the purpose of the following sections is to review EEG-based emotion recognition methods.
The first section is relatively short and contains different emotion classification models. The second and largest section of this chapter focuses on EEG emotion recognition methods; a broad variety of EEG methods is presented and explained. The third section includes some extra information about the Emotiv and Muse headsets in connection with emotion recognition methods. This section is included because these specific devices are available at the University of Twente.
2.4.1 Emotion classification models
This section explains two main emotion classification models. These models are important for this research because they are the basis of the emotion classification process and thereby decide how and when emotions fall into a certain category. The two main models are the basic emotion-based model and the dimensional-based model.
Firstly, the basic emotion-based model has multiple versions. According to Ekman's model, there are six basic emotions, and other emotions are reactions to or byproducts of the basic emotions. These basic emotions are happiness, sadness, anger, fear, surprise, and disgust [23], [33], [34]. A second version of the basic emotion-based model is presented by Dzedzickis et al. [32]. They present ten basic emotions (anger, anticipation, disgust, fear, happiness, joy, love, sadness, surprise, and trust) with an additional 56 secondary emotions that are less intense versions of the ten basic ones. Although the two models do not completely agree on the basic emotions, they share the principle of several main emotions supplemented by secondary emotions. However, according to Kim & Andre [25], there is a problem with the basic emotion-based model: humans can have difficulty expressing emotions in words, and the labels can be too restrictive and culturally dependent. These limitations do not occur in the second emotion classification model, which is based on dimensions.
The second model classifies emotions dimensionally, which means the classification is based on positions in a 2-D spatial model [23], [20], [25]. The vertical axis measures the level of arousal, which represents the degree of excitement and should be measured in the prefrontal cortex [35]. The horizontal axis measures the level of valence, which reflects a scale from unpleasant to pleasant (or negative to positive), as can be seen in Figure 11. To conclude, the two classification models are both able to classify all emotions. Nonetheless, the dimensional-based model could be considered more useful, because it may not have the limitations of being too restrictive or culturally dependent [25].
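The dimensional model lends itself to a very simple rule-based mapping: once valence and arousal scores are available, the quadrant of the 2-D plane already suggests a coarse emotion label. The labels below are illustrative groupings in the spirit of Figure 11, not an exact reproduction of it:

```python
def quadrant_emotion(valence, arousal):
    """Map a (valence, arousal) point in [-1, 1] x [-1, 1] to a coarse
    quadrant label of the dimensional model. The labels are illustrative
    groupings, not taken verbatim from the literature."""
    if valence >= 0 and arousal >= 0:
        return "happy/excited"     # pleasant, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/afraid"      # unpleasant, high arousal
    if valence < 0:
        return "sad/bored"         # unpleasant, low arousal
    return "calm/relaxed"          # pleasant, low arousal
```

Finer-grained labels can be obtained by measuring the angle and distance of the point from the origin instead of only the quadrant, but the quadrant rule already avoids the restrictive word labels of the basic-emotion model.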
Figure 11: Basic emotions on the dimensional model [20]
2.4.2 EEG emotion recognition methods
Emotion recognition can be divided into two categories: recognition based on non-physiological signals and recognition based on physiological signals [21]. Non-physiological signals are, for example, voice tone, body language, and facial expressions. Physiological signals are, for example, brainwaves. This means that emotions are not only visible from the outside; neural activity can also reflect them. Many studies in the neurophysiological field have reported a strong correlation between electroencephalogram (EEG) measurements and emotions [23]. In this section, EEG emotion recognition methods are explained.
The EEG emotion recognition process can start after the brainwave measurements are done. Classic EEG emotion recognition methods consist of four steps: pre-processing, feature extraction, feature selection, and classification [3], [25], [36]. An overview of these steps can be found in Figure 12. Pre-processing is displayed as the first part of feature extraction and is thereby grouped with it. Additionally, the last two steps (feature selection and classification) are also displayed in the same group. The reason behind this is explained in the coming paragraphs.
The first step is pre-processing, which focuses mostly on artifact removal. This can be done in two ways: filtering and regression, or separation/decomposition [26]. Pre-processing with filtering is supported by Nakisa et al. [36]; they use a band-pass Butterworth filter to isolate the emotion-related frequency bands. The same filter is used by Konston et al. [11] to remove DC shifts. Furthermore, Konston et al. [11] have used a correlation method to identify poorly performing channels.
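A band-pass Butterworth filter of the kind Nakisa et al. describe can be sketched with `scipy.signal`; the 1-45 Hz cut-offs and filter order here are typical illustrative values, not the exact settings from the cited studies:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low=1.0, high=45.0, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel.

    Attenuates DC shifts (below `low`) and high-frequency noise such
    as mains interference (above `high`). The cut-offs are typical
    illustrative values, not those of the cited studies.
    """
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, signal)   # filtfilt avoids phase distortion
```

Applying the filter forward and backward with `filtfilt` keeps the phase of the brainwaves intact, which matters when later feature extraction relies on timing information.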
Figure 12: Classic emotion recognition method steps: [3]
The second step in emotion recognition is feature extraction, which covers three domains: the time domain, the frequency domain, and the time-frequency domain. The goal of time-domain feature extraction is to extract statistical parameters of the EEG signals as EEG features [20]. There are multiple statistical methods to extract those features: mean, standard deviation, first-order difference mean, and second-order difference mean [20]. Moreover, Nakisa et al. [36] add the normalized first difference and normalized second difference to these statistical methods and argue that all the methods together easily classify basic emotions like joy, fear, and sadness. The second domain is the frequency domain. Frequencies and amplitudes of EEG signals can provide clues about the physical or mental state of the person [22]. Different frequency (band) ranges reflect different emotional states; Figure 13 links these emotional states to their frequency ranges.
Figure 13: Emotional states linked to their frequency ranges [12].
According to Liu and Fu [20], the frequency domain reflects human emotions better than the time domain, because a change in a human emotional state is directly visible in the changes in amplitude and power of the frequencies. Nakisa et al. [36] also agree that the frequency domain is more effective than the time domain. The last domain is the time-frequency domain. This domain captures both nonstationary and time-varying signals, which can add valuable additional information [36]. According to Islam et al. [26], the time-frequency domain is popular because it can capture any change in frequency values, due to for example artifacts, in any window of time. Additionally, there are tools that can help find the right balance between time and frequency resolution; for example, Liu and Fu [20] used the Wavelet transform for this.
Overall, if a domain has to be chosen, the time-frequency domain includes basic functions of both other domains and is thereby the most useful [26], [36].
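The time-domain statistics and frequency-band powers described above can be computed with plain numpy. This is a minimal sketch, not a complete pipeline; the band edges are the commonly used ones and may differ slightly from those in Figure 13:

```python
import numpy as np

# Commonly used EEG band edges in Hz (may differ slightly per source).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def time_features(x):
    """Statistical time-domain features named by Liu & Fu and Nakisa et al."""
    d1 = np.abs(np.diff(x))          # first-order difference
    d2 = np.abs(np.diff(x, n=2))     # second-order difference
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "mean_d1": np.mean(d1),
        "mean_d2": np.mean(d2),
        "norm_d1": np.mean(d1) / np.std(x),   # normalized first difference
        "norm_d2": np.mean(d2) / np.std(x),   # normalized second difference
    }

def band_powers(x, fs, bands=BANDS):
    """Frequency-domain features: mean spectral power per frequency band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return {name: spec[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

Concatenating both dictionaries per channel yields the feature vector that the selection and classification steps discussed next operate on.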
The last two steps of emotion recognition are feature selection and classification. Many methods combine these two steps; only Nakisa et al. [36] clearly distinguish them. Nakisa et al. [36] use an evolutionary computation algorithm to select the most relevant features from the EEG signal before moving on to the classification of emotions. This method delivers very promising results. However, it does require the two separate steps, whereas the Bayes classifier, a weighted-log-posterior function, can do these steps simultaneously [37]. This method can reach an accuracy level of 70.9% [37]. Another classification method is the Support Vector Machine (SVM). According to Bidgoly et al. [22], this method uses a hyper-plane to separate the features into different binary classes and can be used in the time and frequency domains. The SVM can be trained for better emotion recognition results in terms of accuracy rate [20]. Lastly, the Hidden Markov Model (HMM) can be used for signals in the time-frequency domain. This model can be defined as a set of hidden states with probabilities from state to state [22]. To conclude, the best fit for feature selection and classification depends on the domain (time, frequency, or time-frequency) of the feature extraction.
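To make the hyper-plane idea behind the SVM concrete, the sketch below fits a separating hyper-plane with the simpler perceptron rule. This is a deliberate stand-in: a real emotion classifier would use a trained SVM implementation (e.g. scikit-learn's `SVC`) on the extracted EEG features, which additionally maximizes the margin between the classes:

```python
import numpy as np

def train_hyperplane(X, y, lr=0.1, epochs=100):
    """Fit a separating hyper-plane w.x + b = 0 with the perceptron rule.

    Illustrates binary separation by a hyper-plane; labels y must be
    -1 or +1. A stand-in for an actual SVM, which would also maximize
    the margin around the hyper-plane.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified: update
                w += lr * yi * xi
                b += lr * yi
    return w, b

def predict(X, w, b):
    """Classify feature vectors by which side of the hyper-plane they fall on."""
    return np.sign(X @ w + b)
```

With two emotion classes encoded as -1 and +1 and rows of `X` holding the per-trial EEG features, the same interface carries over directly to a proper SVM.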
Lastly, Nam et al. [3] have made a practical overview of emotion recognition methods, which can be seen in Figure 14. The four steps mentioned above are placed in the middle part of the overview, under signal processing. Additionally, Nam et al. [3] have inserted signal acquisition before signal processing and feedback after signal processing.
Figure 14: Brain-Computer Interfaces overview and order of method [3]