SPATIAL COGNITION IN SYNTHETIC ENVIRONMENTS

DISSERTATION

to obtain the degree of doctor at the University of Twente, on the authority of the rector magnificus, prof. dr. H. Brinksma, on account of the decision of the graduation committee, to be publicly defended on Friday the 27th of May 2011 at 12.45

by

Frank Meijer

born on the 21st of November 1978 in Alkmaar, The Netherlands


This dissertation has been approved by the promotor: Prof. dr. ing. W.B. Verwey

and the assistant promotor: Dr. E.L. van den Broek

ISBN: 978-90-365-3178-8

© 2011, Frank Meijer. All rights reserved.

Cover: Frank Meijer


Doctoral committee

Chair: Prof. dr. H. W. A. M. Coonen
Promotor: Prof. dr. ing. W. B. Verwey
Assistant promotor: Dr. E. L. Van den Broek
Members: Prof. dr. H. De Ridder, Prof. dr. A. O. Eggers, Prof. dr. M. A. Neerincx, Prof. dr. J. M. C. Schraagen, Prof. dr. ir. F. J. A. M. Van Houten, Dr. J. S. M. Vergeest

The author gratefully acknowledges the support of the Dutch Innovation Oriented Research Program “Integrated Product Creation and Realization (IOP-IPCR)” of the Dutch Ministry of Economic Affairs.


TABLE OF CONTENTS

1. General Introduction  1

Part I: Basic studies
2. Representing 3D virtual objects: Interaction between visuo-spatial ability and type of exploration  33
3. Active exploration improves the quality of the representation of virtual 3D objects in visual memory  53

Part II: Case studies
4. Navigating through virtual environments: Visual realism improves spatial cognition  87
5. Synthetic environments as a visualization method for product design  101

6. Conclusions  123
Summary  130
Nederlandstalige samenvatting  132
Dankwoord  135


Chapter 1


INTRODUCTION

One of the many challenges a design team faces during the design process of products is reaching consensus. Often, designers not only have to reach agreement amongst themselves on product specifications, they also have to take into account other stakeholders who are involved in the design process, such as managers, customers, and end-users (Lawson, 2005). This is not an easy task because different stakeholders often have different ideas about product design. For example, product designers emphasize the importance of creativity, managers strive for efficiency of the production process and marketability, and end-users expect a maximum of user comfort in the design. The different perspectives on how a product should be designed can lead to miscommunication among these stakeholders, thus obstructing the design process. Not surprisingly, much effort is put into developing new technologies to support communication between stakeholders and to test the effectiveness of different product ideas (Wang et al., 2002).

In the past decades, Virtual Reality (VR) has become a powerful tool to support communication and collaboration between the different stakeholders in the design process (Bruno & Muzzupappa, 2010). VR refers to a computer-mediated environment that is presented in such a way that people have the illusion of participating in it rather than observing it externally (Earnshaw, Gigante, & Jones, 1993). The term Virtual Environment (VE) is often used interchangeably with VR and refers to the same concept. VR is already applied in a large number of professional fields, such as medicine, the military, and product design (Burdea & Coiffet, 2003). In these professional fields, VR is mostly used to support education, improve the training of skills, and enhance human communication. For product design, a product can be simulated in VR so that it can be evaluated. Stakeholders are able to perform realistic interactions with a virtual product, which will not only increase their insight into the product's practical options and limitations, but also stimulate them to discuss their ideas with each other (Landman, Van den Broek, & Gieskes, 2009). This will improve the effectiveness of the selected design solutions and eventually increase the final product's chance of success when it enters the market.


The current dissertation investigates the influence of VR on the stakeholders' understanding of design issues. The goal of the present research is to determine the constraints for an efficient and effective use of VR for product design. For this purpose, the effect of various aspects of VR on human cognitive processing is investigated. In particular, the influence of two typical aspects of VR on cognition will be the topic of research: interaction and visualization. This introductory chapter will first introduce the problem and solution space paradigm in product design and explain how VR can aid this process. Next, the concept of Synthetic Environment (SE) will be introduced and its application to product design will be explained. Subsequently, important factors that designers have to take into account when using an SE for product design will be discussed. Next, the importance of human cognitive processing for the implementation of SEs is explained. Finally, an overview of the remainder of the dissertation will be provided.

PROBLEMS AND SOLUTIONS IN PRODUCT DESIGN

Product design is about solving problems. Whether the problem is to find the right materials or to invent a smart mechanism, the designer’s task is to come up with clever solutions. These design problems are often roughly described at the beginning of a product’s design process. As the design process advances these problems become more precise, alter, and may lead to new problems. Therefore, designers constantly have to redefine the design problems and come up with new solutions during a design process. This section will briefly describe the different stages of product design and will focus on the process of solving design problems. Subsequently, a possible role of VR in the design process will be discussed.

In general, product design processes are divided into several design stages, in which designers perform a number of design activities (Cross, 2000). These stages of product design are illustrated in the model of French (1985), shown in Figure 1. The first design activity is to analyze the problems that have to be solved to fulfill specific needs from the market. Analyzing the design problems is an important part of the overall design process, which results in a formal statement of the design problem. According to French, this statement comprises three elements: a description of the design problems, the limitations of the design solutions, and the quality of the product's design to be achieved. These elements determine the goals, the constraints, and the criteria to be worked with in the design process. The second design activity is to generate solutions in the form of concepts on the basis of the stated design problem. This is referred to as conceptual design. During conceptual design, different types of knowledge are brought together, such as engineering science, practical knowledge, production methods, and commercial aspects. In this phase there is feedback to the problem analysis phase to evaluate to what extent a concept actually solves the initial design problems. The third activity is to work out the concept in greater detail and, if there is more than one concept, to make a final choice between them. This is referred to as the embodiment of concepts. In this phase there is feedback to the conceptual design phase to evaluate whether concepts are effective and feasible. The last design activity is to decide about the remaining number of small but essential design issues. This is referred to as detailing of the design. The final output of the design process is a technical drawing or CAD (i.e., Computer Aided Design) model, which is used for manufacturing the product.


Figure 1. French's model of the design process (1985). Design activities are illustrated in the square boxes, whereas the rounded boxes depict the design stages.

For an effective and efficient design process, it is essential that the designers do not overlook important aspects of the design problems and that they find suitable solutions (Cross, 2000). There are a number of prescriptive models that describe a systematic procedure to find design solutions. In general, this procedure starts with analyzing and understanding the overall design problem as fully as possible. Next, the problem is divided into sub-problems, to which suitable solutions have to be found. Finally, the various solutions found are combined into an overall solution. Although this suggests that there is a logical progression from problem to sub-problems and from sub-solutions to solution, Cross (2000) argued that there is also a direct relationship between the overall problem and solution. Problems and solutions are explored and developed concurrently. This is illustrated in Figure 2.

Figure 2. The symmetrical problem/solution model of Cross (2000).

The interdependency between design problems and solutions is also described by Maher and Poon (1996), who stressed the development of problem and solution over time. According to Maher and Poon, product design is divided into two separate "spaces": the problem space and the solution space. Problem space refers to the potential design problems in a design process as defined by the designers, who are creating a new product. Solution space refers to the potential solutions that designers generate to overcome these problems. Usually, designers jump between problem space and solution space more than once, because a certain design solution refocuses the original problem such that new solutions are often needed. In other words, design problems and their solutions evolve during product design. This process is referred to as co-evolution of problem-solution. Figure 3 shows the co-evolution model of Maher and Poon. In this model, the downward arrows represent the process of finding solutions to certain design problems at different moments in time. The upward arrow represents the process of refocusing the original design problem when a solution is found. The horizontal arrows represent the evolutionary process of the design problems and solutions over time.

Figure 3. The co-evolution model of Maher and Poon (1996) illustrating the relationship between problem and solution space in the design process. Note that (t) stands for a given moment in time, (t+1) for a later moment.

The process of solving design problems is particularly difficult when designers have to develop complex products, operate in multi-disciplinary design teams, or experience extreme pressure to create products within a limited time (Cross, 2000). Various methods are available to improve the designers' ability to analyze design problems and to find suitable solutions. One method that has proved to be helpful is the use of design scenarios; i.e., explicit descriptions of hypothetical events concerning a product during a certain phase of its lifecycle (Tideman, Van der Voort, & Van Houten, 2008). A design scenario can be a written or spoken storyline, but it may also be visualized by, for example, images, movies, or animations. In addition, a scenario can contain product prototypes with which people can interact in simulated environments. These prototypes and environments can be physical, virtual, or augmented (i.e., a combination of both physical and virtual). Design scenarios enable designers to explore all kinds of practical design issues and test the effectiveness of the solutions found, which further increases the designers' ability to uncover new design problems and find more suitable solutions (Tideman, Van der Voort, & Van Houten, 2008). With the recent technological development of computer systems, designers are able to implement scenarios in VEs so that prototypes can be tested in their future context in the earliest stages of the design process. The next section will further discuss how design problems and solutions can be made explicit by using VR technology.

SYNTHETIC ENVIRONMENTS

Currently, companies operate in an increasingly demanding environment to create new products (Burdea & Coiffet, 2003). The demand for innovation has increased and the average life cycle of products has decreased. Furthermore, products have become more and more complex and an increasingly large number of people are involved in designing and building them. Therefore, companies feel the necessity to use new methods and technologies to make their product design processes more effective and efficient. One technology that has proved to be particularly useful for product design is VR (Chitescu et al., 2003). VR can be used throughout the entire product design process. A number of studies showed a successful application of VR to rapidly test prototypes, support production planning, and evaluate the effectiveness of manufacturing processes (e.g., Dangelmaier et al., 2005; Mujber, Szecsi, & Hashmi, 2004; Weyrich & Drews, 1999). Until recently, VR typically demanded expensive technology and extensive human expertise (Burdea & Coiffet, 2003). As a result, the use of VR was mostly confined to larger companies that could afford this type of technology and expertise. With current technological developments, however, VR has come within reach of small and medium-sized companies as well. In this dissertation, a specific use of VR for product design is proposed: VR that uses affordable technology and that is applied in the earliest stages of the product design process. This specific use of VR will be referred to as the use of Synthetic Environments (SEs).

In the literature, SEs are generally defined as artificial environments in which people can interact with (virtual) prototypes as if they were in a real environment (Innocentit & Pollinit, 1999; Kalawsky, 1999; Lu & Conger, 2007; Ma & Kaber, 2006; Ressler et al., 2001; Rudolph, 2007). Often, SEs are realized by VR technology. However, an SE may contain not only virtual elements, but physical elements as well (Wang et al., 2008). In this dissertation, the term SE will be used specifically in relation to the first stages of product design: problem analysis and conceptual design. In an SE, a product prototype can be simulated together with its possible future environment. Accordingly, a scenario can be created in which designers can test their concepts on, for example, usability issues. Figure 4 shows a model that illustrates how design problems and solutions are applied to SEs. After a design team has defined the initial design problems, these are applied to an environment (either physical or virtual). This process is represented in the model by the left downward arrow. Next, when design solutions are found, these are applied to a prototype (physical or virtual). This process is represented in the model by the right downward arrow. After the SE is realized, designers are able to test the prototype in an environment. The experience in the SE is then used to refocus the original design problems. This is represented in the model by the upward arrow. Because designers have experienced the practical implications of their design solutions, they increase their ability to refocus design problems. Applying problems and solutions to SEs is an iterative process, so that the SE is adjusted repeatedly to the designers' needs and interests as the design process advances. This process is represented in the model by the curved arrows.


Figure 4. Adaptation of the co-evolution model of Maher and Poon (1996), in which design problems and solutions are implemented in an SE.

Using SEs not only supports designers in refocusing design problems; there are a number of other benefits as well. First, making the problems and their solutions explicit in SEs allows all stakeholders in a design process to experience a virtual prototype's practical use in a similar manner. This experience stimulates the stakeholders to form similar ideas about the product despite their different backgrounds. In other words, SEs help stakeholders to form shared mental models (Landman, Van den Broek, & Gieskes, 2009). A shared mental model prevents miscommunication and motivates stakeholders to discuss the product's potential with each other. This will increase the number of creative solutions and, eventually, lead to a better product. Second, SEs make it possible to involve end-users at an early stage of the design process when important decisions still have to be made (i.e., the fuzzy front end of design; Smulders, Van den Broek, & Van der Voort, 2007). Usually, the design team invites end-users in the final stages of the design process to test the usability of a product. At that moment, most of the product's design is almost definite and it is difficult to change the design according to the end-users' feedback. By proactively involving the end-users in the design process it is more likely that the final product will correspond to their expectations and needs (Tideman, Van der Voort, & Van Houten, 2008). Third, SEs enable designers to make changes in their design that can immediately be evaluated. Design modification tools can be implemented in SEs, with which designers can intuitively change a product's design and immediately test its effectiveness (Wang et al., 2008). This will increase the number of iterations in the design process and decrease the time needed to create effective product solutions.

In conclusion, an SE can improve the design process because it supports designers in refocusing design problems, it increases communication between the stakeholders, it involves end-users at an early stage, and it facilitates fast design modifications. After a design team decides to use an SE in their design process, the next step is to actually implement the SE. There are various important factors that designers have to take into account when implementing an SE for product design. These factors will be discussed below.

IMPLEMENTATION OF SYNTHETIC ENVIRONMENTS

When a design team decides to use an SE, the main question they will probably ask is how to implement it. There are a number of factors that designers have to take into account when implementing an SE, such as the technology, the time, and the budget available to develop it. Another important factor is which people will interact with the SE; i.e., the users of the SE. Designers can involve different types of users, such as mechanical engineers, artists, managers, and product end-users. It makes a large difference what kind of user will interact with the SE. For instance, end-users will identify other design problems than expert-designers: the first group will probably identify usability issues, whereas the latter will identify mechanical issues. These user groups require different types of SEs. End-users probably need an SE that contains a high level of realism, whereas expert-designers need an SE that is easily changed according to their design solutions. Thus, the potential users play an essential role in the implementation of an SE. When an SE is used, the designers' set of tasks in a design process changes (Tideman, Van der Voort, & Van Houten, 2008). With SEs, designers acquire a number of additional tasks.

The first two tasks concern the implementation and evaluation of the SE. After the product designers have specified their needs and wishes, the SE developers start implementing these in the SE. Based on the requirements of the product designers and the restrictions of the design process, they implement a) the virtual aspects, such as 3D models, b) the physical aspects, such as display and interaction devices, and c) the simulation model, such as programming code that describes how the SE reacts to the users’ interactions. During the design process, the design team repeatedly evaluates whether the design problems and solutions are sufficiently represented in the SE. If design problems remain unresolved or new ones arise, then the SE needs to be redefined and the technical configuration of the SE modified. Furthermore, these evaluations help the design team to improve the quality of the SE as the design process advances.
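To give a concrete impression of what such a simulation model may look like, the Python sketch below shows a minimal polling loop in which the SE reacts to user interactions by updating the state of a virtual prototype. The class and function names, the single "toggle_door" command, and the 60 Hz update rate are illustrative assumptions, not part of any particular SE toolkit.

```python
# A minimal, hedged sketch of a simulation model: the loop polls the
# interaction devices and updates the state of a virtual prototype in
# response. All names, the "toggle_door" command, and the 60 Hz update
# rate are illustrative assumptions.
import time

class Prototype:
    def __init__(self):
        self.door_open = False

    def apply(self, command):
        # Map a user interaction onto a change in the prototype's state.
        if command == "toggle_door":
            self.door_open = not self.door_open

def poll_user_input():
    # Placeholder for reading the physical interaction devices
    # (mouse, tracker, haptic device); here no commands arrive.
    return []

def run_simulation(prototype, duration_s=1.0):
    step = 1.0 / 60.0                      # update the SE 60 times per second
    end_time = time.time() + duration_s
    while time.time() < end_time:
        for command in poll_user_input():  # react to the users' interactions
            prototype.apply(command)
        time.sleep(step)                   # rendering/updating would happen here

run_simulation(Prototype())
```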

The next two tasks of the design team concern the supervision and evaluation of the users of the SE. A design team starts by determining what kind of user to involve. To select the users, designers look at the type of design problems they want to solve. For example, if a design team wants to assess the usability of a product, they should invite end-users for a test session in an SE. However, if they want to test whether the product is safe, they should invite safety experts. When the users are selected, designers decide how to supervise them during the test session. Supervising the users is important to ensure that they provide useful feedback about the product's design. Before a test session starts in an SE, designers give instructions about the purpose of the SE and about the users' tasks. During the session, feedback may be given about the users' actions, and discussions may be held afterwards about the users' experience with the SE. Furthermore, the personal characteristics of the users, such as gender, age, nationality, or computer experience, potentially have a large influence on the interactions with the SE (for a review, see Chen, Czerwinski, & Macredie, 2000). Designers need to take these characteristics into account to be able to understand the users' interactions more accurately.

The last task of a design team is to evaluate the users' interactions with the SE. This evaluation is used to assess the effectiveness of the designers' solutions and to refocus the design problems. The evaluation of user interactions can be done with various methods (for an overview, see Preece, Rogers, & Sharp, 2004). One method the designers can use is to ask the users directly for their subjective experience with the product. This can be done with an interview or a questionnaire. Both of these measures provide the designer with qualitative information about the product. Another method is to observe the users by documenting or recording their actions. This method is more laborious, but provides rich information about the users' interactions. Finally, feedback can be acquired by measuring task performance in an experimental setting (Loomis, Blascovich, & Beall, 1999). With this method, the SE becomes an experimental environment. The feedback gathered, such as time to complete a task or accuracy rate, provides the designer with quantitative information. This type of information is precise and is less affected by the fact that people's actual behavior may differ from the opinions they report. Designers select one method or a combination of these methods depending not only on their needs, but also on the time, expertise, and technology available in the design process.
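To make the last method concrete, the hedged Python sketch below logs completion time and accuracy for a set of tasks in an SE. The task names and the simulated "correct" outcome are hypothetical placeholders, not part of any established evaluation tool.

```python
# Hedged sketch of gathering quantitative feedback in an SE: task
# completion time and accuracy are logged per task. The task names and
# the simulated "correct" outcome are hypothetical placeholders.
import time

def run_task(task_name):
    # In a real SE this would block until the user finishes the task in
    # the environment and return whether the outcome was correct.
    return True

def evaluate_session(tasks):
    records = []
    for task in tasks:
        start = time.perf_counter()
        correct = run_task(task)
        records.append((task, time.perf_counter() - start, correct))
    accuracy = sum(1 for _, _, c in records if c) / len(records)
    mean_time = sum(t for _, t, _ in records) / len(records)
    return {"accuracy": accuracy, "mean_completion_time_s": mean_time}

print(evaluate_session(["locate_product", "open_galley_door"]))
```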

In summary, designers implement the technical configuration of the SE, evaluate it, supervise potential users in the SE, evaluate their characteristics, and evaluate the users' interactions with the SE. It is essential that the technology of an SE matches the users who interact with it. In case of a mismatch, users will experience their interactions with an SE as unrealistic. This could impair the quality of their feedback about the product's design. To facilitate effective and realistic user interactions, designers are required to understand the possibilities and limitations not only from a technological perspective but also from a user's perspective. Designers should understand the relationship between the technology used in the SE and the users' capacity to process the information presented by the SE. Below, we will focus on how users process information from their environment, referred to as human cognitive processing, and the implications it may have for the implementation of SEs.

A COGNITIVE APPROACH TO SYNTHETIC ENVIRONMENTS

To understand the effect of the technology used in an SE on the users' behavior and opinions, it is essential that designers understand how the users process information from their environment. There are various distinct cognitive processes that determine, for example, how people reason, remember, make decisions, and perform actions (Ashcraft & Radvansky, 2009). The most notable cognitive processes are visual, auditory, and haptic perception. However, processes like attention, memory, and spatial cognition are important as well. Each of these cognitive processes imposes constraints on the implementation of SEs. The current section will give a brief introduction to some prominent processes that are assumed to be relevant for the implementation of SEs. Furthermore, some notable constraints for SEs are discussed in relation to these cognitive processes.

Visual perception

The first stage of human information processing is the perception of visual, auditory, and/or haptic information. The most dominant of these perceptual systems is visual perception, which therefore imposes the most important constraints on the implementation of SEs. Because people normally rely so much on their ability to see, an accurate presentation of visual information is crucial for realistic user interactions in SEs. The human visual system is sensitive to small variations in color, shape, and lighting in the environment. Users experience an SE as more realistic when it is presented with more visual detail, in accordance with a natural environment. This is referred to as visual realism (Slater et al., 2009). Visual realism can be achieved by providing more detailed 3D models, textures, natural colors, shadows, and reflections (for further reading, see Möller, Haines, & Hoffman, 2008).

An important characteristic of visual perception is the ability to perceive depth. Depth perception arises from a variety of visual cues, which are generally divided into monocular and binocular cues. Monocular cues require the visual input from only one eye and are derived from parallel lines that converge towards the horizon, and from the decreasing motion, size, texture gradient, luminosity, contrast, saturation, and shadows of objects at greater distances (Wickens, 2003). Binocular cues provide depth information from using both eyes. On each retina an image is projected from a slightly different angle. If an object is far away, the disparity between these images is small; if it is close, the disparity is large. This disparity is used to estimate distances to objects in an environment. Binocular cues can be provided in the SE by using stereoscopic displays (for further reading, see Javidi, Okano, & Son, 2009). Depth perception improves the users' experience of realism in SEs (IJsselsteijn et al., 2001). Furthermore, binocular cues are mainly used to estimate distances inside the space directly surrounding the user, which is called personal space (Cutting & Vishton, 1995). Therefore, binocular cues are important when interacting with objects in the users' personal space is an essential part of the task.
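To make the relation between viewing distance and disparity concrete, the short Python sketch below computes the vergence angle subtended by an object at several distances. The 6.3 cm eye separation is an assumed average, and the calculation is purely illustrative rather than part of any SE toolkit.

```python
# Hedged sketch of the geometry behind binocular disparity: for an
# assumed eye separation of ~6.3 cm, the vergence angle subtended by an
# object shrinks as the object moves away, which is why binocular cues
# are most informative in the personal space close to the observer.
import math

EYE_SEPARATION_M = 0.063  # assumed average interocular distance

def vergence_angle_deg(distance_m):
    # Angle between the two lines of sight when both eyes fixate the object.
    return math.degrees(2.0 * math.atan((EYE_SEPARATION_M / 2.0) / distance_m))

for d in (0.5, 1.0, 2.0, 10.0):
    print("object at %4.1f m -> vergence angle %.2f deg" % (d, vergence_angle_deg(d)))
# Prints roughly 7.2 deg at 0.5 m but only about 0.36 deg at 10 m.
```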

Auditory perception

Another important sensory system is auditory perception: the sense of hearing. The auditory system identifies a sound by specific qualities of the sound waves and by how the sound waves develop over time (Wickens, 2003). Sounds can be implemented in an SE by using recorded sounds (i.e., sound samples), but can also be "engineered" with a synthesizer (for further reading, see Viers, 2008). The latter requires more expertise, but also provides more control over the specific characteristics of the sounds. In general, sounds provide users with feedback on their interactions in an SE, which makes these interactions more realistic (Kim, 2005). Furthermore, adding sounds to an SE increases the users' experience of realism (Stanney et al., 2003). Sounds that correspond with a natural environment will often further improve this experience. However, in some cases exaggeration of sound effects can also increase the users' subjective experience of realism in SEs (Cohen & Wenzel, 1995). This is widely applied by the film industry and its principles may well be adopted for the development of SEs. For example, exaggerating the intensity of low frequencies can increase the users' sensation of being physically close to large objects moving through the SE.
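As a minimal illustration of the synthesizer approach, and of exaggerating low frequencies, the Python sketch below generates a short engineered sound (assuming NumPy is available). The chosen frequencies, gains, and envelope are arbitrary assumptions.

```python
# Hedged sketch of "engineering" a sound instead of using a recorded
# sample: a short sound is built from two sine components, and the
# low-frequency component is deliberately exaggerated to suggest a large
# object passing close by. Frequencies and gains are arbitrary choices.
import numpy as np

SAMPLE_RATE = 44100          # samples per second
DURATION_S = 2.0

t = np.linspace(0.0, DURATION_S, int(SAMPLE_RATE * DURATION_S), endpoint=False)
rumble = 0.8 * np.sin(2 * np.pi * 40 * t)    # exaggerated 40 Hz component
tone = 0.2 * np.sin(2 * np.pi * 220 * t)     # softer mid-frequency component
envelope = np.exp(-1.5 * t)                  # simple decay over time
sound = envelope * (rumble + tone)
sound /= np.max(np.abs(sound))               # normalize to [-1, 1] before playback
# The resulting array could then be written to a WAV file or sent to an audio API.
```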

Haptic perception

A third important perceptual system is haptic perception. Haptic perception is important for people to receive feedback about their physical interactions with the immediate environment. Haptic perception includes proprioception and kinesthetics (Wickens, 2003). Proprioception refers to the sense of limb positions. Some differentiate the kinesthetic sense from proprioception by excluding the sense of equilibrium or balance from kinesthesia. Proprioceptive and kinesthetic feedback can be simulated in an SE by implementing haptic devices, such as a Phantom Device or Haptic Master (for further reading, see Burdea, 1996; McLaughlin, Hespanha, & Sukhatme, 2002). The use of these haptic devices is relatively expensive and the implementation of this type of feedback is complex. Therefore, designers may be tempted to ignore haptic feedback in the SE. However, the absence of haptic feedback can have a negative impact on the users' interactions with an SE (Robles-De-La-Torre, 2006). Providing haptic feedback in SEs can be important for a number of purposes. First, haptic feedback enables users to feel their physical movements and positions in relation to the SE, which facilitates more realistic interactions. Second, haptic feedback enables users to perform precise and fast movements, which is important for manipulating, for example, an object's shape in the SE. Third, haptic feedback enables users to learn motor skills in the SE, which is important for designers when they want to assess the users' capacity to learn how to operate products.

Multimodal perception

Multimodal perception occurs when perceived information from different modalities, such as visual, auditory, or haptic information, is integrated into a unified perceptual experience. Implementing and synchronizing different modalities in an SE requires additional effort, and often more than one computer is used to present the modalities (for further reading, see Popescu, Burdea, & Treffiz, 2002). One of the most important benefits of multimodal presentation of information in SEs is that it increases the users' experience of realism (Slater et al., 1999). However, if multimodal SEs are used, then the coherence between the modalities becomes crucial. When users detect a (e.g., temporal) mismatch between the modalities, they will experience the SE as less realistic (Stanney et al., 2003). A mismatch between the modalities can have serious implications for the users' interactions with an SE. For example, including large visual displays in SEs without providing realistic physical feedback can cause cyber-sickness. When users see that they are moving through an SE but do not feel it, they may experience dizziness, fatigue, or nausea (Benson, 2002; Bonato & Bubka, 2004). With large visual displays, the visual cues are stronger and the difference between what users see and feel increases.
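A simple way to guard against such temporal mismatches is to timestamp corresponding events in each modality and flag pairs that drift too far apart, as in the hedged Python sketch below. The 100 ms tolerance is an illustrative assumption, not an established perceptual threshold.

```python
# Hedged sketch of a coherence check between two modalities: events
# rendered on different computers are timestamped, and the offset between
# the visual and auditory presentation of the same event is compared
# against a tolerance. The 100 ms tolerance is an illustrative assumption.
TOLERANCE_S = 0.100

def check_sync(visual_timestamps, audio_timestamps):
    # Return the indices of events whose modalities drift apart too far.
    mismatched = []
    for i, (tv, ta) in enumerate(zip(visual_timestamps, audio_timestamps)):
        if abs(tv - ta) > TOLERANCE_S:
            mismatched.append(i)
    return mismatched

# Example: the third event's sound lags its visual counterpart by 180 ms.
print(check_sync([0.00, 1.00, 2.00], [0.02, 1.03, 2.18]))  # -> [2]
```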

Attention

Attention is the cognitive process of selectively concentrating on certain aspects of an environment while ignoring other aspects. Attention plays a central role in human cognition, controlling what people see, hear, or feel, and what they remember (for a review, see Pashler, 1995). Attention is typically divided into bottom-up and top-down selection of information. Bottom-up guided attention refers to the situation in which attention is automatically captured by salient information from an environment. Top-down guided attention refers to the situation in which attention is driven by the individual's inner goals or expectations. People tend to search for information where they expect to find it, but also detect information more rapidly when they judge it beforehand as valuable (Wickens, 2003). For the implementation of SEs, the following issues concerning attention should be taken into account. First, presentation of salient but irrelevant information in an SE, such as too much visual detail or background noise, may divert attention. Second, clear instructions and task descriptions are important to guide the users' attention to the relevant information, particularly in a large-scale or complex SE. Third, people are able to maintain a high level of attention for only a limited amount of time, depending on factors like task difficulty and an individual's motivation and ability (DeGangi & Porges, 1991). Sustained attention can be stimulated in an SE by adopting techniques that are also used in computer games, such as providing incentives (e.g., game points), competition with other people, and storylines (for further reading, see Adams, 2009).

Memory

After information is perceived, it is further processed and stored in memory. Human memory refers to the ability to store, retain, and recall perceived information at a later moment in time. Human memory is traditionally divided into sensory memory, short-term memory, and long-term memory (Baddeley, 1990). Whereas sensory memory and short-term memory have a limited storage capacity and are temporary, long-term memory can hold large amounts of information for up to several decades. Information in long-term memory is structured according to specific patterns in so-called mental models. A mental model is a coherent idea about something in the surrounding environment: about what it is, how it works, and how it is organized (Johnson-Laird, 1980). In SEs, users build up mental models about the product's design, its functionality, and how to operate it. The accuracy of the users' mental model of a certain product affects the feedback they give the designers about the product's design. Furthermore, the extent to which various users have comparable mental models determines, at least partly, their ability to communicate and the quality of the discussions about a product's design (Landman, Van den Broek, & Gieskes, 2009). Various methods can be used to support the development of accurate mental models in an SE, such as emphasizing important design aspects with, for example, a distinctive color, or using clear instructions to guide users towards important information (for further reading, see Wickens, 2003). Finally, as not all knowledge in memory is accessible to consciousness, products requiring extensive human-machine interaction should be tested by assessing performance rather than by asking for (conscious) opinions.


Spatial Cognition

Another relevant aspect of cognitive processing for the implementation of SEs is spatial cognition. Spatial cognition is concerned with the acquisition, organization, utilization, and revision of knowledge about the relative locations in, and attributes of, a spatial environment (Hart & Moore, 1973). This knowledge is stored in memory mostly in the form of mental maps or 3D object representations. Cognitive maps are used to navigate through large-scale environments (Spence, 1999), whereas object representations are used to recognize and manipulate objects (Peissig & Tarr, 2007). Users often experience difficulty maintaining knowledge of their location and orientation while navigating through an SE (Chen & Stanney, 1999); they easily get lost. Therefore, users may devote much of their attention to finding out the spatial layout of an SE, which distracts them from their main tasks in the SE. To improve the users' ability to navigate and to construct mental maps, the interaction methods (e.g., interaction devices) should allow the users to move effortlessly through an SE so they can obtain as many views of the SE as possible (Bowman, 1999). Navigation and the construction of mental maps can be further improved by implementing landmarks (i.e., easily recognizable objects and structures) (Darken & Sibert, 1996a, b), paths with a clear structure (Charitos & Rutherford, 1996), or assistants offering advice and suggestions (Van Dijk et al., 2003). Object manipulation (i.e., the ability to reposition, reorient, or query objects) has a profound impact on the efficiency and effectiveness of interactions in SEs as well (Stanney et al., 2003). The users' ability to freely manipulate objects improves spatial processing of objects (James, Humphrey, & Goodale, 2001). The users' ability to realistically interact with objects can be improved by providing haptic feedback (Richard et al., 1996). Spatial processing of objects can be further improved by providing stereopsis in the SE (Luursema et al., 2006).

Conclusion

For the implementation of SEs it is important that designers understand the relationship between the technology used and human cognitive processing. If the technology matches the users operating it, then the feedback from user opinions and behavior relevant for a product's design will be more reliable and valid. Some technologies are relatively easily implemented in an SE, while others are complex and require extensive human expertise. Cognitive processes such as visual, auditory, and haptic perception, and memory have important implications for the implementation of SEs. In relation to these cognitive processes, two typical characteristics of SEs are important for SE implementation: the level of interactivity and the actual (visual) realism. These two characteristics distinguish an SE from, for example, a technical drawing or an animation. For the implementation of SEs, designers have to determine how much interactivity and realism they should provide to obtain valid and reliable feedback from the users. In general, interactivity and realism improve the users' subjective sense of realism (Slater, 1999) and the cognitive processing of, mainly, spatial information (Mania et al., 2006). However, the latter effect may vary across different groups of people (Cornoldi & Vecchi, 2003). Therefore, to determine the level of interactivity and realism in an SE, more knowledge is needed about the relationship between these characteristics and spatial cognition.

OVERVIEW OF THE DISSERTATION

The current dissertation is the result of a collaborative project with academic partners from industrial design and companies interested in the potential of (low fidelity) SEs for product design. The studies in this dissertation focus on the influence of two typical characteristics of SEs on human cognition: interactivity and visual realism. Implementing interactivity and visual realism in SEs takes a substantial amount of expertise and effort. Therefore, it is important to determine if and how users benefit from interactivity and visual realism. The dissertation is divided into two parts describing different types of studies, in which the influence of interactive and visually realistic SEs on performance and on the sense of realism that human users experienced was investigated.

In the first part, two basic studies are described. These studies encompass five experiments addressing how people process information in an SE situation. The focus of these studies is on determining the significance of interactivity in SEs for the users' ability to learn spatial information. The users' performance on spatial tasks after interactive exploration of virtual 3D objects is compared with their performance after passive observation of these objects. These basic studies investigate the cognitive processes that are affected by interactive exploration of virtual 3D objects, as often used in SEs. Based on the insights gained from these experiments, the conditions for an effective use of interactivity in SEs can be better determined. These conditions are important for designers to decide whether or not they should use SEs in their product design process. In the second part of this dissertation, two case studies are described. These studies encompass two experiments, in which the knowledge obtained from the basic studies is validated in SEs. In these studies, the significance of interactivity and visual realism in SEs for the users' ability to learn spatial information is investigated. The design cases used were provided by companies that participated in this project and that are interested in SEs for product design. These design cases made it possible to verify whether or not the results obtained with the basic studies could be generalized to practice, to the use of SEs; in other words, the ecological validity of the basic studies was assessed.

Chapter 2 specifically discusses the effect of interactivity on memory, while taking the users’ ability to memorize visual spatial information into account. Stakeholders may well vary in this ability. For example, expert-designers are probably better at memorizing 3D objects than the product’s end-users, as they are skilled in working with spatial information. In an experiment, participants first either interactively or passively explored 3D objects. Afterwards, they were tested on a mental rotation task to assess to what degree they had memorized the objects in either condition. To assess individual differences among people, the participants were divided into different groups varying in their visual spatial ability. This study was conducted to determine whether or not the designers’ decision to use SEs for their design process should depend on the type of users they want to include in their design process.

Chapter 3 further discusses the effect of interactivity on memory performance. First, the effect of interactivity on motoric processes in memory (i.e., memorizing actions) was investigated. Participants studied 3D objects interactively or passively and were then tested on a mental rotation test. Subsequently, the effect of interactive exploration on visual memory was investigated. Participants were tested on an object recognition test. Next, the extent to which interactivity affects memory was investigated. The recognition of degraded and intact objects was compared in both interactive and passive conditions. Last, the role of attention in the interactive exploration of objects was investigated. This study was conducted to provide more insight into the precise benefits of interactivity for mental representations in visual spatial memory.

Chapter 4 describes the effect of visual realism on memory performance in an SE. A virtual supermarket was created that participants interactively explored. The participants' performance on several spatial tasks in a photo-realistic SE was compared with a visually non-realistic SE to determine the importance of visually realistic SEs for human memory. First, participants navigated through either the photo-realistic or the non-realistic SE. Subsequently, they were tested to what degree they had memorized the spatial layout of the SE.

In Chapter 5, the effect of interactivity on the users' subjective experience of realism in SEs, their ability to identify design problems, and their ability to solve these problems was investigated. A virtual airplane cabin including a galley was created. Participants either explored an interactive SE of the airplane cabin or watched movies or images of the airplane cabin. Subsequently, the users' experience and spatial memory of the airplane cabin were tested. Finally, the conclusions of the research in this dissertation are summarized in Chapter 6.


REFERENCES

Adams, E. (2009). Fundamentals of Game Design (2nd Edition). Berkeley, CA: New Riders.

Ashcraft, M. H. & Radvansky, G. (2009). Cognition (5th Edition). Upper Saddle River, NJ: Prentice-Hall.

Baddeley, A. (1990). Human Memory. Needham Heights, MA: Allyn & Bacon.

Benson, A. J. (1992). Motion sickness. In K. B. Pandolf & R. E. Burr (Eds.), Medical Aspects of Harsh Environments (2nd Edition) (pp. 1059-1094). Washington, DC: Department of the Army, Office of the Surgeon General, U.S.A.

Bonato, F. & Bubka, A. (2004). Visual/vestibular conflict, illusory self-motion, and motion sickness. Journal of Vision, 4(8), 798a.

Bowman, D., Kruijff, E., La Viola, J., & Poupyrev, I. (2004). 3D User Interfaces: Theory and Practice. Boston, MA: Addison-Wesley.

Bruno, F. & Muzzupappa, M. (2010). Product interface design: A participatory approach based on virtual reality. International Journal of Human-Computer Studies, 68(5), 254-269.

Burdea, G. C. (1996). Force and Touch Feedback for Virtual Reality. New York: John Wiley & Sons.

Burdea, G. C. & Coiffet, P. (2003). Virtual Reality Technology (2nd Edition). Hoboken, NJ: John Wiley & Sons.

Charitos, D. & Rutherford, P. (1996). Guidelines for the design of virtual environments. In C. Hart & R. Hollands (Eds.), Proceedings of the 3rd International Conference of the UK Virtual Reality Special Interest Group, pp. 93-111. July 3, Leicester - UK.

Chen, C., Czerwinski, M., & Macredie, R. (2000). Individual differences in virtual environments: Introduction and overview. Journal of the American Society for Information Science, 51(6), 499-507.

Chen, J. L. & Stanney, K. M. (1999). A theoretical model of wayfinding in virtual environments: Proposed strategies for navigational aiding. Presence: Teleoperators and Virtual Environments, 8(6), 671-685.

Chitescu, C., Galland, S., Gomes, S., & Sagot, J.-C. (2003). Virtual reality within a human-centered design methodology. In S. Richir, P. Richard, & B. Taravel (Eds.), Proceedings of the 5th International Conference on Virtual

Cohen, M. & Wenzel, E. M. (1995). The design of multidimensional sound interfaces. In T. Furness & W. Barfield (Eds.), Virtual Environments and Advanced Interface Design (pp. 291-348). New York: Oxford University Press.

Cornoldi, C. & Vecchi, T. (2003). Individual Differences in Visuo-Spatial Working Memory. Hove, UK: Psychology Press.

Cross, N. (2000). Engineering Design Methods: Strategies for Product Design (3rd Edition). Chichester, UK: John Wiley & Sons.

Cutting, J. E. & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of Space and Motion (pp. 69-117). San Diego, CA: Academic Press.

Dangelmaier, W., Fischer, M., Gausemeier, J., Grafe, M., Matysczok, C., & Mueck, B. (2005). Virtual and augmented reality support for discrete manufacturing system simulation. Computers in Industry, 56(4), 371-383.

Darken, R. P. & Sibert, J. L. (1996a). Navigating in large virtual worlds. International Journal of Human Computer Interaction, 8(1), 49-72.

Darken, R. P. & Sibert, J. L. (1996b). Wayfinding strategies and behaviors in large virtual worlds. In M. J. Tauber, V. Bellotti, R. Jeffries, J. D. Mackinlay, & J. Nielsen (Eds.), Proceedings of the ACM CHI'96 Human Factors in Computing Systems Conference, pp. 142-149. April 14-18, Vancouver - Canada.

DeGangi, G. A. & Porges, S. W. (1991). Attention, alertness, and arousal. In C. B. Royeen (Ed.), Neuroscience Foundations of Human Performance. Bethesda, MD: The American Occupational Therapy Association.

Durlach, N. I. & Mavor, A. S. (1995). Virtual Reality: Scientific and Technological Challenges. Washington, DC: National Academy Press.

French, M. J. (1985). Conceptual Design for Engineers. London, UK: Design Council.

Hart, R. A. & Moore, G. T. (1973). The development of spatial cognition: A review. In R. M. Downs & D. Stea (Eds.), Images and Environment: Cognitive Mapping and Spatial Behavior (pp. 246-288). Chicago: Aldine.

IJsselsteijn, W. A., De Ridder, H., Freeman, J., Avons, S. E., & Bouwhuis, D. (2001). Effects of stereoscopic presentation, image motion, and screen size on subjective and objective corroborative measures of presence.

Innocentit, M. & Pollinit, L. (1999). A synthetic environment for simulation and visualization of dynamic systems. Proceedings of the American Control Conference, 3, 1769-1773.

James, K. H., Humphrey, G. K., & Goodale, M. A. (2001). Manipulating and recognizing virtual objects: Where the action is. Canadian Journal of Experimental Psychology, 55(2), 111-120.

Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science, 4, 71-115.

Kalawsky, R. S. (1999). VRUSE: A computerized diagnostic tool for usability evaluation of virtual/synthetic environment systems. Applied Ergonomics, 30(1), 11-25.

Kim, G. J. (2005). Designing Virtual Reality Systems: The Structured Approach. London: Springer-Verlag.

Landman, R. B., Broek, E. L. van den, & Gieskes, J. F. B. (2009). Creating shared mental models: The support of visual language. Lecture Notes in Computer Science (Cooperative Design, Visualization, and Engineering), 5738, 161-168.

Lawson, B. (2005). Designing with others. In B. Lawson (Ed.), How Designers Think: The Design Process Demystified (4th Edition) (pp. 233-264). Oxford, UK: Architectural Press.

Loomis, J. M., Blascovich, J., & Beall, A. C. (1999). Virtual environment technology as a basic research tool in psychology. Behavior Research Methods, Instruments, and Computers, 31(4), 557-564.

Lu, S. C.-Y. & Conger, A. (2007). Supporting participative joint decisions in integrated design and manufacturing teams. In S. Tichkiewitch, M. Tollenaere, & P. Ray (Eds.), Advances in Integrated Design and Manufacturing in Mechanical Engineering II (pp. 3-22). Dordrecht, The Netherlands: Springer.

Luursema, J.-M., Verwey, W. B., Kommers, P. A. M., Geelkerken, R. H., & Vos, H. J. (2006). Optimizing conditions for computer-assisted anatomical learning. Interacting with Computers, 18(5), 1123-1138.

Ma, R. & Kaber, D. B. (2006). Presence, workload and performance effects of synthetic environment design factors. International Journal of Human-Computer Studies, 64(6), 541-552.

Maher, M. L. & Poon, J. (1996). Modeling design exploration as co-evolution.

Mania, K., Woolridge, D., Coxon, M., & Robinson, A. (2006). The effect of visual and interaction fidelity on spatial cognition in immersive virtual environments. IEEE Transactions on Visualization and Computer Graphics, 12(3), 396-404.

McLaughlin, M. L., Hespanha, J. P., & Sukhatme, G. (2002). Touch in Virtual Environments: Haptics and the Design of Interactive Systems. Upper Saddle River, NJ: Prentice-Hall.

Möller, T., Haines, E., & Hoffman, N. (2008). Real-Time Rendering (3rd Edition). Wellesley, MA: A. K. Peters, Ltd.

Mujber, T. S., Szecsi, T., & Hashmi, M. S. J. (2004). Virtual reality applications in manufacturing process simulation. Journal of Materials Processing Technology, 155-156, 1834-1838.

Pashler, H. (1995). Attention and visual perception: Analyzing divided attention. In S. M. Kosslyn & D. N. Osherson (Eds.), Visual Cognition: An Invitation to Cognitive Science (2nd Edition) (pp. 71-100). Cambridge, MA: MIT Press.

Peissig, J. J. & Tarr, M. J. (2007). Visual object recognition: Do we know more now than we did 20 years ago? Annual Review of Psychology, 58, 75-96.

Popescu, G. V., Burdea, G. C., & Treffiz, H. (2002). Multimodal interaction modeling. In Stanney (Ed.), Handbook of Virtual Environments: Design, Implementation, and Applications (pp. 435-454). Mahwah, NJ: Lawrence Erlbaum Associates.

Preece, J., Rogers, Y., & Sharp, H. (2002). Data gathering. In Interaction Design: Beyond Human-Computer Interaction (2nd Edition) (pp. 291-354). New York: John Wiley & Sons.

Ressler, S., Antonishek, B., Wang, Q., & Godil, A. (2001). Integrating active tangible devices with a synthetic environment for collaborative engineering. In S. Diehl & M. V. Capps (Eds.), Proceedings of the 6th International Conference on 3D Web Technology, pp. 93-100. February 19-22, Paderborn - Germany.

Richard, P., Birebent, G., Coiffet, P., Burdea, G., Gomex, D., & Langrana, N. (1996). Effect of frame rate and force feedback on virtual manipulation. Presence: Teleoperators and Virtual Environments, 5(1), 95-108.

Robles-De-La-Torre, G. (2006). The importance of the sense of touch in virtual and real environments. IEEE Multimedia: Special Issue on Haptic User

Rudolph, S. (2007). Know-how reuse in the conceptual design phase of complex engineering products, or: Are you still constructing manually or do you already generate automatically? In S. Tichkiewitch, M. Tollenaere, & P. Ray (Eds.), Advances in Integrated Design and Manufacturing in Mechanical Engineering II (pp. 23-42). Dordrecht, The Netherlands: Springer.

Slater, M. (1999). Measuring presence: A response to the Witmer and Singer presence questionnaire. Presence: Teleoperators and Virtual Environments, 8(5), 560-565.

Slater, M., Khanna, P., Mortensen, J., & Yu, I. (2009). Visual realism enhances realistic response in an immersive virtual environment. IEEE Computer Graphics and Applications, 29(3), 76-84.

Smulders, F. E., Broek, E. L. van den, & Voort, M. C. van der (2007). A socio-interactive framework for the fuzzy front end. In A. Fernandes, A. Teixeira, & R. Natal Jorge (Eds.), Proceedings of the 14th International Product Development Management Conference, pp. 1439-1450. June 10-12, Porto - Portugal.

Spence, R. (1999). A framework for navigation. International Journal of Human-Computer Studies, 51(5), 919-945.

Stanney, K. M., Mollaghasemi, M., Reeves, L., Breaux, R., & Graeber, D. A. (2003). Usability engineering of virtual environments (VEs): Identifying multiple criteria that drive effective VE system design. International Journal of Human-Computer Studies, 58(4), 447-481.

Tideman, M., Voort, M. C. van der, & Van Houten, F. J. A. M. (2008). A new product design method based on virtual reality, gaming and scenarios. International Journal on Interactive Design and Manufacturing, 2(4), 195-205.

Van Dijk, B., Op den Akker, R., Nijholt, A., & Zwiers, J. (2003). Navigation assistance in virtual worlds. Informing Science, Special Series on Community Informatics, 6, 115-125.

Viers, R. (2008). The Sound Effects Bible: How to Create and Record Hollywood Style Sound Effects. Studio City, CA: Michael Wiese Productions.

Wang, L., Shen, W., Xie, H., Neelamkavil, J., & Pardasani, A. (2002). Collaborative conceptual design - state of the art and future trends. Computer-Aided Design, 34(13), 981-996.

Wang, H., Vergeest, J. S. M., Miedema, J., Meijer, F., Voort, M. C. van der, & Broek, E. L. van den (2008). Synthetic environment as a communication tool for dynamic prototyping. In Proceedings of the 6th International Seminar and Workshop Engineering Design in Integrated Product Development, pp. 211-219. September 10-12, Jurata - Poland.

Weyrich, M. & Drews, P. (1999). An interactive environment for virtual manufacturing: The virtual workbench. Computers in Industry, 38(1), 5-15.

Wickens, C. D., Lee, J. D., Liu, Y., & Gordon-Becker, S. (2004). An Introduction to Human Factors Engineering (2nd Edition). New York: Pearson.


PART I


Chapter 2

Representing 3D virtual objects: Interaction between visuo-spatial ability and type of exploration

ABSTRACT

We investigated individual differences in interactively exploring 3D virtual objects. Thirty-six participants explored 24 simple and 24 difficult objects (composed of three and five Biederman geons, respectively) actively, passively, or not at all. Both their 3D mental representations of the objects and their visuo-spatial ability (VSA) were assessed. Results showed that, regardless of the objects' complexity, people with a low VSA benefit from active exploration of objects, whereas people with a middle or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance of taking individual differences into account.

Meijer, F. & Broek, E. L. van den (2010). Vision Research,


INTRODUCTION

The ability to imagine objects three-dimensionally is crucial for object recognition. In the past decades, the mechanism underlying object recognition has been studied thoroughly. A wide variety of research fields, ranging from neuro-physiology to computer vision, has described how perceptions of objects lead to higher-level mental representations that support object recognition; for a review, see Peissig and Tarr (2007). It is generally theorized that mental representations of objects are the product of processing information in visual spatial working memory (VSWM). However, two important findings refine how three-dimensional (3D) mental representations are formed from two-dimensional (2D) images. First, constructing mental representations of objects is not merely a visual process. Manual interactions, both real and virtual (i.e., moving a mouse to control 3D shapes), during familiarization with objects increase the degree to which mental representations are formed (Harman, Humphrey, & Goodale, 1999; James, Humphrey, & Goodale, 2001; James et al., 2002). Second, the efficiency with which 3D mental representations are formed varies notably across groups of individuals; cf. Cornoldi and Vecchi (2003) and Voyer, Voyer, and Bryden (1995). The current paper puts these findings together and investigates individual differences in the effect of interactive exploration of objects.

Since Marr and Nishihara (1978) posed their idea of how 3D object representations are formed from 2D retinal images, a large amount of empirical research has been conducted on how these representations are used to recognize objects. Generally, building mental representations of objects is considered a visual process. However, more recent studies have provided evidence that motoric processes also play a significant role in the system underlying 3D object representations. In particular, research has revealed the existence of a motoric component in imaginary object manipulations, such as mental rotations (e.g., Wexler, Kosslyn, & Berthoz, 1998; Wohlschläger & Wohlschläger, 1998; Wiedenbauer, Schmidt, & Jansen-Osmann, 2007). It is therefore possible that the inclusion of this motoric component in 3D object representations facilitates better access to these representations at a later time. Thus, when a novel view of a familiar object is perceived, the object representation is more easily mentally rotated, for example, for comparison with the perceived view. Consequently, the object is better recognized.
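To make this idea concrete, the following sketch (in Python) is a purely illustrative computational analogue, not a model proposed in this chapter or a claim about how the visual system works: a stored representation is treated as a small set of 3D points, and "mental rotation for comparison" is implemented as a search for the rotation angle that best aligns the stored points with a newly perceived view. The point set and the single rotation axis are arbitrary choices for the example.

# Illustrative analogue of mental rotation: find the rotation that aligns a
# stored representation with a perceived view of the same object.
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the vertical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def alignment_error(stored, perceived, theta):
    """Mean distance between the rotated stored points and the perceived points."""
    return np.linalg.norm(stored @ rotation_z(theta).T - perceived, axis=1).mean()

# Hypothetical stored representation: three landmark points of a simple shape.
stored = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
perceived = stored @ rotation_z(np.radians(40)).T  # the same object seen 40 deg rotated

# Search over candidate angles; the minimum marks the rotation that aligns the views.
angles = np.radians(np.arange(0, 360, 5))
best = min(angles, key=lambda a: alignment_error(stored, perceived, a))
print(f"best-matching rotation: {np.degrees(best):.0f} deg")

In this analogue, a larger angular difference between the stored and perceived views requires a longer search, which parallels the classic finding that mental rotation time increases with angular disparity.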

The importance of motoric activity for building mental representations in VSWM was also revealed in a study by Christou and Bülthoff (1999), who compared active exploration of scenes to passive observation of an identical exploration. In their study, active explorers navigated through a 3D environment, while passive observers watched a recorded movie of the active explorers. To ensure that active and passive observers attended to the environment equally, both were required to respond to certain markers in the environment. Afterwards, all participants performed a recognition test in which they had to identify images of familiar scenes (i.e., scenes they had encountered before) among images of novel scenes. Participants identified unmarked familiar scenes better after active exploration than after passive observation, but there was no difference between the two conditions for marked scenes. From these results, the researchers concluded that the resulting mental representations are view dependent and that the ability to freely control viewpoints during active exploration facilitates more complete mental representations.

A similar effect of interactivity was found for the exploration of 3D objects. Harman, Humphrey, and Goodale (1999) suggested that interactive learning increases the visual spatial storage of 3D objects, because it allows observers active control over the views on which they focus. These researchers showed that interactive exploration of objects in a virtual environment increases subsequent visual recognition of these objects. In their study, participants were instructed to study a set of novel 3D objects either interactively or passively. In the interactive condition, they controlled the views of the objects manually, whereas in the passive condition they observed the same sequences of images of these objects. Next, 2D images of objects were presented and participants decided whether or not these objects had been studied previously. Harman et al. found that performance was better for interactively explored objects than for passively observed objects. In addition, James, Humphrey, and Goodale (2001, 2002) showed that participants spend more time on plane views (i.e., "side" and "front") of the objects during interactive exploration. This suggests that active control over this type of view is important for the visual spatial storage of objects.
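As an illustration of what such manual view control involves, the sketch below (Python) accumulates mouse displacement into yaw and pitch angles that set an object's orientation, and flags moments at which the resulting view approximates a plane view. It is a minimal example of the general technique, not the software used by Harman et al., James et al., or in the present experiment; the sensitivity and tolerance values, and the mouse samples, are arbitrary.

# Minimal sketch: mouse movement drives the orientation of a virtual object.
import numpy as np

class ViewController:
    """Turns mouse displacement into an object orientation (yaw/pitch, in degrees)."""

    def __init__(self, sensitivity=0.4):
        self.sensitivity = sensitivity  # degrees of rotation per pixel of mouse movement
        self.yaw = 0.0    # rotation about the vertical axis
        self.pitch = 0.0  # rotation about the horizontal axis

    def on_mouse_drag(self, dx_pixels, dy_pixels):
        self.yaw = (self.yaw + dx_pixels * self.sensitivity) % 360.0
        self.pitch = float(np.clip(self.pitch + dy_pixels * self.sensitivity, -90.0, 90.0))

    def is_plane_view(self, tolerance=10.0):
        """True when the current view is close to a 'front', 'back', or 'side' view."""
        near_zero_pitch = abs(self.pitch) <= tolerance
        near_cardinal_yaw = min(self.yaw % 90.0, 90.0 - self.yaw % 90.0) <= tolerance
        return near_zero_pitch and near_cardinal_yaw

# Example: count how many of the sampled moments fall on plane views.
controller = ViewController()
plane_frames = 0
for dx, dy in [(30, 0), (5, 2), (-40, 10), (0, -12), (220, 0)]:  # hypothetical mouse samples
    controller.on_mouse_drag(dx, dy)
    plane_frames += controller.is_plane_view()
print(f"frames on plane views: {plane_frames} of 5")

Logging the proportion of time spent near plane views in this way is one straightforward means of quantifying the viewing behaviour that James et al. reported for interactive exploration.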

However, the studies of Harman et al. (1999) and James et al. (2001, 2002) did not take an important factor into account. Processing information in VSWM is strongly influenced by an individual's characteristics, such as gender, age, or ability (Cornoldi & Vecchi, 2003; Stone, Buckley, & Moger, 2000; Voyer, Voyer, & Bryden, 1995). For example, Luursema et al. (2006) found that the benefit of interactively learning anatomical structures correlates with visuo-spatial ability (VSA): especially participants with a low VSA increased their anatomical knowledge through interactive learning. These results suggest that interactive learning might trigger certain visuo-spatial processes in individuals with a low VSA that aid the efficiency with which 3D information is represented.

The study described above suggests that individual differences play an important role in the formation of mental representations in visuo-spatial memory. However, the influence of VSA on the interactive learning of 3D objects has not yet been investigated. Therefore, in the present study we examined whether the effect of interactively learning objects varies across groups with different VSA. This research is important because an effect of VSA on interactive learning would challenge the general assumption that the effect of interactivity is the same for all groups of people. Furthermore, studying the influence of VSA further defines the conditions under which interactivity aids the learning of visuo-spatial information, such as 3D objects.

In an experiment, we used a task comparable to that of Harman et al. (1999) and James et al. (2001, 2002). Participants first explored 3D objects passively or actively and subsequently performed a task in which knowledge of the objects was tested. Going beyond the previous studies, we investigated whether the effect of interactively learning 3D objects depends on the participants' VSA. Based on the research of, for example, Cornoldi and Vecchi (2003) and Luursema et al. (2006), we expected that interactive learning would support those with a low VSA, whereas those with a high VSA would perform similarly under passive and active learning conditions.


The present experiment differed from the earlier studies of Harman et al. (1999) and James et al. (2001, 2002) in the following respects. First, the participants were tested on a mental rotation task, in contrast to the previous studies, in which either a recognition task or a perceptual matching task was used. A mental rotation task requires additional mental processing and the ability to mentally transform object representations in VSWM (Shepard & Metzler, 1978). Consequently, a mental rotation task is more difficult to perform than a recognition or perceptual matching task, and we expected it to reveal the difference in the effect of interactive learning between participants with a low and a high VSA more clearly. Second, to determine the participants' VSA, they completed a standard pen-and-paper test prior to the experiment. The results of this test were then related to performance in the test phase. Third, we were interested in the degree to which the participants formed mental representations after active and passive exploration and whether they used these representations in a subsequent test phase. Therefore, an extra condition was added to the experiment in which participants could not explore the objects at all. Since participants could not build up object representations in this condition, their performance in it served as a baseline for the test phase. Fourth, we added object complexity as a research variable, to investigate whether the effect of interactive learning depends on the complexity of the objects. It is possible that participants with a low VSA would benefit more from interactivity when studying complex objects.
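As an aside, assigning participants to VSA groups from a pen-and-paper test can be illustrated with a short sketch (Python). A tertile split of the test scores is assumed here purely for illustration, and the scores are simulated; the grouping procedure actually used in this study may differ.

# Illustrative sketch: split simulated VSA test scores into low/middle/high groups.
import numpy as np

rng = np.random.default_rng(1)
vsa_scores = rng.integers(5, 40, size=36)  # hypothetical test scores for 36 participants

low_cut, high_cut = np.percentile(vsa_scores, [33.3, 66.7])

def vsa_group(score):
    if score <= low_cut:
        return "low"
    if score <= high_cut:
        return "middle"
    return "high"

groups = [vsa_group(s) for s in vsa_scores]
for label in ("low", "middle", "high"):
    print(label, groups.count(label))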

METHODS

Participants

Thirty-six university students (26 women and 10 men), aged 18-26 years (M = 20), participated in exchange for course credits. All participants had normal or corrected-to-normal visual acuity, had no known neurological or visual disorders, and were naïve concerning the purpose of the experiment.
