The Impact of Technology-Based Multitasking on the Accuracy of Task Performance

Academic year: 2021



Layout: typeset by the author using LaTeX.


Pien R.L. Spekreijse
11207671

Bachelor thesis
Credits: 18 EC
Bachelor Kunstmatige Intelligentie

University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisor
Z. Terzopoulou, MSc
Institute for Logic, Language and Computation
Faculty of Science
University of Amsterdam
Science Park 907
1098 XG Amsterdam

Abstract

Employees answer emails while writing code, students check incoming texts while watching a lecture, and children use social media while doing homework. Although using these online technologies simultaneously creates the impression of enhanced productivity, human performance is considerably affected by multitasking. Insights into the effects of technology-based multitasking on performance are crucial for the development of Artificial Intelligence technology. Overall, existing literature provides a general hypothesis on this effect: technology-based multitasking, during the execution of a problem-solving task on a single device, decreases the accuracy of task performance. In this study, we aimed to examine this effect by conducting an experiment using a custom-developed online environment. The experiment comprised two conditions: a control (non-multitasking) condition and an experimental (multitasking) condition. Participants were asked to solve a visual problem-solving task within a given time frame. There was no disturbance in the control condition, whereas in the experimental condition the primary task was interrupted by a secondary numeric problem-solving task. Varying the orders of the conditions and primary tasks resulted in four sub-experiments, over which the participants were randomly and equally divided. In total, 272 subjects engaged in the online experiment, using a technological device of their individual preference (mobile phone, tablet, or computer). The main findings were that, on average, subjects had an accuracy of 51.88% in the non-multitasking condition, as opposed to 45.88% in the multitasking condition. Overall, a significant difference in accuracy between the non-multitasking and the multitasking condition was observed, with the decrease in accuracy within the sub-experiments ranging from 2.41% to 21.36%. However, additional analysis showed significant differences in the mean values between sub-experiments.
These results suggest that the relationship between technology-based multitasking and task performance might depend upon the experimental design. Each sub-experiment shows a decrease in the accuracy of task performance when a subject attempts to multitask; however, this decrease was significant only when the control condition was displayed first.

Contents

1 Introduction
2 Theoretical Context and Hypothesis
  2.1 Technology-Based Multitasking
  2.2 The Accuracy of Task Performance
  2.3 Hypothesis
3 Experimental Design and Procedure
  3.1 Online Environment
  3.2 Experiment Conditions
4 Data and Analysis
  4.1 Data
  4.2 Results
5 Discussion
  5.1 Explanations
  5.2 Limitations
6 Conclusion

1 Introduction

Technological devices enable and amplify multitasking to a great extent, as they can keep track of all online information at all times (Benbunan-Fich et al., 2011). Employees answer emails while writing code, students check incoming texts while watching a lecture, and children use social media while doing homework. Although using these online technologies simultaneously creates the impression of enhanced productivity, human performance is considerably affected by multitasking. Some researchers highlight how multitasking attempts have a detrimental impact on task performance, mainly when performing a combination of multiple media tasks (David et al., 2013). Alternatively, others argue that multitasking can lead to better results by allowing ideas to mature or by stimulating healthy breaks from difficult tasks (Madjar and Shalley, 2008). Despite these different perceptions, it is clear that in today’s digital society, more people are engaged in technology-based multitasking behavior than ever before. In recent years, several scientists have designed online experiments to substantiate the assumptions surrounding the consequences of technology-based multitasking, using both laboratory and real-world settings.

An example of a laboratory experiment was conducted by Adler and Benbunan-Fich (2012). Here, the researchers simulated a multitasking environment by presenting several problem-solving tasks at once and allowing participants to switch between these tasks during the time of the experiment. Half of the 205 subjects completed the tasks in the multitasking environment, while the other half completed the tasks sequentially. Data gathered during the experiment revealed an inverted-U relationship between multitasking and performance. The inverted-U relationship states that there is an empirical relationship between agitation and performance. In other words, Adler and Benbunan-Fich (2012) concluded that performance increases when a person’s agitation grows, but only up to a certain point. When the agitation becomes too high, the performance decreases.

Embedded in research on the influence of personality traits on the use of adaptive user interfaces is another example of a laboratory multitasking experiment. Gajos and Chauncey (2017) simulated a multitasking environment by asking subjects to memorize a sequence while completing an unrelated task. The rationale for this, as assumed by many social scientists, was that trying to remember a sequence in the short-term memory section of the brain is near-identical to focusing on two tasks at once (Deprez et al., 2013). The online experiment provided data to substantiate a significant negative correlation between the extra cognitive load caused by a secondary task and the use of the environment’s multitasking characteristics. With different predefined assumptions and experimental conditions to create a multitasking environment, data gathered in both experiments described above substantiates that performance steadily decreases as multitasking behavior increases.

In addition to laboratory experiments, real-world studies on students by Downs et al. (2015) and Wood et al. (2012), assessing the effect of online multitasking while studying, have indicated that multitasking is negatively associated with academic performance. Performing multiple online tasks, as opposed to focusing on one task at a time, revealed a significant decrease in exam grades. Additionally, research conducted in work environments shows that employees' attempts to (voluntarily or involuntarily) multitask worsen the observed performance (Pachler et al., 2018) and task execution (Sonnentag et al., 2018).

As previously described, research on technology-based multitasking has utilized various environments, subject groups, and conditions to explicate the effect of multitasking on task performance. The predefined assumptions adopted in each experiment result in a slightly different conclusion. Human-computer interaction research is essential for the development of Artificial Intelligence technology. For example, when data for a machine learning algorithm is generated by people who may have been multitasking, it is necessary to understand their decision-making process in order to obtain higher-quality data or to adapt the algorithms to the shortcomings of the data. Thus, understanding how people react to — and interact with — technological devices is crucial in designing machines that interact with people. Even though prior experiments have not focused on the accuracy of task performance as an effect of technology-based multitasking on a single device, learning more about single-device multitasking is crucial. However, empirical evidence on the consequences of technology-based multitasking on a single device is scarce. Therefore, to contribute to this research domain, this thesis focuses on the consequences of performing multiple unrelated tasks in a specific time frame on a single device. By excluding concurrent tasks on multiple devices — such as making a phone call while writing an email or attending a lecture while using the computer — and solely concentrating on multitasking on a single device, this research intends to contribute to human-computer interaction research and substantiate experimental findings regarding the effect of multitasking on task performance. In other words, this research aims to answer the following research question:

“What is the impact of technology-based multitasking on a single device on the accuracy of task performance?”

To this end, we developed an online experiment with the purpose of studying the effect of multitasking on task performance. Due to the COVID-19 measures taken by the Dutch government during this research, it was not possible to conduct the experiment in a strictly controlled laboratory environment, and people were obliged to participate from the comfort of their own homes. The experiment had an experimental and a control condition, comprising three problem-solving tasks. In the experimental condition, the primary task (visual) was interrupted by a different problem-solving task (numeric). In the control condition, the primary task was not interrupted. In total, 272 subjects engaged in the online experiment, using a technological device of their individual preference (mobile phone, tablet, or computer).

The remainder of the thesis is structured as follows. Section 2 provides a literature review on the concepts of technology-based multitasking and the accuracy of task performance. The hypothesis on the relationship between multitasking and accuracy follows from this analysis. The design, procedure and gathered data of the online experiment are expanded upon in Section 3. Section 4 presents the data analysis and results, and addresses the possible endogeneity in the experimental design. After this, Section 5 discusses these results. Section 6 summarizes and concludes with recommendations for further research and contributions to the development of Artificial Intelligence technology.

2 Theoretical Context and Hypothesis

This section gives an overview of prior literature on the effects of multitasking on task performance. It elaborates on the concepts of technology-based multitasking and the accuracy of task performance, resulting in conclusive definitions used in the remainder of this research. Finally, a hypothesis regarding the research question is presented and motivated.

2.1 Technology-Based Multitasking

Researchers have used various definitions to describe multitasking. These definitions include shifting attention to perform several independent but concurring tasks (Adler and Benbunan-Fich, 2012), the encouragement to take breaks from a single complex task by switching between multiple tasks (Wood et al., 2012), or using the short-term memory part of the brain while performing a complicated task (Deprez et al., 2013). As can be seen from these examples, each of these definitions depends highly on assumptions made regarding the multitasking environment and is open to multiple interpretations, mainly based on how tasks and time-allocation decisions are defined (Benbunan-Fich et al., 2011). Three clarifications regarding different technology-based multitasking aspects are therefore required.

First, performing a task can be a conscious or unconscious process. The human brain is capable of performing an unconscious task simultaneously with a conscious task due to the activation of different regions (Van Opstal et al., 2010). An exemplary situation is making a phone call while driving a car. However, this is a different type of multitasking and is not comparable with attempting to perform multiple conscious tasks simultaneously, as in technology-based multitasking on a single device. Here, attempts to simultaneously tackle multiple tasks happen consciously and require using the same section of the brain (Van Opstal et al., 2010). Hence, the first clarification regarding multitasking on a single device is that the problem-solving tasks used in this research experiment must be conscious tasks.

The second clarification concerns the fact that there is a distinction in multitasking ability when a task is in the cognitive stage, integrative stage, or autonomous stage (following the Fitts and Posner (1967) theory of cognitive skill acquisition). In order to acquire the cognitive skill to solve a task efficiently, the task has to pass through each stage. When a task is in the last (autonomous) stage, it is more likely that the person controls distractions effectively (Ahmed et al., 2015). Hence, multitasking might seem more efficient than when a task is in an earlier stage. Several factors determine which stage a cognitive skill is in; of these, age is of primary importance. Consequently, age is an essential variable to take into account when using problem-solving tasks in an experiment. An average person is able to perform a visual problem-solving task from the age of 10 (Ahmed et al., 2015).

The third clarification addresses the variety of cognitive abilities humans can deploy. Payne et al. (2007) showed that in order to provoke multitasking behaviour, an appeal to various cognitive abilities is required. Problem-solving tasks can have the same level of difficulty yet be divided into four cognitive categories: visual, textual, numeric, and logical (Adler and Benbunan-Fich, 2012). The third clarification concerns these categories, since Adler and Benbunan-Fich (2012) state that the most distinct multitasking behaviour results from experimenting with a combination of tasks from different categories.

To summarize, the primary problem-solving task and the secondary problem-solving task used to simulate the multitasking condition have to satisfy four requirements. First, both tasks ought to be conscious. Furthermore, they ought to be in the same skill acquisition stage, which indicates that a subject should be older than ten years. Third, both tasks should have the same level of difficulty. Finally, the secondary task should be from a different cognitive category than the primary problem-solving task.

In addition to the task requirements mentioned above, time is an important concept to define. For example, more lenient time control creates a different environment than strictly monitored time restrictions. The differences between these environments have an impact on a person's task performance, as well as on whether or not a person engages in multitasking behaviour (Ariely and Zakay, 2001; Wilhelm and Schulze, 2002; Zhang et al., 2005). Salvucci et al. (2009) have shown that various multitasking behaviours with varying time units are comparable. However, in order to obtain qualitative results in an empirical experiment, the time frame in which the experiment takes place has to be controlled. Time should denote a session with a clear beginning and a clear end. This perspective towards time allows for a more comprehensive examination of task performance (Adler and Benbunan-Fich, 2012). Additionally, the allotted time for completing both the primary and the secondary problem-solving task has to be shorter than the average amount of time it takes participants to finish both tasks consecutively. With such time restrictions, the experiment creates an environment free of idleness and gaps and provides comparable results. Moreover, because the COVID-19 measures taken by the Dutch government during this project required the participants to take part in the experiment at home, the total time should be less than 10 minutes to keep the participants' attention focused.

Therefore, the time constraints required to design the experiment in this research are as follows: time ought to be strictly controlled and shorter than the average time it takes to solve the tasks at hand consecutively, and the total time should be under ten minutes.

Before combining the requirements mentioned above into a consistent definition of technology-based multitasking for an experiment on a single technological device, it is insightful to elaborate on some experimental paradigms and theories related to this concept. The first compelling theory regarding technology-based multitasking is a continuum designed by Salvucci et al. (2009). This research relates multitasking to the time spent on one specific task before moving on to another: it explicates that concurrent and sequential multitasking are not entirely distinct, but lie on the same spectrum. One side of the continuum (Figure 1, left) contains concurrent multitasking behaviour, such as listening to a lecture and taking notes. The other side (Figure 1, right) contains sequential multitasking behaviour, such as reading an email while writing a paper. This continuum indicates that data gathered on multitasking behaviour on one side of the spectrum (switching between tasks within seconds) can indicate something about behaviour on the other side of the spectrum (switching between tasks within hours).

Figure 1: The Multitasking Continuum (Salvucci et al., 2009). The axis shows the time before switching tasks, from seconds (concurrent multitasking, e.g. listening and note-taking, watching tv and talking) to hours (sequential multitasking, e.g. writing and reading email).

Secondly, examples of experimental paradigms include task switching and dual tasks. In task-switching experiments, tasks are presented sequentially, whereas in dual-task experiments the tasks are presented simultaneously. Adler and Benbunan-Fich (2012) present these paradigms according to two multitasking strategies: parallel or interleaved. Concurrent multitasking pursues the parallel multitasking strategy (Salvucci et al., 2009). This strategy is impossible for individuals to achieve, even though it seems to be the approach people unconsciously endeavour. The reason for this impossibility is that conscious attention cannot be divided among tasks simultaneously (Deprez et al., 2013). In other words, the brain is unable to concentrate on more than one conscious task at a time. Cognitive neuroscience research on multitasking affirms that when trying to focus on multiple unequal tasks at the same time, the brain is forced to process all the activity in its anterior region (Wallis, 2006). Although this part of the brain enables leaving a task when it is not yet completed in order to perform another task and later returning to continue processing, it cannot perform both tasks at the same time. The definition of multitasking by Deprez et al. (2013), ‘using the short-term memory part of the brain while performing a complex task’, is inadequate in this regard. In addition to Wallis (2006), more recent research showed that the short-term memory (also called the working memory), located in the anterior region, is a cognitive resource with restricted capacity that keeps knowledge only temporarily available for processing (May and Elder, 2018). The interleaved multitasking strategy is pursued in sequential multitasking (Salvucci et al., 2009). In this strategy, a task is suspended by allocating attention to another task, voluntarily or involuntarily (Adler and Benbunan-Fich, 2012). Eventually, the original task is resumed.
Performing different tasks that interleave with each other is typical for technology-based multitasking on a single device (Payne et al., 2007). Consequently, the multitasking strategy to simulate technology-based multitasking in the online experiment in this research is the interleaved strategy.

In conclusion, a definition of technology-based multitasking is heavily dependent on the requirements for the problem-solving tasks and time constraints, and multitasking can be done according to different strategies. For this research on technology-based multitasking on a single device, the definition is as follows:

Definition. Technology-based multitasking is the interruption of a primary problem-solving task by a secondary task within a time-controlled environment on a single technological device.

2.2 The Accuracy of Task Performance

As discussed in the previous section, technology-based multitasking on a single device includes specific primary and secondary tasks. Switching between these tasks is caused by an interruption. This section presents the three main interruption forms. Additionally, it discusses the most reliable parameter to represent the effect of multitasking on task performance.

An interruption can be internal or external (Figure 2). An internal interruption is a voluntary decision to stop the current task and switch to another, which may be the reason for multitasking behaviour (Van Opstal et al., 2010). However, in order to obtain conclusive data with the online experiment, this form of interruption ought to be excluded. By designing the experiment in such a way that the participant does not have to make decisions regarding the order of the tasks, this exclusion is achieved. In other words, there should be only one compulsory way to complete the experiment as the voluntary decision to stop the current task is impossible to measure due to the high variability in human behaviour (Van Opstal et al., 2010). Payne et al. (2007) attempted to show this variability by studying when voluntary switching among tasks was initiated. One of their conclusions was that in most cases, voluntary switching occurred when a particular task was no longer rewarding or after the completion of a sub-task. However, it would be very complicated to replicate or measure this internal judgement in an experimental setting.

Figure 2: Forms of interruption: internal or external, where an external interruption originates from within or from outside the device.

As opposed to internal interruption, there is external interruption. An external interruption can occur from two sources: from outside or from within the device used (Salvucci et al., 2009). An external interruption from outside the device is impossible to control in a variable environment (moreover, due to the COVID-19 measures taken by the Dutch government during this project, it was impossible to use a strictly controlled environment for experiments). Besides, this research excludes external interruptions from outside sources because the focus is on multitasking on a single device. To clarify, the effect of multitasking when a person is, for example, interrupted by a phone call while working on the computer is not the main focus of this study. Therefore, in this project, the effect of technology-based multitasking is studied by using external interruptions from within the electronic device to switch between tasks.

Different parameters can represent the effect of technology-based multitasking on task performance. For example, laboratory experiments include parameters such as the number of remembered sequences when completing unrelated tasks (Gajos and Chauncey, 2017), the number of correctly filled-in digits when solving a Sudoku puzzle (Adler and Benbunan-Fich, 2012), or reaction time (Ariely and Zakay, 2001). In real-world settings, test scores for students (Downs et al., 2015; Wood et al., 2012) and the productivity of working hours (Pachler et al., 2018) or task executions (Sonnentag et al., 2018) for employees have been used. In essence, all the previously mentioned parameters try to capture some cognitive or physical cost incurred when people attempt to multitask. Payne et al. (2007) compared the cognitive costs associated with switching between tasks. In order to obtain conclusive and quantitative data, the loss of accuracy of task performance is the most reliable parameter when working with problem-solving tasks as described in the previous section. Consequently, the parameter used to describe the effect of multitasking in this paper is the accuracy of task performance.
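Concretely, the accuracy parameter can be computed as the fraction of items a participant answers correctly. The following minimal sketch illustrates this metric; the function name and example values are illustrative, as the thesis does not specify an implementation:

```python
def accuracy(correct_responses: int, total_items: int) -> float:
    """Accuracy of task performance: the fraction of items answered correctly."""
    if total_items <= 0:
        raise ValueError("total_items must be positive")
    return correct_responses / total_items

# Example: a participant spots 4 of the 8 differences in a visual task.
print(accuracy(4, 8))  # 0.5
```

Because the measure is a simple proportion, accuracies from tasks with different numbers of items remain directly comparable between conditions.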

2.3 Hypothesis

Various studies on the effect of multitasking on task performance have identified accuracy decrements. However, multiple perspectives arise in attempts at explaining this outcome. An overview of these perspectives provides a better understanding and will eventually lead to a general hypothesis concerning the online experiment conducted in this research. This section discusses a total of four perspectives.

To start with, experiments conducted by Buser and Peter (2012) and Adler and Benbunan-Fich (2012) examined the effect of three distinct multitasking conditions on task performance. Firstly, discretionary multitasking was tested by giving participants a free choice whether and how much they wanted to switch between tasks. Secondly, mandatory multitasking was tested by granting participants no choice and interrupting the tasks at hand after a particular period. Thirdly, sequentially solving tasks was tested by presenting the tasks in succession (no multitasking). Both studies concluded that the sequential problem-solving strategy outperformed the two multitasking attempts.

On a different note, Adler and Benbunan-Fich (2012) argued that the relationship between multitasking and task performance potentially depended on the perceived difficulty of the task and not solely on the different conditions. However, their experiment had lenient time controls, which potentially led to voluntary switching and biased the multitasking condition. Additionally, the control group that solely performed problem-solving tasks in the sequential multitasking condition was different from the experiment group. Consequently, it is likely that the inevitable high variability in human behaviour (Van Opstal et al., 2010) influenced their results.

Figure 3: Task Interruption Process (Abels, 2020). Task A is interrupted by Task B (involving an interruption lag); upon completion of Task B, Task A is resumed (involving a resumption lag).

Another perspective to consider in forming a hypothesis on the effect of multitasking on task performance is that of task goals. Deprez et al. (2013) argue that each task has a goal, and therefore when there are multiple tasks, there are multiple goals involved. Keeping all these goals in mind while solving an individual task causes a deterioration of concentration (Deprez et al., 2013). When no multitasking is involved and only one goal exists at all times, this should result in higher concentration levels and therefore a higher level of accuracy in task performance.

Research by Lee and Duffy (2015) suggests a different perspective on the decrease in accuracy when people attempt to multitask. According to the authors, interruption and resumption lags could explain the generally longer completion time when a task is interrupted. Figure 3 presents the task interruption process, where task A is the primary task and task B is the secondary task. Task B temporarily suspends involvement in task A, and upon completion of task B, task A is resumed. This sequential process is characteristic of an interruption process (Couffe and Michael, 2017). The interruption and resumption lags are cognitive costs and result in a decrease in efficiency. Wang et al. (2012) support this perspective with research on communication interruptions. In their research, the authors asked participants to communicate via instant messaging while solving a visual pattern-matching problem. The research concludes that performance on the problem-solving task decreased when the participant experienced interruptions.
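The cost structure of this interruption process can be made concrete with a toy model: the total completion time of an interrupted primary task is the task time itself, plus the secondary-task duration, plus the two lags. The sketch below is purely illustrative; the lag values are invented, not measurements from any of the cited studies:

```python
def completion_time(task_time: float, secondary_time: float,
                    interruption_lag: float, resumption_lag: float) -> float:
    """Total time to finish a primary task that is interrupted once:
    the task itself, the secondary task, and the two cognitive lags."""
    return task_time + interruption_lag + secondary_time + resumption_lag

# Uninterrupted baseline: 40 seconds of work costs exactly 40 seconds.
print(completion_time(40.0, 0.0, 0.0, 0.0))   # 40.0
# One interruption with hypothetical lag values: the lags add a pure
# cognitive overhead on top of the secondary task's own duration.
print(completion_time(40.0, 24.0, 1.5, 2.5))  # 68.0
```

The point of the model is that the interrupted case costs more than the sum of the two tasks alone; the lags are the efficiency loss that Lee and Duffy (2015) describe.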

Overall, there are different perspectives on the decrease in accuracy of task performance in multitasking research. Nevertheless, a general hypothesis on the effect of multitasking on performance emerges. This hypothesis is as follows:

Hypothesis. Technology-based multitasking, during the execution of a problem-solving task on a single device, decreases the accuracy of task performance.

Madjar and Shalley (2008) present a view contrary to this hypothesis. The authors claim that multitasking can lead to better results by allowing ideas to mature or by stimulating healthy breaks from difficult tasks. However, in their research ‘better results’ indicates higher creativity, which differs substantially from the parameters defined in the studies mentioned above and is therefore incomparable.

3 Experimental Design and Procedure

This section gives an overview of the experimental design and procedure. The first part describes the online environment in which the experiment took place. The second part elaborates on the experimental conditions and problem-solving tasks. The appendix includes more details on the experimental design.

3.1 Online Environment

An essential part of this research was the development of an online environment that was strictly controlled in terms of task completion and time, in which a participant could conduct the experiment on a personal technological device. The environment was hosted on the private website of the researcher (https://pspekreijse.com/), which was custom-designed for this experiment. The website included an explanatory and an experimental section.

The first part of the explanatory section provided a short introduction to the research question and a text to motivate the participant to take part in the experiment. The second part of this section presented the instructions on how the experiment would proceed. These instructions were as clear and to the point as possible to avoid any misunderstandings during the experiment. Additionally, the section provided clarifications regarding the data collection, asked for consent to voluntary participation, and let the participant confirm that it was the first time they participated in this experiment.

The experimental section consisted of two primary problem-solving tasks, each with an objectively correct answer. Because this research excluded a between-group design, in which the baseline would be determined by different subjects than the ones taking part in the experiment, the experiment used a design in which each participant solved a primary task while being interrupted (experimental condition) as well as one without interruption (control condition). Between the two conditions, participants could take a break. The order of the conditions was random. Additionally, which of the two primary tasks each participant encountered in which condition was also decided randomly, to exclude bias induced by possible differences in difficulty of the problem-solving tasks. These requirements resulted in four sub-experiments which together formed the entire experiment (Figure 4). The subjects participating in the experiment were randomly divided among the four sub-experiments by a random number generator integrated in the design of the website. It was not necessary to build in an attention test at the start of the experiment, due to the short total time (around five minutes) and the voluntary involvement of the participants.
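The random division of participants over the four sub-experiments can be sketched as follows. This is a minimal illustration, assuming a simple uniform draw per arriving participant; the actual website's generator may have used a different scheme (for example block randomization) to keep the group sizes exactly equal:

```python
import random

SUB_EXPERIMENTS = [1, 2, 3, 4]

def assign_sub_experiment(rng: random.Random) -> int:
    """Draw one of the four sub-experiments uniformly at random
    for an arriving participant."""
    return rng.choice(SUB_EXPERIMENTS)

# Simulate 272 arriving participants, as in the experiment; a fixed
# seed makes the simulation reproducible.
rng = random.Random(0)
assignments = [assign_sub_experiment(rng) for _ in range(272)]
print({g: assignments.count(g) for g in SUB_EXPERIMENTS})
```

With enough participants, a uniform draw yields roughly (though not exactly) equal group sizes, which is why a blocked scheme is sometimes preferred when exact balance matters.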

3.2 Experiment Conditions

The experiment had an experimental and control condition. In the control condition, the primary problem-solving task was not interrupted. In the experimental condition, the primary task was

(16)

Sub-experiment 1: experimental condition (Image 1, 20 sec; numeric task; Image 1, 20 sec), then control condition (Image 2, 40 sec).

Sub-experiment 2: control condition (Image 1, 40 sec), then experimental condition (Image 2, 20 sec; numeric task; Image 2, 20 sec).

Sub-experiment 3: experimental condition (Image 2, 20 sec; numeric task; Image 2, 20 sec), then control condition (Image 1, 40 sec).

Sub-experiment 4: control condition (Image 2, 40 sec), then experimental condition (Image 1, 20 sec; numeric task; Image 1, 20 sec).

Figure 4: Experimental Design


interrupted by a secondary problem-solving task. The primary problem-solving task was a simple visual task, whereas the secondary task was a more difficult numeric one. These tasks were selected to provoke an interleaved multitasking problem-solving strategy, which is most likely to occur when tasks have different durations and require different cognitive skills (Payne et al., 2007). The secondary task functioned as an interruption of the primary task. During this interruption, four numbers automatically appeared on the screen in the following order: 50, 7, 48, 79. The numeric problem involved adding these numbers and keeping the sum in mind. The first two numbers were visible for 4 seconds each, while the last two were visible for 8 seconds each. These time restrictions resulted in a total of 24 seconds for the numeric task before the primary visual task was resumed. The request to remember the sum added extra difficulty through the presence of multiple goals (Deprez et al., 2013).
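The timing of this interruption can be summarised in a short sketch. Only the numbers and the display durations come from the design above; the variable names are illustrative:

```python
# Sketch of the secondary (numeric) task described above.
numbers = [50, 7, 48, 79]   # shown one at a time, in this order
durations = [4, 4, 8, 8]    # seconds each number stays on screen

total_time = sum(durations)   # 24 seconds before the visual task resumes
correct_sum = sum(numbers)    # 184: the sum participants must keep in mind

print(total_time, correct_sum)  # prints: 24 184
```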

(a) Image 1 (b) Image 2

Figure 5: Primary (Visual) Problem-Solving Tasks (Vasquez, 2020)

The visual task consisted of two similar images. This type of task was chosen as the primary problem-solving task because it requires mental concentration and fits the same skill acquisition stage for all ages. The goal was to spot the differences between the two images. The visual tasks used in the experiment are shown in Figure 5a and Figure 5b. The level of difficulty was the main criterion during the selection of this task. In both tasks, the images contain a total of eight differences. To ensure no idleness or gaps in the experiment, the difficulty of the differences had to range from 'easy to spot' to 'extremely difficult to spot'. Hence, it was expected that no participants (or only a small percentage) would spot all the differences within the given time-frame.

Additionally, to ensure no idleness or gaps in the experiment, the total time for the primary task was set at 40 seconds (with the interruption at the half-way point in the experimental condition) and was displayed beneath the task. This time restriction, as well as the time restriction for the secondary task mentioned above, was implemented after a trial run of the experiment with ten subjects. The allotted time was intentionally shorter than the average amount of time a subject needed to complete a task. With such time restrictions, the environment


was free of gaps due to early termination of tasks (Adler and Benbunan-Fich, 2012). For the numeric task, the last two numbers were shown for a longer period than the first two, in order to reduce the participant's stress during this part of the experiment (based on feedback from eight test subjects). Although the main reason for the numeric task was to activate a different part of the brain and to induce the interleaved multitasking strategy, the intention was not to cause such high levels of stress that the subject was unable to perform any task at all.

The primary and secondary problem-solving tasks each had to be answered at the end of the task. For the primary task, the participant had to select a number between 0 and 10, with no time limit for answering. A drop-down selection menu with eleven options was used so that no typing errors could occur. For the secondary task, there was a numeric input field with a time limit of ten seconds; the screen displayed the time left to answer. When the time was up, the experiment automatically moved to the next section and no value was saved. With this time limit, it was not possible to cheat by, for example, taking pen and paper and calculating the sum. Moreover, the answer to the secondary task ultimately does not matter: its only purpose is to cause an effective interruption, so whether or not the sum was correct is neither relevant for nor used in the data analysis.


4 Data and Analysis

This section expands upon the data gathered in the online experiment. First, the data and the collection process are described. Then, the variables of the research are elaborated upon. All statistical tests were performed using SPSS, a statistical software program designed for complex data analysis.

4.1 Data

[Bar chart: number of participants in each of the four sub-experiments, all of roughly equal size.]

Figure 6: Number of Participants per Sub-Experiment

We recruited 272 subjects to participate in the experiment over a period of two weeks. Participants received no incentive or reward for taking part, and the experimental sessions took place whenever it suited the subject. There was no specific target group. The participants were randomly assigned to the sub-experiments, resulting in sub-groups of almost equal size (Figure 6).

Before starting with the problem-solving tasks, the experiment began with a pre-test questionnaire to collect information for data analysis (age, gender, the device used to participate in the experiment, and the amount of electronic usage per week). No data had to be excluded from the analysis, and none of the participants was below the ten-year age limit.

The participants were aged between 10 and 85 (µ = 33.20, σ = 15.37). A slightly higher number of females participated in the experiment (141), as opposed to males (127). Four subjects did not reveal their gender and remain undisclosed. To verify the randomization of participants between the sub-experiments, and to rule out alternative explanations, pre-existing differences among the sub-groups were checked. The continuous variable (age) showed no systematic variation. The discrete variables (gender, electronic usage, and device used) were coded as dichotomous variables. Separate chi-square independence analyses for these variables show that all groups were equally distributed over all discrete pre-test variables (no significance value below 0.05), e.g. gender (χ2 = 3.517; p = 0.742), the amount of electronic usage (χ2 = 12.725; p = 0.389), and device used (χ2 = 7.623; p = 0.267). This rules out any differences in these variables between the participant groups that could influence the results when comparing the sub-experiments.
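A chi-square test of independence of this kind can be reproduced in outline as follows. The contingency table below is invented for illustration (the raw per-group counts are not reprinted here); only its margins match the reported totals of 141 female, 127 male, and 4 undisclosed participants. scipy's `chi2_contingency` stands in for the SPSS procedure:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: gender (rows: female, male, undisclosed)
# by sub-experiment (columns: Exp. 1-4). Cell counts are invented; only
# the row totals (141, 127, 4) and grand total (272) match the thesis.
observed = np.array([
    [34, 35, 38, 34],   # female
    [30, 31, 33, 33],   # male
    [ 1,  0,  1,  2],   # undisclosed
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
# A p-value above 0.05 gives no reason to reject independence, i.e. the
# pre-test variable is distributed evenly across the sub-experiments.
```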


4.2 Results

This section presents the data analysis and results. Additionally, it addresses possible endogeneities in the experimental design. The analysis attempts to disprove the null hypothesis that there is no statistically significant difference between the accuracy of task performance in the experimental condition (multitasking) and that in the control condition (non-multitasking):

H0: µ1 = µ2

where µ1 is the population mean of the accuracy of task performance in the experimental condition, and µ2 is the population mean of the accuracy of task performance in the control condition.

[Histogram: frequency (%) of the number of spotted differences (0 to 10) in the NMT and MT conditions.]

Figure 7: Answers Primary Problem-Solving Task

Figure 7 presents the frequency of the answers to the primary problem-solving task in the non-multitasking (NMT) and the multitasking (MT) condition. It reveals that the intention behind the primary problem-solving task (that no participant, or only a small percentage, should be able to spot all the differences within the given time-frame) was attained. No subject spotted more than eight differences (which was impossible, as the maximum number of visible differences between the images was eight), and the percentage of subjects that spotted exactly eight differences was small in both the non-multitasking (0.37%) and multitasking (1.10%) conditions.

Descriptive statistics of the primary variable of interest, the difference between the accuracy of task performance in the non-multitasking (NMT) and the multitasking (MT) condition, are


Table 1: Descriptive Statistics Accuracy Mean

         NMT    MT     Decrease (%)   t       df    Sig. (2-tailed)
Exp. 1   4.03   3.28   18.70          3.728    64   .000
Exp. 2   4.05   3.91    3.36          0.823    65   .414
Exp. 3   4.29   3.38   21.36          5.095    71   .000
Exp. 4   4.22   4.12    2.41          0.552    68   .583
Total    4.15   3.67   11.60          5.137   271   .000

NMT = Non-Multitasking Condition, MT = Multitasking Condition

presented in Table 1 (values are rounded). To test H0, a paired-samples t-test on the accuracy of task performance in the experimental (multitasking) and the control (non-multitasking) condition is run. This test was chosen because it compares a measurement taken under two different conditions to determine whether there is statistical evidence that the mean difference between paired observations on a particular outcome is significantly different from zero (Kent State University Libraries, 2020c). If the calculated t-value is greater than the critical t-value with df = 271 at the 95% confidence level, H0 is rejected and it can be concluded that the means are significantly different.
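The test can be reproduced in outline as follows. This is a sketch with invented scores, since the raw data are not reprinted here, and scipy's `ttest_rel` stands in for the SPSS procedure:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Invented paired scores for illustration: each of 272 participants' number
# of spotted differences in the non-multitasking (nmt) and multitasking (mt)
# condition. The thesis reports t(271) = 5.137, p = 5.342E-7 on the real data.
nmt = rng.normal(loc=4.15, scale=1.8, size=272).clip(0, 8).round()
mt = rng.normal(loc=3.67, scale=1.8, size=272).clip(0, 8).round()

# Paired-samples t-test: is the mean within-subject difference zero?
t_stat, p_value = ttest_rel(nmt, mt)
print(f"t({len(nmt) - 1}) = {t_stat:.3f}, p = {p_value:.3g}")
# H0 (equal mean accuracy in both conditions) is rejected when p < 0.05.
```

Note that the two invented samples are drawn independently here purely for illustration; in the real data each pair comes from the same participant, which is what justifies the paired test.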

[Error-bar plot: 95% confidence intervals per sub-experiment and in total.]

Figure 8: 95% Confidence Interval Plots for Accuracy

On average, subjects found 4.15 differences (51.88%) in the NMT condition, as opposed to 3.67 (45.88%) in the MT condition. The percentile decrease in accuracy ranges within the sub-experiments from 2.41% to 21.36%. Overall, a significant difference between the accuracy in the NMT and MT conditions (t(271) = 5.137, p = 5.342E-7) is seen. This disproves the null hypothesis H0: there is significant evidence against the hypothesis that the accuracy of task performance remained the same between the experimental and the control condition. The error bars for the 95% confidence intervals of the sub-experiments are presented in Figure 8. Additionally, these values show that experiments 1 and 3 differ from experiments 2 and 4.
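The percentile decreases above follow directly from the condition means. A small check, using the rounded means from Table 1 (so the last digit can differ slightly from the table, which is computed from unrounded means):

```python
def percent_decrease(nmt_mean: float, mt_mean: float) -> float:
    """Relative drop in accuracy from the NMT to the MT condition, in percent."""
    return (nmt_mean - mt_mean) / nmt_mean * 100

# Overall: 4.15 of 8 differences (51.88%) in NMT vs 3.67 (45.88%) in MT.
print(round(percent_decrease(4.15, 3.67), 2))  # ~11.57 (Table 1: 11.60)
print(round(percent_decrease(4.03, 3.28), 2))  # ~18.61 for Exp. 1 (Table 1: 18.70)
```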


Table 2: Multiple Comparisons

                                   Mean Diff.  Std.           95% Conf. Interval
Dep. Var.  Exp. (I)  Exp. (J)     (I-J)       Error   Sig.   Lower B.   Upper B.
NMT        Exp. 1    Exp. 2       -.01        .221    1.000    -.59       .56
                     Exp. 3       -.26        .216     .623    -.82       .30
                     Exp. 4       -.19        .218     .828    -.75       .38
           Exp. 2    Exp. 1        .01        .221    1.000    -.56       .59
                     Exp. 3       -.25        .215     .663    -.80       .31
                     Exp. 4       -.17        .217     .859    -.73       .39
           Exp. 3    Exp. 1        .26        .216     .623    -.30       .82
                     Exp. 2        .25        .215     .663    -.31       .80
                     Exp. 4        .07        .213     .985    -.48       .62
           Exp. 4    Exp. 1        .19        .218     .828    -.38       .75
                     Exp. 2        .17        .217     .859    -.39       .73
                     Exp. 3       -.07        .213     .985    -.62       .48
MT         Exp. 1    Exp. 2       -.63*       .239     .042   -1.25      -.01
                     Exp. 3       -.10        .234     .975    -.70       .51
                     Exp. 4       -.84*       .236     .003   -1.45      -.23
           Exp. 2    Exp. 1        .63*       .239     .042     .01      1.25
                     Exp. 3        .53        .233     .102    -.07      1.14
                     Exp. 4       -.21        .235     .816    -.82       .40
           Exp. 3    Exp. 1        .10        .234     .975    -.51       .70
                     Exp. 2       -.53        .233     .102   -1.14       .07
                     Exp. 4       -.74*       .230     .008   -1.34      -.15
           Exp. 4    Exp. 1        .84*       .236     .003     .23      1.45
                     Exp. 2        .21        .235     .816    -.40       .82
                     Exp. 3        .74*       .230     .008     .15      1.34

Based on observed means. The error term is Mean Square(Error) = 1.867. * The mean difference is significant at the .05 level.

To determine whether there are significant differences in the mean values between multiple groups, a one-way ANOVA (analysis of variance) test was performed. This test compares the means of two or more independent groups in order to determine whether there is statistical evidence that the associated means are significantly different (Kent State University Libraries, 2020b); hence, whether there are differences between the sub-experiments and whether these differences are significant. The upper part of Table 2 shows the NMT condition, where the significance column contains no value beneath the 0.05 threshold. This absence indicates that there are no differences between the sub-experiments in the accuracy results within the NMT condition. This observation is supported by a chi-square analysis of the accuracy values in the NMT condition (χ2 = 12.017; p = 0.939). A chi-square analysis is used to determine whether there is an association between variables, hence whether the sub-experiments in the non-multitasking condition are related (Kent State University Libraries, 2020a). Consequently, there is no endogeneity in the experiment design regarding the NMT condition.
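The ANOVA step can be sketched as follows. The per-group scores are invented for illustration (group sizes follow the degrees of freedom in Table 1, and group means follow its NMT column), and scipy's `f_oneway` stands in for the SPSS procedure:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Invented accuracy scores per sub-experiment, used to illustrate the
# one-way ANOVA on the NMT-condition accuracies. Means and group sizes
# (65, 66, 72, 69 participants) mirror Table 1; the spread is assumed.
group_specs = [(4.03, 65), (4.05, 66), (4.29, 72), (4.22, 69)]
groups = [rng.normal(loc=mean, scale=1.4, size=n).clip(0, 8)
          for mean, n in group_specs]

f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would indicate no significant difference between
# the sub-experiment means, matching the NMT result reported above.
```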

However, this differs for the MT condition, where a chi-square dependence analysis shows χ2 = 49.393 and p = 0.002. This indicates that there are differences between the sub-experiments in the experimental condition. The lower part of Table 2 shows the MT condition, where some sub-experiment comparisons show a significant difference. The rows that show these comparisons


are marked with an asterisk. There is a significant difference in the answers given between these sub-experiments: the pairs 1 & 2, 1 & 4, and 3 & 4. In addition, the significance of the difference between experiments 3 and 2 is close to 0.05. Hence, there is a possible endogeneity in the experimental design between experiments 1 & 3 on the one hand and 2 & 4 on the other. Moreover, these differences are also visible in the significance of the decrease in accuracy of task performance in Table 1. Although all four sub-experiments show a decrease in accuracy, experiments 1 and 3 show p-values of respectively 4.110E-4 and 3.000E-6, which indicate a significant decrease. Experiments 2 and 4, on the other hand, show p-values that do not indicate a significant decrease.

The experiment design (Figure 4) can explain this contrast. The main difference between experiments 1 & 3 and 2 & 4 is the order in which the NMT and MT conditions were presented. Experiments 1 and 3 displayed the experimental condition first: immediately after starting the experiment, the subject performed the primary problem-solving task while being interrupted. This order was reversed for experiments 2 and 4, which displayed the control condition first and the experimental condition second. Additionally, it is relevant to note that the differences between experiments 1 & 4 (which use image 1 in the experimental condition and image 2 in the control condition) and 2 & 3 (which use image 2 in the experimental condition and image 1 in the control condition) are not significant. This resemblance indicates that the selected images for the primary problem-solving task did not differ in difficulty or perception.


[Two timelines, 'Multitasking Condition First' and 'Multitasking Condition Second': in both, Task A is interrupted by Task B, with an interruption lag before Task B starts and a resumption lag before Task A resumes.]

Figure 9: Different Task Interruption Processes

5 Discussion

This research used an online experiment to study the relationship between technology-based multitasking and task performance on a single device. The analysis performed on the gathered data showed a difference in significance between the decrease in accuracy in sub-experiments 1 & 3 and 2 & 4 in the multitasking condition. This section discusses three possible explanations for this difference and why the accuracy of task performance while multitasking is more dependent on these specific factors than the accuracy of task performance while non-multitasking. Additionally, four limitations in the interpretation of the results and the research design are acknowledged.

5.1 Explanations

Overall, each sub-experiment shows a decrease in accuracy when a subject attempts to multitask; however, this decrease was only significant when the experimental condition was displayed first. This difference suggests that the relation between multitasking and task performance depends upon both the experimental design and the participant's interpretation of how to solve the primary problem-solving task.

The first explanation for this difference can be deduced from comments participants left after the experiment regarding elevated stress levels. Examples of comments are "I still knew all the numbers, but I definitely needed more than 10 seconds to sum them and type in the answer.", "Adding the numbers was pretty difficult under pressure...", and "This was so stressful!". These elevated stress levels after the interruption of the primary task might have caused a longer resumption lag. This resumption lag may have been longer when the interruption occurred in the first part of the experiment than when it took place in the second part. This difference is illustrated in Figure 9.

A second explanation might be that in the first part of the experiment participants were less clear on what to do than in the second part. Thus, in the first seconds of the first part of the


Sub-experiments 1 & 3: experimental condition (Image A, 20 sec; numeric task; Image A, 20 sec), then control condition (Image B, 40 sec).

Sub-experiments 2 & 4: control condition (Image A, 40 sec), then experimental condition (Image B, 20 sec; numeric task; Image B, 20 sec).

Figure 10: Experimental Design: Different Order of Conditions

experiment, participants were still figuring out what to do, which left them with less time to look at the first image before being stressed by the interruption. Because the amount of time lost in the first part of the experiment was comparatively much larger when the experimental condition was displayed first than when the control condition was displayed first, this might have had a greater negative effect on the accuracy of task performance. The differences are visible in Figure 10. This possible explanation could indicate that the instructions for the primary task were not read attentively or were not clear enough.

Third, an explanation might involve pattern recognition. The theoretical context discussed the multitasking perspective of task goals, where Deprez et al. (2013) argue that each task has a goal, and therefore when there are multiple tasks there are multiple goals, which causes a deterioration of concentration. Their study examined thirty-three participants with functional brain imaging (fMRI) to investigate the neural substrates of multitasking. They found enhanced brain activity during multitasking in the same region during a short-term memory task as during a visual temporal same-different task. Deprez et al. (2013) conclude that this indicates the involvement of multitask-specific components of the brain in holding visual stimuli in short-term memory. Applying this conclusion to the present paper, it might explain the differences in significance of the accuracy decrease. In simple terms, when someone performs a task, this action creates a pathway in the brain. A recognition pattern is activated when the same task is presented again, which allows a person to process the same sort of information more quickly. So, when a participant has to perform the same task a second time, there are stronger pathways in the brain for that task, and its execution is quicker.

To summarize, the analysis showed a significant decrease in accuracy of task performance only when the experimental condition was displayed first. There are three possible explanations for this phenomenon. First, the elevated stress levels during the experiment (after the interruption of the primary task), mentioned by participants in comments, might have caused a longer resumption lag. Second, the instructions for the primary task might not have been read attentively or were not clear enough. Third, stronger pathways in the brain caused by pattern recognition led to a quicker execution of the primary task when it was displayed a second time.

5.2 Limitations

For a qualitative interpretation of the results, there are four main limitations to be acknowledged and discussed. The first limitation derives from the selection of the research design, because unknown parameters from outside the online experimental environment could have influenced the empirical findings. For example, a subject could have experienced an external interruption during the experiment, producing an unmeasurable effect on performance. Additionally, as discussed in the theoretical framework, the research design deliberately used an external interruption within the device itself. A recommendation for future studies is to additionally allow internal interruptions (and acting upon them), with a research design capable of measuring these interruptions, in order to clarify their effect on the accuracy of task performance. As in everyday life, internal interruptions constitute a significant part of performance behaviour. Highly controlled scenarios, by contrast, do not necessarily reflect how individuals are affected by interruptions in the real world. People perform both scheduled and spontaneous tasks during their day, and although this is less controllable, the lack of ecological validity in an experiment affects the possible interpretations of the results.

Secondly, it should be noted that the subjects contributing to this experiment were not entirely randomly selected, as most of them came from the same social environment as the researcher. Even though the pre-test variable age was equally distributed, nothing is known about the personal background of the participants, such as level of education or upbringing, that might have influenced the test results. This personal information was not collected for privacy reasons.

Thirdly, participants did not gain or lose anything based on the result of their experiment. There was no compensation for participation, and no reward for performing better. Consequently, participants had no external motivation to do their best and stay focused during the experiment. Nevertheless, the reviews and feedback on the test show that participants took it seriously.

Finally, there is a potential limitation regarding the answer field of the primary visual task. The correct answer for the visual task is eight, and the range of numbers subjects were able to choose from ran from zero to ten. Consequently, it was not possible to make a mistake in one direction (under-estimation) but only in the other direction (over-estimation). When a participant sees this range and has no interest in coming up with a valid answer, the most likely random answer might be the average: five. This is close to the peak of the distribution of answers observed. Although participants took the test seriously, if a participant was not mentally engaged enough with the experiment, an effect of this kind may have interfered with


the results of this research.

Overall, the limitations arising from the choice of experiment conditions (an online experiment in an uncontrolled external environment), the characteristics of the participants, the uncertainty about how seriously participants executed the experiment, and the experimental setup suggest that caution is necessary when the results of this experiment are generalized to other populations and other online settings.


6 Conclusion

This section summarises the findings and concludes with recommendations for further research and contributions to the development of Artificial Intelligence technology.

Employees answer emails while writing code, students check incoming texts while watching a lecture, and children use social media while doing homework. Even though people attempt to multitask daily, little is known about the accuracy of task performance while multitasking on a single technological device. The existing literature shows unambiguously that multitasking affects task performance. In recent years, scientists have designed online experiments to substantiate the assumptions surrounding the consequences of technology-based multitasking, using both laboratory and real-world settings. With different predefined assumptions and experimental conditions to create a multitasking environment, data gathered in both kinds of experiments substantiate that task performance steadily decreases as multitasking behaviour increases. Overall, a general hypothesis on the effect of multitasking on task performance emerges: technology-based multitasking, during the execution of a problem-solving task on a single device, decreases the accuracy of task performance.

This study contributes to the literature on the effect of multitasking on task performance by providing new evidence for this hypothesis, through an experiment focused on performing multiple tasks on a single technological device. An analysis of the newly gathered data provides an answer to the research question posed in the introduction of this paper: "What is the impact of technology-based multitasking on a single device on the accuracy of task performance?". The answer is that technology-based multitasking on a single device has a negative impact on the accuracy of task performance.

The experiment had a control (non-multitasking) and an experimental (multitasking) condition, comprising three problem-solving tasks. Different orders of these conditions and tasks resulted in four sub-experiments. Additional analysis suggests that the relationship between technology-based multitasking and task performance depends upon the experimental design. Each sub-experiment shows a decrease in accuracy of task performance when a subject attempts to multitask, but this decrease was only significant when the experimental condition was displayed first. There are three possible explanations for this phenomenon. First, the elevated stress levels during the experiment (after the interruption of the primary task), mentioned by participants in comments, might have caused a longer resumption lag. Second, the instructions for the primary task might not have been read attentively or were not clear enough. Third, stronger pathways in the brain caused by pattern recognition led to a quicker execution of the primary task when it was displayed a second time. The limitations arising from the choice of experiment conditions (an online experiment in an uncontrolled external environment), the characteristics of the participants, and the uncertainty about how seriously participants executed the experiment suggest that caution is necessary when the results of this experiment are generalized to other populations and other online settings.


Recommendations for further research on the data provided by this experiment include analysing possible differences in the effect of multitasking based on technology usage (the average time spent on a technological device) and the influence of the technological device used during the experiment (mobile phone, tablet, or computer). These variables could have an additional influence on the effect of multitasking on performance and require a more extensive analysis. Additionally, one option to extend this study is to add another condition (a voluntary choice to switch between tasks) and compare the resulting performance. Another option is to allow participants to include personal tasks (such as checking an incoming text or using social media). This would create a more representative online environment and could potentially form a bridge between laboratory and real-world experiments.

A further extension of this study might include a method to examine the observed difference in accuracy of task performance between having the control condition shown first or second, for example by conducting the same experiment with new subjects and a slightly different experimental design. Replacing the four sub-experiments with two options, in which a participant encounters either the control or the experimental condition twice, generates a new data set. Comparing this data set with the data generated in this study might provide insights into the origin of the difference in accuracy of task performance.

Technological devices enable and amplify multitasking to a great extent, as they can keep track of all online information at all times. This research suggests four main contributions to the development of Artificial Intelligence technology regarding multitasking abilities. First, it might be beneficial for higher task performance rates on technological devices to develop smart information and communication technology that knows when an employee or student can be interrupted (for example, when a task has just been completed). This extension would create sequential rather than interleaved multitasking and thus potentially optimise multitasking performance. Second, embedding an informative section in the technological device to provide users with insight into their multitasking behaviour (and the decrease in task performance this behaviour entails) might be beneficial. This extension could alter the way the device is used in certain situations. Third, some researchers think of Artificial Intelligence as a means to better understand intelligence, including human intelligence. This research contributes to that objective by generating data that provide insights into the effect of multitasking on task performance on a single electronic device. Last, recent research on the design of decision support systems that make recommendations based on the aggregated input of several human experts has incorporated the assumption that this input may be incomplete, due to variance in the multitasking of the experts (Terzopoulou and Endriss, 2019). This thesis contributes to understanding how reliable the information provided by such experts will be, and can consequently help create better aggregation mechanisms.


References

Abels, E. (2020). Technology-based multitasking from the lab to the real world (Master’s Thesis). University of Amsterdam. Available at https://scripties.uba.uva.nl/search?id=711760. Adler, R. F. and Benbunan-Fich, R. (2012). Juggling on a high wire: Multitasking effects on

performance. International Journal of Human-Computer Studies, 70(2):156–168.

Ahmed, A., Ahmad, M., Stewart, C. M., Francis, H. W., and Bhatti, N. I. (2015). Effect of distractions on operative performance and ability to multitask—a case for deliberate practice. The Laryngoscope, 125(4):837–841.

Ariely, D. and Zakay, D. (2001). A timely account of the role of duration in decision making. Acta Psychologica, 108(2):187–207.

Benbunan-Fich, R., Adler, R. F., and Mavlanova, T. (2011). Measuring multitasking behavior with activity-based metrics. ACM Transactions on Computer-Human Interaction (TOCHI), 18(2):1–22.

Buser, T. and Peter, N. (2012). Multitasking. Experimental Economics, 15(4):641–655.

Couffe, C. and Michael, G. A. (2017). Failures due to interruptions or distractions: A review and a new framework. American Journal of Psychology, 130(2):163–181.

David, P., Xu, L., Srivastava, J., and Kim, J.-H. (2013). Media multitasking between two conversational tasks. Computers in Human Behavior, 29(4):1657–1663.

Deprez, S., Vandenbulcke, M., Peeters, R., Emsell, L., Amant, F., and Sunaert, S. (2013). The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task. Neuropsychologia, 51(11):2251–2260.

Downs, E., Tran, A., McMenemy, R., and Abegaze, N. (2015). Exam performance and attitudes toward multitasking in six, multimedia–multitasking classroom environments. Computers & Education, 86:250–259.

Fitts, P. M. and Posner, M. I. (1967). Human Performance. Brooks. Cole, Belmont, CA, 5:7–16. Gajos, K. Z. and Chauncey, K. (2017). The influence of personality traits and cognitive load on the use of adaptive user interfaces. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, pages 301–306.

Kent State University Libraries (2020a). SPSS Tutorials: Chi-Square Test of Independence. Accessed June 1, 2020. Available at https://libguides.library.kent.edu/SPSS/ChiSquare.

Kent State University Libraries (2020b). SPSS Tutorials: One-Way ANOVA. Accessed June 1, 2020.

Kent State University Libraries (2020c). SPSS Tutorials: Paired Samples t-test. Accessed June 1, 2020. Available at https://libguides.library.kent.edu/SPSS/PairedSamplestTest.

Lee, B. C. and Duffy, V. G. (2015). The effects of task interruption on human performance: A study of the systematic classification of human behavior and interruption frequency. Human Factors and Ergonomics in Manufacturing & Service Industries, 25(2):137–152.

Madjar, N. and Shalley, C. E. (2008). Multiple tasks’ and multiple goals’ effect on creativity: Forced incubation or just a distraction? Journal of Management, 34(4):786–805.

May, K. E. and Elder, A. D. (2018). Efficient, helpful, or distracting? a literature review of media multitasking in relation to academic performance. International Journal of Educational Technology in Higher Education, 15(1):13.

Pachler, D., Kuonath, A., Specht, J., Kennecke, S., Agthe, M., and Frey, D. (2018). Workflow interruptions and employee work outcomes: The moderating role of polychronicity. Journal of Occupational Health Psychology, 23(3):417.

Payne, S. J., Duggan, G. B., and Neth, H. (2007). Discretionary task interleaving: Heuristics for time allocation in cognitive foraging. Journal of Experimental Psychology: General, 136(3):370.

Salvucci, D. D., Taatgen, N. A., and Borst, J. P. (2009). Toward a unified theory of the multitasking continuum: From concurrent performance to task switching, interruption, and resumption. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1819–1828.

Sonnentag, S., Reinecke, L., Mata, J., and Vorderer, P. (2018). Feeling interrupted—being responsive: How online messages relate to affect at work. Journal of Organizational Behavior, 39(3):369–383.

Spekreijse, P. (2020). Multitasking experiment. Accessed May 1, 2020. Available at https://www.pspekreijse.com/multitasking-experiment/.

Terzopoulou, Z. and Endriss, U. (2019). Optimal truth-tracking rules for the aggregation of incomplete judgments. In Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT), pages 298–311.

Van Opstal, F., Gevers, W., Osman, M., and Verguts, T. (2010). Unconscious task application. Consciousness and Cognition, 19(4):999–1006.

Vasquez, M. (2020). Can you spot the difference in these 10 pictures? Accessed April 24, 2020. Available at https://www.rd.com/spot-the-difference/.


Wang, Z., David, P., Srivastava, J., Powers, S., Brady, C., D’Angelo, J., and Moreland, J. (2012). Behavioral performance and visual attention in communication multitasking: A comparison between instant messaging and online voice chat. Computers in Human Behavior, 28(3):968–975.

Wilhelm, O. and Schulze, R. (2002). The relation of speeded and unspeeded reasoning with mental speed. Intelligence, 30(6):537–554.

Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., and Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58(1):365–374.

Zhang, Y., Goonetilleke, R. S., Plocher, T., and Liang, S.-F. M. (2005). Time-related behaviour in multitasking situations. International Journal of Human-Computer Studies, 62(4):425–455.


Appendix

(a) Introduction

(c) Pre-Test Questionnaire

(d) Control Condition

(f) Break

(g) Experimental Condition - Visual Task

(i) Experimental Condition - Visual Task

(j) Experimental Condition - Visual Task Answer

(l) Wrap Up
