Using interactive computer systems in autobiographical writing about life experiences

Supervisor: Dr. F.M. Nack

Second Reader: Dr. A.C. Nusselder

Date: June 16, 2015

Using interactive computer systems in autobiographical

writing about life experiences

by

Jeroen Wever (#10108939)

Master of Information Science – Human Centered Multimedia

University of Amsterdam – Faculty of Science (FNWI)


Using interactive computer systems in autobiographical

writing about life experiences

Jeroen Wever

University of Amsterdam,

Science Park 904, 1098 XH, Amsterdam, Netherlands

jeroenrwever@gmail.com

ABSTRACT

Candidates taking a System for Identifying Motivated Abilities (SIMA) assessment find it difficult to write descriptions of life activities they enjoyed doing. This study was conducted to explore and define the parameters that determine the quality of writing about life experiences, which is essential to the outcome of a SIMA-assessment. A prototype was built to explore how SIMA-candidates could be supported in improving the quality of an activity description. An experiment was conducted to evaluate the prototype and revealed that the prototype improved performance with respect to quality, although the overall effect was not significant.

1. INTRODUCTION

In the modern business age, more organizations have come to believe that empowering employees improves their productivity, motivation and satisfaction. As a result, employees improve their work performance and organizations perform better (Fernandez & Moldogaziev, 2013). The System for Identifying Motivated Abilities (SIMA) can help a person gain insight into his or her motivational needs, and from there one can start to improve one's performance. SIMA is a proprietary, non-psychological, qualitative method for identifying a person's Motivated Abilities Pattern (MAP). It was developed by Arthur F. Miller (Philips & Kessel, 2013), founder of the SIMA International Group.

A SIMA-assessment is a time-consuming process: conducting the whole assessment takes about four to six weeks. It is important that the candidate takes time to discover and describe his or her activities, because the quality of the assessment's outcome depends largely on these descriptions. Descriptions in an assessment are based on the author's life experiences and feelings, so they can be considered autobiographical writing. An autobiography can be seen as a literary version of a self-portrait (Howarth, 1974), in which the author is both the writer and the subject of the writing. The writer must alternately switch between the self as the subject of the story and the self in the activity of writing the story. That is what distinguishes autobiographical writing from writing fiction or nonfiction.

Candidates experience this as a difficult process and, as a result, the descriptions they provide do not capture the essence of the chosen activities. The consequence of this shortcoming is that more sessions with the coach are necessary, which prolongs the overall process considerably. Because writing the descriptions is the most important part of the assessment, it is useful to support candidates while they write them. A writing tool that adds domain knowledge and guides the writing with instructions and self-reflection will help candidates write better descriptions.

This study was conducted to explore and define the term quality in relation to writing about life experiences, which is part of a SIMA-assessment and essential to its outcome. It also explores the possibilities for improving the quality of these writings. This paper first describes the related work. Second, it presents a requirements analysis, which was used to explore and define quality.

Third, it describes a prototype that was based on the requirements analysis. Fourth, it describes an experiment that explores the usefulness of the prototype. Finally, the paper ends with a discussion section and a conclusion with proposals for future work.

2. RELATED WORK

According to Ryan & Deci (2000b) in Self-Determination Theory (SDT), a person has innate psychological needs: the need for competence (conquering optimal challenges), relatedness (involvement and belongingness) and autonomy (freedom from excessive control). Blumberg & Pringle (1982) define three dimensions of work performance, stated in Table 1. All three dimensions must be present in order for performance to occur. When all dimensions of work performance and the needs from SDT are properly fulfilled, they permit the optimal development and functioning of a person. However, before a person is able to fulfill these needs and dimensions, he or she has to decide to do so; only once this decision is made can one start to discover.

Table 1: Dimensions of work performance (Blumberg & Pringle, 1982, pp. 563, 565)

Capacity to perform: Refers to the physiological and cognitive capabilities that enable an individual to perform a task effectively (e.g. health, skills, intelligence, motor skills).

Willingness to perform: Refers to the psychological and emotional characteristics that influence the degree to which an individual is inclined to perform a task (e.g. motivation, norms, and values).

Opportunity to perform: Refers to the particular configuration of the field of forces surrounding a person and his or her task that enables or constrains that person's task performance and that is beyond the person's direct control (e.g. tools, equipment, leader behavior, rules).

Insight into a person's needs and dimensions of work performance can be gained through the use of SIMA. SIMA assumes that every person has been endowed with a uniquely motivated and purposeful behavioral pattern (Philips & Kessel, 2013). When people live and work in accordance with their pattern, they experience remarkably productive and meaningful lives. With an assessment one can gain insight into one's MAP, which identifies five aspects (stated in Table 2) of a person's functioning.

SIMA focuses on one’s motivation. Ryan & Deci (2000a, p. 54) define motivation as: “To be motivated means to be moved to do something. A person who feels no impetus or inspiration to act is thus characterized as unmotivated, whereas someone who is energized or activated toward an end is considered motivated.” Ryan & Deci’s (2000b, p. 55) SDT distinguishes between intrinsic and extrinsic motivation. Intrinsic motivation refers to "...doing something because it is inherently interesting or enjoyable". This type of motivation emerges from within a person: people who perform an activity with intrinsic motivation are motivated because they feel satisfied while doing it. Extrinsic motivation refers to "...doing something because it leads to a separable outcome". This type of motivation emerges from external rewards: one is motivated to reach a certain outcome that leads to a reward. SIMA does not distinguish between intrinsic and extrinsic motivation, but assumes that one's personality drives one’s motivation and that one's personal characteristics determine the arising and continuation of one’s motivation (Philips & Kessel, 2013).

Porter, Bigley & Steers (1975, p. 1) note that there is no all-embracing concept or model that describes motivation. They indicate how the term “motivation” has been used in different ways by presenting a brief selection of representative definitions, in which they distinguish three common denominators: (1) motivation is about energizing human behavior; (2) motivation is about directing human behavior; (3) motivation is about sustaining behavior. These similarities are important in understanding human behavior in a work environment (work behavior). According to these definitions, motivation consists of three components: (1) there is an energizer that drives people to show certain behavior, and aspects of one's surroundings can influence this behavior; (2) it has a goal: human behavior is always directed; (3) one's surroundings provide feedback, which is used to adjust or abandon one's goal.

Table 2: Motivated Abilities pattern aspects (Philips & Kessel, 2013, p. 25)

Motivating abilities: The motivated abilities the person naturally and instinctively uses to accomplish anything that matters to him/her.

Motivating subjects: The subject matter that one most naturally works on, works with, or works through.

Motivating circumstances: The circumstances or environmental conditions in which he or she thrives.

Motivating relations: The roles and relationships one prefers to have relative to others.

Central motives: The central motivational thrust or "payoff" that drives the person's behavior.

Discovering a person's MAP involves a qualitative analysis of a candidate's life activities.

In theory, a SIMA-assessment is organized in three phases (Philips & Kessel, 2013). In the first phase, the candidate and coach have an intake meeting in which the coach explains the process of the assessment, makes agreements on deliverables and discusses the candidate's intended goals for the assessment. These goals are taken into account in the remainder of the assessment. In this phase, the candidate also has to write eight stories about activities he or she experienced in life. For describing activities, the candidate receives an activity form that gives guidance on how to write about the activities of choice. After the candidate has written his or her stories, they are sent to the coach. In the second phase, the candidate and coach have an interview in which they discuss the activities, to clarify the stories. Depending on how well the activities are described, this meeting takes about two to four hours. In the third phase, the coach analyses the stories and creates a draft version of the candidate's MAP. Afterwards, the candidate and coach discuss the draft version and the coach explains his or her decisions in creating the MAP. If needed, they jointly propose adjustments to the draft version. The coach then creates the final report and includes final recommendations to complete the MAP.

Since writing is one of the most complex human activities (Norhafizah, Zakaria, Aziz, Nor Rizan, & Maasum, 2010), a person can be supported through the use of a computer system. A number of computer-based writing systems are available. One group generates fictional stories, using algorithms to create coherent and accurate stories. TALE-SPIN (Meehan, 1977) uses problem solving by goal-oriented characters to generate stories in natural language. MINSTREL (Pérez y Pérez & Sharples, 2004) uses case-based reasoning to generate stories about King Arthur and his Knights of the Round Table. MEXICA (Pérez y Pérez & Sharples, 2004) uses a computer model based on an engagement-reflection account of creative writing and generates stories about the indigenous people of the Valley of Mexico, better known as the Mexicas. BRUTUS (Pérez y Pérez & Sharples, 2004) generates stories about predefined themes and uses these themes as its main approach to story generation.

Another group supports a writer with story structuring. Dramatica (Write Brothers, 1994) presents a user with pre-defined forms to fill in, for example for characters and their characteristics, and uses its own story paradigm to structure the complete story. Inform7 (Nelson, 2009) and Twine (Klimas, 2009) are systems for writing interactive fiction, a literary form that involves programming a computer so that it presents a reader with a text that can be explored by interacting with the story, resulting in different paths. These systems aim to keep the programming part as light as possible, so that the author can focus on the writing; they are widely used to create game concepts. Fargo (Small Picture, 2014) is an outliner tool for composing documents: outlines are a kind of mental tree in which a user can structure a document.

Descriptions written for SIMA are autobiographical and should derive from a person’s life experience. TALE-SPIN, MINSTREL, MEXICA and BRUTUS have no possibility to incorporate life experience; story generators do not suit the requirements a SIMA-story needs. Inform7, Twine and Fargo help users structure stories but do not use any domain knowledge or story structure to support an author. Dramatica does include domain knowledge for writing stories, but does not meet the requirements of an activity description. A system that helps users write descriptions of activities and meets the requirements of a SIMA-story should therefore add domain knowledge about SIMA. To discover the parameters of such a system, this study answers the research question stated below:

"What are the parameters for an interactive online SIMA-assessment interface to improve the quality of describing personal activities?"

3. REQUIREMENT ANALYSIS

The requirements analysis focused on two goals: the first was to gather insights to define the quality of an activity description, and the second was to gain more insight into the problem. To accomplish these goals, focused interviews (Robson, 2011, p. 289) were held with SIMA-experts.

3.1 Interviews

3.1.1 Design

The interviews were semi-structured and divided into three parts. In the first part, the interviewer asked the interviewee about his background. The second part focused on the quality of an activity description. The third part was about the process of an assessment. See Table 3 for further details about the main questions.

3.1.2 Participants

All interviewed experts had to be certified SIMA-coaches. All interviewees were recruited from the researcher’s professional network. In total, five experts were interviewed, all of them male. They had conducted between 3 and 60 assessments, with an average of 39.80 assessments per individual. Two of the interviewees were also certified to train and certify others to become SIMA-coaches.

Table 3. Main interview questions

Background questions:
- Who are you and what is your relation with SIMA?
- How many SIMA-assessments have you conducted?

Quality questions:
- What is your definition of an activity?
- What is your definition of a story?
- What are the constraints in writing a story?
- Which aspects have to be incorporated in the story?
- What is your view on the difficulty of writing stories for SIMA?
- Do you expect stories to be related to each other?

Process questions:
- How does a typical assessment progress?
- How much time does this process cost?
- Why do you reject stories?
- How do you advise people to write their stories?

3.1.3 Procedure

The interviews were held in Dutch, in an office setting, and no reimbursement was provided. They took between 33 and 47 minutes to complete. Each interview started with the interviewer introducing himself, explaining the goal of the interview and asking the interviewee for permission to record and, if granted, to sign an informed consent form. After the introduction, the interviewer started the recording and began with the background questions, followed by the quality and process questions. Finally, the interviewer stopped the recording and thanked the interviewee.

3.1.4 Coding

After the interviews, the interviewer transcribed all of them literally, leaving out stopgaps and fillers. The transcriptions were sent to the interviewees with a request for evaluation; if they had no comments, they were not obligated to respond. No responses came back. Next, the interviewer annotated the transcriptions one by one by iterating through all main questions and locating and annotating the corresponding answers, followed by coding the interviews. Coding was done per interview by (1) reading the transcription completely; (2) writing down first thoughts about how an activity should be described; (3) reading the annotated answers and writing down the keywords and key-phrases (further referred to as keywords). Next, similar keywords were merged and counted by how many interviewees stated them. The keywords that occurred most often were seen as the most important part of an answer.
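The tally in step (3) amounts to counting, per merged keyword, how many interviewees mentioned it. A minimal sketch of that step, with invented keywords (the real keywords came from the transcripts):

```python
from collections import Counter

# Hypothetical merged keywords per interviewee; sets avoid double-counting
# a keyword that one interviewee mentioned more than once.
keywords_per_interviewee = [
    {"active behavior", "enjoyment", "own role"},
    {"active behavior", "enjoyment"},
    {"enjoyment", "situation"},
    {"active behavior", "enjoyment", "own role"},
    {"enjoyment", "own role"},
]

counts = Counter()
for kws in keywords_per_interviewee:
    counts.update(kws)

# Keywords mentioned by four or more interviewees were marked as important.
important = {kw for kw, n in counts.items() if n >= 4}
```

With these invented sets, only "enjoyment" (mentioned by all five) would clear the four-interviewee threshold.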

3.2 Results

The quality of an activity description is determined by two factors. Firstly, it is important that a SIMA-candidate selects a correct activity to describe. Asking the interviewees the question “What is your definition of an activity?” (a quality question stated in Table 3) revealed that a candidate has to distinguish an activity from an experience. A candidate should not write about experiences but about activities. They differ in that one does not exhibit active behavior with a certain purpose in an experience, but does in an activity. For example, as one interviewee stated (translated from Dutch):

“An activity is something one actually does; it isn’t about… an experience one experiences. ‘Walking in the sun… or walking in the sun because it felt good.’ No, it really is what one does at that moment, e.g.: ‘I am walking from A to B with the aim of…’ Therefore, an activity is really an act one performs. It doesn’t have to be a physical act, it could also be an act of thought, but it has to be something one does with a certain purpose, to achieve something.” (SIMA-coach, conducted 3 assessments)

Secondly, a candidate has to write a description in a certain form, and certain aspects of an activity should be incorporated. Asking the interviewees the question “Which aspects have to be incorporated in the story?” (a quality question stated in Table 3) revealed 24 parameters, which are described in Table 4. All parameters mentioned by four or more interviewees were marked as important.

Table 4: Activity quality parameters

01 Active behavior *
02 Specific behavior
03 Specific moment
04 Positive *
05 First-person perspective *
06 Single activity *
07 No analysis of own behavior
08 Enjoyment *
09 Feeling of satisfaction *
10 Success
11 Operations performed *
12 Verbs indicating operations
13 Skills performed *
14 Approach of operations *
15 Subjects
16 Nouns indicating subjects
17 Establishment
18 Who established
19 Thoughts
20 Situation
21 Own role *
22 Other persons involved *
23 Roles of involved persons *
24 Share in activity

* important parameter

The answers interviewees provided to the question “How does a typical assessment progress?” (a process question stated in Table 3) show that in practice the process of a SIMA-assessment differs from the three-phase process defined in theory. The practical application is extended with a fourth phase, dedicated to an application interview. This interview starts with answering the question defined in the first phase of the process, followed by creating a plan to incorporate the MAP into the candidate’s daily life. This question was followed by “How much time does this process cost?”. The interviewees stated that a candidate needs four to eight hours, spread over a period of three weeks, to describe the activities. One interviewee also provided the document currently used by candidates to write their activity descriptions. Candidates write them in a document that contains (1) instructions and (2) eight activity templates, one template per activity. The document has a sequential design and starts with four pages of instructions and guidelines. The activity template, shown in Figure 1, contains the five questions stated in Table 5. Candidates usually follow the process stated in Figure 2 and describe their activities using a word processor.

The interviews also raised problems candidates experience during an assessment. According to the interviewees, candidates experience multiple problems when creating a long list of activities. First, they find it difficult to start remembering and to determine whether an activity really gave them an actual feeling of satisfaction. Second, they have trouble comparing and selecting the activities with the most satisfying feeling. Third, the difference between an experience and an activity is perceived as vague and needs review by the coach.
Fourth, the descriptions of activities should incorporate certain aspects, but not all aspects are explicitly asked for in the activity form. Fifth, the use of the word story instead of description of an activity causes candidates to write their descriptions in a narrative way.

Figure 1: Activity template impression

Table 5: Activity template questions

1 Title of the activity.

2 Short summary of the activity.

3 How did you come to carry out this activity; how did your participation come about?

4 How did you handle it: what exactly did you do, what did you say and what did you think?

5 What made this activity enjoyable for you? What exactly gave you a feeling of satisfaction?

3.3 Discussion

The quality factors marked as important in Table 4 are key to the quality of descriptions, because they form the foundation of a useful description and give a SIMA-coach enough input to conduct an efficient interview. If a description does not contain the important parameters, it is not useful and a coach could reject it. Rejection happens rarely; instead, a coach tries to extract the missing aspects during the interview. This costs valuable time that could be used for what the interview is really about: evaluating the assumptions the coach made based on the descriptions, and observing the non-verbal communication candidates show while they talk about their activities. Non-verbal communication is important for a coach because it helps to determine the importance of a specific activity in relation to the other activities. For example, as one interviewee stated (translated from Dutch):

“When a candidate tells about an activity and you see their eyes start to shine, it is often a signal that it will certainly be incorporated in the candidate’s MAP.” (SIMA-coach, conducted 50+ assessments)

The sequential design of the current activity form makes it possible to describe activities without reflecting upon the instructions and guidelines after a description is written. Candidates tend to read the instructions only before describing activities. Because candidates have a large timespan to describe their activities (two to three weeks), they will forget several instructions. While describing, a candidate is not confronted with the instructions anymore. This can result in a description that is less complete and therefore less usable. In addition, the document does not state all the parameters gathered from the interviews: parameters #20, #21, #22, #23 and #24, stated in Table 4, are not facilitated by the current activity document. Therefore, candidates cannot imagine what is expected to be incorporated in a description. These difficulties can influence the quality of a description and, indirectly, the quality of the assessment’s outcome (MAP), as one interviewee stated (translated from Dutch):

“SIMA is, more than other methods, a method from which one gets out as much as one puts in. If you do not introduce, tell or show much information about yourself and a coach does not know how to trigger you, then you won’t get much out of SIMA.” (SIMA-coach, conducted 3 assessments)

Figure 2: Current activity writing flow

The analysis provided sufficient information to form parameters that should be incorporated in a description. Therefore, we define the quality of a description as:

The degree to which a description contains the activity quality parameters stated in Table 4.

A prototype that supports a candidate in writing should meet the following requirements: (1) active support for the important parameters should be added to improve their presence in a description; (2) instead of presenting instructions in advance, they should be incorporated into the process of writing, confronting candidates at the moment of writing; (3) the parameters missing from the current activity form should be added.

4. PROTOTYPE

The previous section showed that the difficulties candidates experience in writing about activities and the absence of active support relate to the quality of descriptions and the assessment's outcome. To test whether this problem can be solved with a support tool for the first phase of a SIMA-assessment, a prototype that supports a candidate in the process of writing was built and evaluated. All parameters stated in Table 4 were covered by the prototype, except #7, #10, #12, #15 and #16; these were not addressed because they were mentioned by only one interviewee in the requirements analysis.


4.1 Process

The prototype’s writing process differed from the current activity form's process in two ways. First, the process guided a user through different content, based on the five questions from the current activity form. In addition, six tasks were added. Table 6 states all the prototype’s tasks and their related quality parameters. Tasks #1, #2, #5, #6, #7 and #8 were added to cover quality parameters that were not covered by the current activity form. The prototype’s complete writing process is stated in Figure 3.

Tasks #7 and #8 are related to each other and both cover an important parameter; they are therefore referred to as the relations enhancement in the remainder of this paper. Second, the process provided active support for important parameters through feedback loops: in tasks #10, #11 and #12 from Table 6, a user was presented with feedback that triggered reflection upon his or her own description (Anseel, Lievens, & Schollaert, 2009). The feedback loops are referred to as the feedback enhancement in the remainder of this paper.

Figure 3: Prototype activity writing flow

4.2 Interface

The prototype’s interface used intrinsic support (Gery, 1995, p. 51), integrating instructions into the interface's structure and content. This was accomplished through a task-centered workspace: each task presented its specific instructions at the actual moment of writing, which was intended to reduce the probability of forgetting instructions. At any moment, only one task was presented on screen, and each task had its own description stating the instruction. Figure 4 shows the interface of the task-centered workspace, which was the default interface for each task. As described before, both tasks of the relations enhancement (#7 and #8 in Table 6) covered important parameters; their interfaces were therefore specifically designed and differed from the default task interface (see Figure 4). Task #7’s interface (see Figure 5) was designed to let a user explicitly state whether other persons were involved in the activity. If other persons were involved, the user was presented with task #8’s interface (shown in Figure 6), which gave the user the ability to add the involved persons.

The feedback enhancement was used in tasks #10, #11 and #12. Initially, these tasks were presented in the default task interface from Figure 4. After completing a task, the user was presented with the control questions of the feedback enhancement; the state in which these questions were shown is presented in Figure 7. Only after these questions had been presented could the user proceed to the next task.

Table 6: Prototype tasks

1 Remember an activity in which you experienced lots of success and fun. Were you able to think of an activity?

2 Remember an activity in which you experienced lots of success and fun by answering some of the questions stated below for yourself.

3* Give the activity a name.

4* Describe the activity in a global way. Do this with a few sentences.

5 Describe the situation. Think about where and when the activity took place and how long it lasted. (#20)

6 Describe what your share in the activity was and which role you played. (#21, #24)

7 Were other persons involved? (#22)

8 Who were involved? State their names and their roles towards yourself. (#23)

9* Describe how you came to help carry out this activity; how did your participation come about? Was this established by yourself or by someone else? (#17, #18)

10* Describe as specifically and in as much detail as possible how you approached the activity: what exactly did you do, what did you say and what were your thoughts? (#11, #13, #14, #19)

11* Describe as specifically and in as much detail as possible what made this activity enjoyable for you. What exactly gave you a feeling of satisfaction? (#08, #09)

12* Below your activity is stated; read it before continuing. Do you want to adjust something? Please navigate back. (#01, #04, #05, #06)

* important parameter


Figure 4: Task-centered work space

Figure 5: Relation enhancement task #7

Figure 6: Relation enhancement task #8

Figure 7: Feedback enhancement

5. PROTOTYPE EVALUATION

5.1 Experiment

The experiment was a post-test-only randomized controlled trial (Robson, 2011) and was conducted online. The goal was to test whether a computer system that assists a candidate in describing activities could improve the quality of an activity description.

5.1.1 Set-Up

Depending on the group a participant was assigned to, the system presented a different version of the user interface. Participants in both the control and treatment groups wrote their activities in a process of two sequential phases: (1) reading instructions; (2) describing their activity. Participants assigned to the control group described their activities in a digitized version of the current activity form and followed the process illustrated in Figure 2. The control version was digitized for easy distribution and data collection. Participants assigned to the treatment group used the prototype and followed the process stated in Figure 3.

5.1.2 Participants

Participants were recruited by convenience sampling (Robson, 2011) through the personal (family, friends) and professional (colleagues and business relations) networks of the researcher. In total, 205 persons were invited by e-mail and 47 participated in the experiment. Four participants did not complete the experiment and were excluded. The average age of the remaining 43 participants was 38.49 years (SD = 12.24, MIN = 22, MAX = 63); 26 of them were male and 16 female. Eight participants already had experience with SIMA; because they were evenly distributed over the control and treatment groups (4 vs. 4), no corrective actions were taken.

5.1.3 Procedure

Data was collected over a period of 16 days. Participants were systematically assigned to the control or treatment group by the system, based on the number of participants already assigned to each group: the first participant was assigned randomly, and every following participant was assigned to the group with the fewest participants. Once assigned to a group, participants were asked to enter their demographic characteristics; next, they had to read the instructions, and afterwards they described their activity of choice. The collected descriptions were evaluated on quality by three SIMA-experts, all from the expert group interviewed for the requirements analysis. The descriptions were distributed in a random order, so that the coaches did not know which description was created with which tool.
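The balanced assignment just described can be sketched as follows. This is our own minimal illustration; the function name and data structure are hypothetical, not taken from the study:

```python
import random

def assign_group(counts):
    """Assign the next participant to the group with the fewest members;
    when the groups are equal (including the very first participant),
    assign at random."""
    if counts["control"] < counts["treatment"]:
        group = "control"
    elif counts["treatment"] < counts["control"]:
        group = "treatment"
    else:
        group = random.choice(["control", "treatment"])
    counts[group] += 1
    return group

# Simulate 47 sign-ups.
counts = {"control": 0, "treatment": 0}
for _ in range(47):
    assign_group(counts)
```

This scheme guarantees that the two groups never differ in size by more than one participant.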

5.1.4 Coding

The 24 parameters established in the requirements analysis were translated into the coding scheme stated in Appendix A: Activity coding scheme. Each parameter was represented by a question in the coding scheme. Questions were answered on a Guttman scale, because “both Thurstone and Likert scales may contain statements which concern a variety of dimensions relating to the attitude of an expert” (Robson, 2011, p. 307). Each positive answer resulted in one point; each negative answer resulted in zero points. All three assessors were provided with a document for each activity they had to review. A single document included: (1) the description of an activity, which was a concatenation of all the results of the questions/tasks in chronological order; (2) the coding scheme constructed from the results of the requirements analysis.

Table 7: Dependent variables

Dependent variable | Calculation: sum of parameters
Overall quality | #01 through #24
Overall quality (only important parameters) | #01, #04, #05, #06, #08, #09, #11, #13, #14, #21, #22, #23
General quality | #01 through #07
Central motives quality | #08 through #10
Motivating abilities quality | #11 through #14
Motivating subjects quality | #15 through #16
Motivating circumstances quality | #17 through #20
Motivating relations quality | #21 through #24
Overview enhancement quality | #01, #04, #05, #06
Satisfaction enhancement quality | #08, #09
Proceedings enhancement quality | #11, #13, #14, #19
Relations enhancement quality | #22, #23
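Since every parameter is a binary (0/1) score, each dependent variable in Table 7 is simply a sum over a subset of the 24 parameter scores. A minimal sketch, with shortened variable names that are my own and not taken from the study's materials:

```python
# Hypothetical encoding of Table 7: each quality variable is the sum of a
# subset of the 24 binary (0 = no, 1 = yes) parameter scores.
VARIABLES = {
    "overall": list(range(1, 25)),
    "overall_important": [1, 4, 5, 6, 8, 9, 11, 13, 14, 21, 22, 23],
    "general": list(range(1, 8)),
    "central_motives": list(range(8, 11)),
    "motivating_abilities": list(range(11, 15)),
    "motivating_subjects": list(range(15, 17)),
    "motivating_circumstances": list(range(17, 21)),
    "motivating_relations": list(range(21, 25)),
    "overview_enhancement": [1, 4, 5, 6],
    "satisfaction_enhancement": [8, 9],
    "proceedings_enhancement": [11, 13, 14, 19],
    "relations_enhancement": [22, 23],
}

def quality_scores(params):
    """`params` maps parameter number (1..24) to 0 or 1."""
    return {name: sum(params[i] for i in idx)
            for name, idx in VARIABLES.items()}

# A description that scores positive on every parameter:
scores = quality_scores({i: 1 for i in range(1, 25)})
print(scores["overall"], scores["central_motives"])  # 24 3
```

Note that every parameter is weighted equally here, which is exactly the limitation discussed later in Section 5.3.1.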

5.1.5 Data Analysis

The data were analyzed using a top-down analysis on quality by comparing means between the control and treatment groups. Table 7 states how all dependent variables were calculated. The overall quality variable breaks down into six sub-variables, one for general quality and one for each of the five aspects from a MAP. All tests used the hypotheses:

H0: μ_control = μ_treatment

H1: μ_control ≠ μ_treatment

5.2 Results

The results presented in this section address the differences in quality of descriptions between the control and treatment group. An overview of all results is presented in Table 8.

Preliminary analysis identified an outlier (a single activity description). Checking this outlier revealed that it was not assessed as intended, therefore it was excluded from the sample. Other descriptions of the same assessor were assessed as intended.

5.2.1 Overall quality

A Shapiro-Wilk test showed that overall quality was normally distributed; therefore, an independent-samples t-test was performed. On average, the treatment group (M = 19.60, SD = 3.32) performed better than the control group (M = 17.32, SD = 5.12). The difference was not significant, t(40) = -1.69, p = .098, and represents a small effect, r = .26. In addition, only the important parameters were compared. A Shapiro-Wilk test showed that overall quality (with only the important parameters) was not normally distributed; it was therefore tested with a Mann-Whitney U test (Field, 2005). On average, the treatment group (M = 10.20, SD = 1.91) performed better than the control group (M = 8.73, SD = 2.55). The difference was significant, U = 142.50, p = .048, and represents a small effect, r = .31.
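The non-parametric comparison used throughout this section can be illustrated with a plain-Python Mann-Whitney U using the normal approximation and the effect size r = |z| / sqrt(N). This is a sketch of the general procedure, not the exact SPSS computation used in the study (no tie correction is applied, so values may differ slightly):

```python
import math

def mann_whitney(x, y):
    """Mann-Whitney U with a normal approximation and effect size
    r = |z| / sqrt(N). Mid-ranks handle ties; no tie correction."""
    n1, n2 = len(x), len(y)
    values = list(x) + list(y)
    order = sorted(range(n1 + n2), key=lambda k: values[k])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j < n1 + n2 and values[order[j]] == values[order[i]]:
            j += 1
        mid = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = mid
        i = j
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U for the first sample
    u = min(u1, n1 * n2 - u1)                 # reported U statistic
    z = (u1 - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, abs(z) / math.sqrt(n1 + n2)

u, r = mann_whitney([1, 2, 3], [4, 5, 6])
print(u)  # 0.0 (complete separation of the two samples)
```

Applied to the per-group quality scores, a call like `mann_whitney(control_scores, treatment_scores)` would yield the U and r values reported in Table 8.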

When investigating the results of the sub-variables of overall quality and the enhancement-variables, we see some significant differences between the control (n = 22) and treatment (n = 20) groups. The results are presented in the following paragraphs.

5.2.2 Sub-variables

A Shapiro-Wilk test showed that all quality sub-variables were not normally distributed, therefore these variables were tested with a Mann Whitney-U test (Field, 2005).

Investigating the quality scores on general quality, motivating subjects quality and motivating circumstances quality revealed that, on average, the treatment group performed better than the control group on all three variables. The quality scores on motivating abilities quality revealed that, on average, the control group performed better than the treatment group. However, these differences were not significant.

Significant differences emerged in the quality scores on central motives quality and motivating relations quality. On average, the treatment group (M = 2.80, SD = .00) performed better than the control group (M = 2.05, SD = 1.21) on central motives quality. This difference was significant, U = 150.00, p = .035, and represents a small effect, r = .33. The treatment group (M = 3.80, SD = .52) also performed better on motivating relations quality than the control group (M = 2.91, SD = .92). This difference was also significant, U = 105.00, p = .001, and represents a medium effect, r = .51.

5.2.3 Enhancement-variables

A Shapiro-Wilk test showed that all enhancement variables were not normally distributed, therefore these variables were tested with a Mann Whitney-U test (Field, 2005, p. 306).

Investigating the quality scores on overview enhancement quality and proceedings enhancement quality revealed that, on average, the treatment group performed better than the control group on both variables; however, these differences were not significant. Significant differences emerged in the quality scores on satisfaction enhancement quality and relations enhancement quality. On average, the treatment group (M = 1.95, SD = .22) scored better than the control group (M = 1.45, SD = .86) on satisfaction enhancement quality. This difference was significant, U = 158.50, p = .024, and represents a small effect, r = .34. The treatment group (M = 1.90, SD = .31) also performed better on relations enhancement quality than the control group (M = 1.45, SD = .51). This difference was also significant, U = 122.00, p = .003, and represents a medium effect, r = .47.


Table 8: Effects in experiment

Control (N = 22) vs. Treatment (N = 20); values are M (SD) per group.

Independent-samples t-test:
Variable | Control M (SD) | Treatment M (SD) | t(40) | p | r
Overall quality | 17.32 (5.12) | 19.60 (3.32) | -1.69 | .098 | .26

Mann-Whitney U tests:
Variable | Control M (SD) | Treatment M (SD) | U | p | r
Overall quality (only important parameters) * | 8.73 (2.55) | 10.20 (1.91) | 142.50 | .048 | .31
General quality | 5.32 (1.62) | 5.50 (1.47) | 207.50 | .746 | .05
Central motives quality * | 2.05 (1.21) | 2.80 (.00) | 150.00 | .035 | .33
Motivating abilities quality | 2.50 (1.60) | 2.45 (1.54) | 215.50 | .906 | .02
Motivating subjects quality | 1.68 (.72) | 1.85 (.00) | 201.00 | .431 | .12
Motivating circumstances quality | 2.86 (.99) | 3.20 (1.06) | 170.50 | .187 | .20
Motivating relations quality * | 2.91 (.92) | 3.80 (.52) | 105.00 | .001 | .51
Overview enhancement quality | 3.32 (.84) | 3.50 (.89) | 183.50 | .295 | .16
Satisfaction enhancement quality * | 1.45 (.86) | 1.95 (.22) | 158.50 | .024 | .34
Proceedings enhancement quality | 2.41 (1.40) | 2.50 (1.40) | 209.00 | .775 | .04
Relations enhancement quality * | 1.45 (.51) | 1.90 (.31) | 122.00 | .003 | .47

* significant at the .05 level

5.2.4 Individual parameters

To explain the origin of these significant results, all 24 quality parameters were examined with Fisher's exact tests; the effects are stated in Table 9. Tests on parameters #09, #22 and #23 indicated that participants in the treatment group performed significantly better than participants in the control group, with p = .022, p = .004 and p = .007 respectively. No other parameters differed significantly. Tests on parameters #01, #03, #04, #06, #12, #13, #14, #17, #18, #21 and #24 resulted in p = 1.000, showing no difference in performance between the control and treatment groups.

Table 9: Effects on individual parameters

#  | Parameter | Sig.
01 | Active behavior * | 1.000
02 | Specific behavior | .767
03 | Specific moment | 1.000
04 | Positive * | 1.000
05 | First-person perspective * | .460
06 | Single activity * | 1.000
07 | No analysis on own behavior | .665
08 | Enjoyment * | .096
09 | Feeling of satisfaction * | .022
10 | Success | .091
11 | Operations performed * | .741
12 | Verbs indicating operations | 1.000
13 | Skills performed * | 1.000
14 | Approach of operations | 1.000
15 | Subjects | .608
16 | Nouns indicating subjects | .665
17 | Establishment | 1.000
18 | Who established | 1.000
19 | Thoughts | .374
20 | Situation | .284
21 | Own role * | 1.000
22 | Other persons involved * | .004
23 | Roles of involved persons * | .007
24 | Share in activity | 1.000

* important parameter
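Because each parameter is a yes/no score, every per-parameter comparison reduces to a 2x2 table (group by positive/negative count), on which Fisher's exact test operates. A minimal two-sided version, built from the hypergeometric distribution with `math.comb`; the counts in the example are made up for illustration, not taken from the study's data:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test on a 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table with the same margins that is
    no more probable than the observed one."""
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(k):  # hypergeometric probability of upper-left cell = k
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Illustrative counts of positive/negative scores on one parameter:
p = fisher_exact_two_sided([[3, 1], [1, 3]])
print(round(p, 4))  # 0.4857
```

Fisher's exact test is the natural choice here because the expected cell counts for many parameters are small, which rules out a chi-squared approximation.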

5.3 Discussion

The results show that the treatment group scored higher than the control group on 11 out of 12 variables. This trend indicates that the improvement in performance was spread over all the parameters and not caused by only a few. To determine if it can be assumed that the prototype is responsible for this trend, the results are discussed below.

5.3.1 Overall quality

Comparing the means of both groups on overall quality revealed that the treatment group scored higher and thus performed better; however, the difference between the groups was not significant, so we cannot reject H0. This result differs from what was expected, but it is not surprising: overall quality was measured by 24 equally weighted parameters, while only the important parameters were specifically enhanced. A less important parameter with no relation to any enhancement could therefore cancel out the effect of an enhancement. To eliminate this effect, overall quality was tested again using only the important parameters, which revealed that the treatment group performed significantly better; therefore, we can reject H0. This result shows that less important parameters did interfere with more important ones. To improve the validity of this study, weighting should be added to the calculation of the quality scores.

5.3.2 Sub-variables

To break down the difference in performance, the sub-variables of overall quality (see Table 7) were tested. One sub-variable was constructed from parameters related to general properties of a description; the other five sub-variables were named after the five aspects of a MAP and consisted of the parameters that coaches use to extract information for that MAP-aspect. On general quality, motivating subjects quality, motivating circumstances quality and motivating abilities quality no significant results emerged, so we cannot reject H0. The treatment group scored higher, and thus performed better, on central motives quality and motivating relations quality; these differences were significant, so we can reject H0 for both. Yet what does this finding tell us?

General quality consisted of parameters that were instructed to participants in both the control and treatment group. The instructions differed in length: the treatment group received shorter instructions. Because both groups scored about the same, the shorter instructions appear to be as effective as the longer ones, and are therefore useful. Based on the results for the sub-variables it can be stated that the prototype enhances participants' performance on central motives quality and motivating relations quality. How this difference in performance arises cannot be determined from these results, however; they only indicate where the differences derive from. Therefore, we can argue that measuring these variables did not give valuable meaning towards this research. The same direction was found when only the enhancement variables were examined.

5.3.3 Enhancement-variables

Measuring the enhancement variables revealed no significant differences on the overview enhancement quality and proceedings enhancement quality. Both of these enhancements were instances of the feedback enhancement. On satisfaction enhancement quality and relations enhancement quality significant differences emerged. Respectively these enhancements were instances of the feedback and relations enhancement.

Out of the three feedback enhancements only one caused a significant difference; the other two did not even show a marginal difference. The feedback enhancements differed in content: each had task-specific control questions. Two possible causes could explain this result. (1) The control questions may have been inadequate. Examining which control questions work best would require a different type of experiment, in which all participants describe an activity, are then presented with feedback questions (different for each treatment group), and are asked to revise their description. Comparing the modifications participants make could give insight into which feedback questions work best. This insight was gained only after the experiment was conducted; evaluating control questions requires a different experimental set-up and was outside the scope of this experiment, so it has not been carried out. (2) The relatively small sample size suggests that this study may be underpowered, in which case the p-values are less trustworthy. Repeating the experiment with a larger sample size would increase the statistical validity of this study.

On relations enhancement quality the treatment group scored an average of 1.90 on a scale with a maximum of 2.00. Because this differed significantly from the control group (M = 1.45), the relations enhancement appears useful and should be preserved in future versions of the prototype. However, it cannot be stated with certainty that the significant differences resulted from the enhancements. As the assessors were only asked to indicate whether the parameters were described somewhere in the description, a participant could score positively on a parameter while describing it in a part of the prototype that was not enhanced. Additional research is needed to determine whether the enhancements are accountable for the difference; this could be done by having the assessors cross-examine each other's descriptions. Only then can we state the usefulness of the enhancements.

5.3.4 Individual parameters

Measuring the differences on individual parameters between the control and treatment groups revealed that only parameters #09, #22 and #23 differed significantly. These three parameters contribute the most to the performance improvement, should be kept in future versions of the prototype, and warrant further investigation. Tests on parameters #05, #07, #08, #10, #15, #16, #19 and #20 showed non-significant differences in which the treatment group performed better; future investigation into how to improve performance on these is warranted. The outcome also showed that on parameters #01, #03, #04, #06, #12, #13, #14, #17, #18, #21 and #24 no difference emerged at all; both groups performed equally well, so these parameters need no further investigation. Tests on parameters #02 and #11 revealed non-significant differences in which the control group performed better; these parameters likewise need no further investigation.

6. GENERAL DISCUSSION

This research was explorative and focused on determining the parameters for an interactive online SIMA-assessment to improve the quality of describing personal activities. Expert interviews were held to gain insights into the process of a SIMA-assessment, the quality of a description of an activity, and the problems candidates experience while participating in an assessment. These interviews were held only with coaches. They were valid interviewees to give their perspective on the process and quality of an assessment, and their responses on these topics were largely similar. However, it could be argued that gaining perspectives on the problems candidates experience by interviewing coaches gives a biased view, colored by the subjectivity of the coach. Repeating the interviews with candidates would improve the validity and give a more accurate view of the problems experienced.

An attempt was made to incorporate all 24 parameters in the prototype, with no distinction made between their importance. Because of the large number of parameters and the lack of focus on the important ones, it was difficult to identify precisely the differences between the current activity form and the prototype. Focusing on the important parameters would have led to a simpler and clearer experiment, which would probably have yielded more meaningful conclusions.

Some questions in the coding scheme were not distinctive from others. For example, questions #09 and #10 (see Appendix A: Activity coding scheme) ask, respectively, about a feeling of satisfaction and about success. How these differ was not stated, which could have led to mix-ups that weaken the validity of the findings. Three assessors coded the descriptions. In this approach we have to be aware that a 'researcher-as-instrument' approach emphasizes the potential for bias (Robson, 2011, p. 157). Each description was coded by only one assessor and thus reflects the subjectivity of only one of the three assessors; therefore, the inter-observer reliability is doubtful. Having all assessors code all descriptions and taking the average of the three observations per description would improve the inter-observer reliability. Due to time constraints and the limited availability of the assessors, this cross-examination was not conducted.

Results from the experiment cannot simply be generalized to a larger population, because most participants were male and aged between 20 and 29. Repeating the experiment with a sample more similar to the target population would increase the external validity.

7. CONCLUSION & FUTURE WORK

This study was conducted to explore and define the parameters that relate to the performance on quality in relation to writing about SIMA life experiences. These writings are essential to the outcome of SIMA.

Building a prototype with an improved writing process, guided questions and an enhanced user interface explored how SIMA-candidates could be supported to improve the quality of an activity description. Subsets of the 24 parameters were specifically enhanced and an experiment was held to evaluate the prototype. This revealed that participants who used the prototype performed better on 11 of the 12 dependent variables. Although performance differed significantly on only 5 of the 12 variables, it can be assumed that the prototype improves the quality of describing activities.

Measuring the differences on individual parameters revealed that not all parameters, not even all of those identified as important, contribute equally to the performance. Only performance on parameters #09, #22 and #23 (see Table 4) differed significantly; these three parameters contribute the most to the performance improvement and should be kept in future versions of the prototype. Parameters #01, #03, #04, #06, #12, #13, #14, #17, #18, #21 and #24 (see Table 4) showed no difference in performance at all and need no further investigation. Parameters #02 and #11 showed non-significant differences in which the control group performed better and likewise need no further investigation. Parameters #05, #07, #08, #10, #15, #16, #19 and #20 (see Table 4) showed non-significant differences in which the treatment group performed better and should be investigated further.

Future work should firstly focus on improving the performance on parameters #05, #07, #08, #10, #15, #16, #19 and #20, because these showed improved performance but the improvement was not significant. Secondly, because less important parameters interfered with important ones, future work should focus on adding weights to individual parameters to give important ones more influence on performance. Thirdly, future work should focus on the content (control questions) stated in the feedback enhancements. This is important because the contents determine the usefulness of the feedback enhancement.

8. ACKNOWLEDGMENTS

This project consumed a large amount of work, time and dedication, and its implementation would not have been possible without the support of many individuals, to whom I would like to extend my sincere gratitude. First, I thank my supervisor, Dr. Frank Nack, for providing guidance, and Dr. Andre Nusselder for being the second reader. Second, I am grateful to Gustav Elmont, Jarst Frans and Remco Meijer for coding my data. Third, I thank Marcel van der Sluis for his support in planning and managing the amount of work. Fourth, I thank Jorrit de Waard and Pieter Brons for being discussion partners while reflecting upon my work. Fifth, I am grateful to all interviewees and participants for taking part in my research. Last, I wish to express my sincere thanks to my family and friends for supporting me.

9. REFERENCES

Anseel, F., Lievens, F., & Schollaert, E. (2009). Reflection as a strategy to enhance task performance after feedback. Organizational Behavior and Human Decision Processes, 110(1), 23–35. doi:10.1016/j.obhdp.2009.05.003

Blumberg, M., & Pringle, C. (1982). The missing opportunity in organizational research: Some implications for a theory of work performance. Academy of Management Review, 7(4), 560–569. Retrieved from http://amr.aom.org/content/7/4/560.short

Fernandez, S., & Moldogaziev, T. (2013). Employee Empowerment, Employee Attitudes, and Performance: Testing a Causal Model. Public Administration Review, 73(3), 490–506. doi:10.1111/puar.12049

Field, A. (2005). Discovering Statistics Using SPSS (2nd ed.). (D. B. Wright, Ed.). Sage Publications.

Gery, G. (1995). Attributes and Behaviors of Performance-Centered Systems. Performance Improvement Quarterly, 8(1), 47–93. doi:10.1111/j.1937-8327.1995.tb00661.x

Howarth, W. L. (1974). Some Principles of Autobiography. New Literary History, 5(2), 363–381.

Klimas, C. (2009). Twine. Retrieved January 26, 2015, from http://twinery.org

Meehan, J. (1977). TALE-SPIN, An Interactive Program that Writes Stories. IJCAI. Retrieved from http://www.ijcai.org/Past Proceedings/IJCAI-77-VOL1/PDF/013.pdf

Nelson, G. (2009). Inform7. Retrieved January 26, 2015, from http://inform7.com

Norhafizah, T., Zakaria, T., Aziz, M. J. A., Nor Rizan, T., & Maasum, T. M. (2010). Computer-assisted composing process in business writing. Information …, 3, 1199–1203. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5561623

Pérez y Pérez, R., & Sharples, M. (2004). Three computer-based models of storytelling: BRUTUS, MINSTREL and MEXICA. Knowledge-Based Systems, 17, 15–29. doi:10.1016/S0950-7051(03)00048-0

Philips, A., & Kessel, A. van. (2013). De kracht van motivatie: ontdek de weg naar je ideale loopbaan (2nd ed.). Uitgeverij Ten Have.

Porter, L. W., Bigley, G. A., & Steers, R. M. (1975). Motivation and Work Behaviour.

Robson, C. (2011). Real World Research (3rd ed.). John Wiley & Sons Ltd.

Ryan, R. M., & Deci, E. L. (2000a). Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions. Contemporary Educational Psychology, 25(1), 54–67. doi:10.1006/ceps.1999.1020

Ryan, R. M., & Deci, E. L. (2000b). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. Retrieved from http://psycnet.apa.org/journals/amp/55/1/68/

Small Picture. (2014). Fargo. Retrieved January 26, 2015, from http://fargo.io

Write Brothers. (1994). Dramatica. Retrieved January 26, 2015, from http://dramatica.org


APPENDICES

Appendix A: Activity coding scheme

Metadata

Assessor: [Assigned assessor]

Activity identifier: [Unique identifier]

Procedure:
1. Read the description in full.
2. Read the first question.
3. Study the description.
4. Answer the question.
5. Repeat steps 2-4 for the remaining questions.
6. Check that all questions have been answered.
7. Sign the form to confirm.

# General | Yes | No

01 Is active behavior described?
02 Is the described behavior specific?
03 Is the activity about a specific moment?
04 Is the activity described positively?
05 Is the activity written entirely from the first-person perspective?
06 Is a single activity described?
07 Is the activity described without analyses?

# Central motives | Yes | No

08 Is the experienced "enjoyment" described concretely?
09 Is the experienced "feeling of satisfaction" named concretely?
10 Is the experienced "success" described concretely?

# Motivating abilities | Yes | No

11 Are the performed operations described? (These may also be mental work.)
12 Can the abilities be derived from the verbs used?
13 Are the skills the person employs described?
14 Is it described how the activity was approached?

# Motivating subjects | Yes | No

15 Are the subjects of the activity described?
16 Can the subjects be derived from the nouns used?

# Motivating circumstances | Yes | No

17 Is it described how the activity came about?
18 Is it described which person initiated the activity?
19 Are thoughts described?
20 Is a sketch of the situation given?

# Motivating (working) relations | Yes | No

21 Is the author's own role described?
22 Is it described whether other persons were or were not involved?
23 Is the author's role relative to others described?
24 Is the author's share in the activity described?

Assessor's signature: ………  Date: ………
