
The relationship between General Mental Ability, job

knowledge, and job performance

An empirical study among Hungarian nurses

Author: Gergely Wischy

Student number: 10839216

Date of submission: 13-8-2015

Programme: Business Administration – Entrepreneurship and Innovation track

Institution: University of Amsterdam, Amsterdam Business School

Supervisor: Dr. Stefan T. Mol


Author note

The data used in this study were collected together with two PhD candidates, Sofija Pajic (University of Amsterdam) and Ádám Keszler (University of Debrecen), under the supervision of Dr. Gábor Kismihók (University of Amsterdam).


Statement of originality

This document is written by Student Gergely Wischy who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

This study aimed to examine the predictive validity of the innovative EU-funded Med-Assess project's job knowledge test for nurses and, thereby, to gain insight into whether the test has the potential to be commercialized. To this end, the study examined the influence of job knowledge and General Mental Ability (GMA) on job performance in the context of nursing. Data were collected from 108 nurses of a Hungarian hospital. GMA was assessed with a Raven's matrices test, job knowledge with the Med-Assess test, and job performance with the nurses' immediate supervisors' performance ratings. Initial analyses revealed no correlation between GMA and job performance, nor between job knowledge and job performance; however, after corrections for personal liking and range restriction were made, results showed that job knowledge has higher predictive validity than General Mental Ability. Furthermore, results showed that applicants perceive the job knowledge test as more face valid. No evidence was found for job knowledge's mediating effect between GMA and job performance, or for the tests' incremental validity over each other. Implications of these findings are discussed.


Table of contents

1. Introduction ... 8

2. Review of core literature: ... 10

2.1 Personnel selection ... 10

2.2 Selecting which personnel assessment to use ... 12

2.2.1 Reliability ... 12

2.2.2 Validity ... 13

2.2.2.1 Predictive (or Criterion) validity ... 13

2.2.2.2 Content validity ... 14

2.2.2.3 Construct validity ... 15

2.2.3 Fairness/Face validity ... 15

2.2.4 Acceptability... 16

2.3 General Mental Ability test for personnel selection ... 16

2.3.1 Evaluating GMA tests ... 19

2.3.1.1 Validity ... 19

2.3.1.2 Fairness ... 19

2.3.1.3 Acceptability ... 20

2.3.1.4 Cost effectiveness ... 20

2.3.1.5 Easiness to use ... 20

2.4 Knowledge economy ... 21

2.4.2 The importance of knowledge to organizations ... 22

2.5 Job knowledge ... 24

2.5.1 Job knowledge test ... 24

2.5.1.1 Job Knowledge Test Development ... 26

2.5.2 Evaluating job knowledge tests ... 29

2.5.2.1 Validity ... 29

2.5.2.2 Fairness ... 30

2.5.2.3 Acceptability ... 31

2.5.2.4 Cost-effectiveness ... 32

2.5.2.5 Easiness to use ... 33

2.5.2.6 Fakeability ... 33


2.7 The OntoHR project ... 34

2.8 The Med-Assess project ... 35

2.9 Hypothesis development ... 37

3. Methods ... 39

3.1 Sample ... 39

3.2 Procedure ... 40

3.3 Measurement of variables ... 41

3.4 Analytical strategy ... 44

3.4.1 Recoding ... 44

3.4.2 Missing values ... 45

3.4.3 Reliability ... 45

3.4.4 Computing Scale Means ... 46

3.4.5 Skewness and Kurtosis ... 46

4. Results ... 48

4.1 Correlation analysis ... 48

4.2 Dependent t-test ... 51

4.3 Mediation analysis ... 52

4.4 Hierarchical regression ... 54

5. Discussion ... 58

5.1 Theoretical and practical implications ... 58

5.2 Limitations ... 59

5.3 Further research ... 60


List of tables and figures

Figure 1: Research model ... 39

Table 1: Means, Standard Deviations, Skewness and Kurtosis ... 46

Table 2: Means, Standard Deviations, Correlations and Reliabilities ... 47

Table 3: Partial correlations of GMA, Job Knowledge and Job Performance, controlled for Personal Liking ... 48

Figure 2: Thorndike's case 2 ... 49

Table 4: Corrected correlations of Job Knowledge and Job Performance ... 50

Table 5: Corrected correlations of GMA and Job Performance ... 51

Table 6: Statistics of indirect effect of GMA test score on Job Performance ... 54

Table 7: Statistics of direct and total effect of GMA on Job Performance ... 54

Table 8: Hierarchical multiple regression ... 55


1. Introduction

Employee job performance is a major determinant of a firm's ability to develop a competitive advantage over its rivals. Therefore, organizations are increasingly concerned about whom to hire (Bowen and Ostroff, 2004). To hire the candidate who will be the most successful, various tests are used to predict candidates' future job performance. Current selection processes mostly focus on educational background, personality tests, and General Mental Ability. General Mental Ability tests are commonly believed to be the best predictor of job performance and are correspondingly widely used for selection purposes (Schmidt and Hunter, 1998; Chamorro-Premuzic and Furnham, 2010). However, research (Hunter, 1986; Dye, Reck and McDaniel, 1993) shows that the best predictor of job performance is job knowledge, and that GMA is an effective predictor because it predicts job knowledge. Despite these findings, little research on job knowledge has taken place lately; only limited empirical evidence is available, and job knowledge tests have not become popular.

The OntoHR consortium aimed to change that. With the support of the European Commission's Lifelong Learning Programme, their goal was to build a platform and a method to construct customizable and adaptive job knowledge tests (Kismihók and Mol, 2008). Based on the success of the OntoHR project, a second project, the EU-funded Med-Assess Innovation project, was carried out to transfer the OntoHR system from the IT to the health domain, which resulted in a job knowledge test for nurses (www.med-assess.eu). The partners of the Med-Assess project propose an innovative method of job knowledge test development by using text mining on job vacancies, which is believed to increase the predictive validity of such job knowledge tests (Kobayashi, Mol, Berkers and Kismihók, 2015).

The aim of this study is to investigate the relationship between General Mental Ability, job knowledge, and job performance. For managers in organizations, this can help to facilitate decisions as to which of these two assessment tools (or both) they should use. For academics, this will increase their understanding of how General Mental Ability and job knowledge affect job performance.

In order to reach a comprehensive conclusion, the rest of this study is structured as follows. The second chapter describes the current state of the art with respect to General Mental Ability, job knowledge, and, in particular, the use of such tests for personnel selection. Afterwards, chapter three outlines the data collection procedure and research method. The results based on the collected data are discussed in chapter four. Finally, the most important conclusions and implications of the results of this study are discussed in chapter five, together with the most important limitations and suggestions for further research.


2. Review of core literature

2.1 Personnel selection

Due to the increasing level of global competition, organizations are more and more concerned about the job performance of their employees. The reason is that employee performance is a major determinant of how successful an organization is in developing a competitive advantage over its rivals (Bowen and Ostroff, 2004). Therefore, managing employee job performance is a major objective of organizations. Organizational researchers typically propose that an individual's job performance consists of two factors: the ability to perform the job and the effort that the individual puts into it (Wright, Kacmar, McMahan and Deleeuw, 1995), with both factors being to some extent under the control of the organization. Selection and training are the organizational practices that target ability: the organization can select an employee with the required abilities or teach these abilities to an existing employee. The employee's effort is a function of the firm's various practices for motivating employees. However, these motivation practices assume that the employee possesses the abilities to perform the job; their aim is to get employees to use their abilities in a concerted and continuous manner. Selection is, therefore, critical to organizations, since it ensures that employees have the ability to perform the job and it provides the basis for effective motivational practices (Gatewood, Feild and Barrick, 2010).

Muchinsky (2006) defines personnel selection as the process used to hire or promote individuals. Although the term can be applied to every step of the process, such as recruitment, selection, and hiring, its most common meaning focuses on the selection of employees. This step is about separating selected prospects from rejected applicants, with the purpose of hiring those individuals who will be the most successful and who will make the most valuable contribution to the organization.

The decision pertaining to which applicant to hire is usually based on some sort of assessment and prediction of future behavior. Wrong decisions could easily result in decreased productivity and problems for the organization. Effective personnel decisions are based on information permitting at least an implicit prediction that the person chosen will satisfy job requirements, supposedly better than others, in the anticipated roles. The prediction is usually based on unknown or assumed attributes (traits) of the candidates. After the applicants are identified, the usual chain of events is as follows (Guion, 2011):

1. Identifying the job relevant traits

2. Assessing candidates in terms of these selected traits

3. Predicting the probable future job performance or other outcomes of the decision whether to hire or not

4. Deciding to hire or reject

The assessment should be relevant for the position and should be competently done. The desired characteristics can vary greatly between organizations and/or job roles. To make effective decisions, the truly important traits have to be known and selectors should not be distracted by the irrelevant ones. Most of the qualifying characteristics are abstract and difficult to assess (Guion, 2011).


2.2 Selecting which personnel assessment to use

According to Cook (2009), assessment methods need to be judged against six criteria. A selection method should be reliable: it should give a consistent account of applicants. It ought to be valid, selecting good applicants while rejecting bad ones. A good selection method is fair, meaning that it complies with equal opportunities legislation and that the assessment is related to the job. It should be acceptable to organizations and applicants alike. Another criterion is cost-effectiveness: it should save the organization more than it costs. Finally, a personnel assessment should be easy to use and fit conveniently into the selection process.

Selection methods do not automatically meet all these criteria; in fact, few assessment methods do, so choosing an assessment method always entails a compromise (Cook, 2009).

2.2.1 Reliability

Reliability is the consistency of sets of measures and items. In other words, reliability is the extent to which a set of measurements is free from random-error variance. Reliability is a crucial criterion for personnel assessment: if a test is not reliable, it cannot be valid. Nevertheless, evidence of reliability does not in itself provide sufficient evidence that the measure is a useful one. Whether systematic sources of variance are relevant to the purpose of measurement remains a very important question; showing that is a matter of validity, which is the major consideration in test evaluation. Reliability imposes a ceiling on validity. Reliability can only be determined for a specific test, not for types of personnel assessment in general; therefore it cannot by itself be used to argue for or against a selection method (Guion and Highhouse, 2014).
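The ceiling that reliability imposes on validity follows from the classical correction for attenuation: an observed validity coefficient cannot exceed the square root of the product of the predictor's and the criterion's reliabilities. A minimal sketch of this bound (the reliability figures below are illustrative, not taken from this study):

```python
import math

def max_observed_validity(rxx, ryy):
    """Upper bound on an observed validity coefficient given the
    reliability of the predictor (rxx) and the criterion (ryy):
    classical test theory gives r_xy <= sqrt(rxx * ryy)."""
    return math.sqrt(rxx * ryy)

# Illustrative figures: a test with reliability .80 scored against
# supervisor ratings with reliability .60 cannot show an observed
# validity above about .69, however strong the true relationship.
print(round(max_observed_validity(0.80, 0.60), 2))
```

This is why an unreliable test cannot be valid: as either reliability drops toward zero, so does the highest validity the test can possibly show.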

2.2.2 Validity

There are multiple types of validity evidence; which type is needed depends on how the assessment method is used in making an employment decision. For instance, when a personality test is intended to forecast the job success of applicants for a certain position, evidence of predictive validity is needed to show that scores on the personality test are related to subsequent performance on the job. Another example: when a work sample test is designed to imitate the actual tasks performed on the job, then besides investigating the test's predictive ability, a content validity approach might also be needed to establish that the content of the test matches the content of the job in a convincing way, as identified by a job analysis (Cook, 2009).

2.2.2.1 Predictive (or Criterion) validity

Predictive (or criterion-related) validity refers to the relationship between the performance on the assessment and performance on the job. Predictive validity is the most important criterion to consider in deciding whether to employ a particular selection method or not. If an assessment provides little useful information about how an individual will perform on the job, that tool does not have much value to the organization.

The correlation (validity) coefficient is the most commonly used measure of predictive validity. The correlation coefficient can range from -1.00 to 1.00. An absolute value of 1.00 indicates that the two measures (test scores and job performance ratings) are perfectly related; in that case, the assessment perfectly predicts the actual job performance of each applicant from a single score. An absolute value of 0 indicates that the two measurements are completely unrelated. Squaring the correlation coefficient yields the percentage of explained variance in job performance. In practice, it is rare for the validity coefficient of a single assessment to exceed .50; an assessment with a validity of .30 or higher is generally considered useful for most circumstances (Biddle, 2005).
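The arithmetic can be made concrete with a short computation: the Pearson correlation between test scores and supervisor ratings is the validity coefficient, and its square is the share of job-performance variance the test explains. All numbers below are invented for illustration only:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical scores for six applicants: selection test vs. later ratings.
test_scores = [12, 15, 9, 20, 17, 11]
ratings = [3.4, 3.1, 2.9, 4.2, 3.3, 3.6]

r = pearson(test_scores, ratings)     # validity coefficient
print(round(r, 2), round(r ** 2, 2))  # coefficient and explained variance
```

Note how quickly the explained variance falls: even a validity of .30 corresponds to only 9% of the variance in job performance.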

2.2.2.2 Content validity

Content validity refers to whether a test looks plausible to experts. To ensure content validity, experts in the field should analyze the job, choose relevant questions, and put together the test. Content validation derives from educational testing, where it is important to ask whether the questions of the test cover the curriculum and to seek answers from subject matter experts. Content validation looks at test items as samples of things employees need to know, not as signs of what employees are like (Cook, 2009).

Content validation is useful for several purposes: it is credible to applicants and easily defensible, since it ensures that test content is clearly related to the job. This type of validation does not require a large number of people, unlike criterion validation. However, content validation has some limitations too. It is only applicable to jobs with a limited number of fairly specific tasks. Since content valid tests require test takers to already possess particular skills or knowledge, they are not suitable for selection where employees are supposed to acquire all the required knowledge after hiring rather than before. Content validity is inferior to criterion validity; it is a necessary but not sufficient criterion for job-specific tests. To ensure that the test really works, the organization has to carry out predictive validation too (Cook, 2009).


2.2.2.3 Construct validity

Some selection methods seek to measure the degree to which an applicant possesses psychological traits called constructs, instead of directly testing or using other information to predict job success. Some examples of such constructs are intelligence, leadership ability, verbal ability, and conscientiousness. Constructs considered necessary for the successful performance of a job are inferred from the job behaviors and activities summarized in job descriptions; it is usually the job specifications section of a job description that contains these activities (Stone, 1982). Construct validity has to be investigated whenever no criterion or universe of content is accepted as entirely adequate to define the quality to be measured (Cronbach and Meehl, 1955).

2.2.3 Fairness/Face validity

Face validity refers to the extent to which the examinees perceive the content of the selection process to be related to the content of the job. It refers to the transparency or relevance of the selection procedure as it appears to the applicants. In other words, if a test looks like it measures what it is supposed to measure, the test can be said to have face validity. Face validity is widely recognized as a significant dimension in test attitudes and reactions (Chan, Schmitt, DeShon, Clause and Delbridge, 1997). Face validity is important because it is an aspect of tests that is usually under the control of the test constructor, and because it can be a significant determinant of other reactions to tests. Face validity is often contrasted with construct validity and content validity (Holden, 2010).


2.2.4 Acceptability

A selection method is acceptable to organizations if it complies with legislation and has practical value for the organization. From the point of view of practical value, the most important property of a personnel assessment method is predictive validity: the predictive validity coefficient is directly proportional to the practical economic value (utility) of the assessment method (Schmidt, Hunter, McKenzie and Muldrow, 1979). A hiring method with increased predictive validity leads to substantial increases in employee performance, as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills. The other determinants of the practical value of a selection method are the variability in performance, the selection ratio, and the output of the employee. These determine the amount of practical value that can be gained by hiring a better performing candidate (Schmidt and Hunter, 1998).

2.3 General Mental Ability test for personnel selection

Cognitive ability has been found to be the best (most valid) general predictor of performance across a variety of jobs (Schmidt and Hunter, 1998). Nowadays, one of the most widely used selection methods is the intelligence test, and it is recognized as the most effective assessment method (Chamorro-Premuzic and Furnham, 2010).

According to Schmidt (2009), "intelligence is the ability to grasp and reason correctly with abstractions (concepts) and solve problems" (p. 4). He also defines intelligence as the ability to learn. People with higher intelligence are able to learn more and to learn it more rapidly. This is especially true when the learning material is more complex. In psychology and in the personnel selection literature, intelligence is often referred to as General Mental Ability (GMA) or (general) cognitive ability.

General Mental Ability predicts many important life outcomes, such as job performance, performance in school, amount of education obtained, rate of promotion on the job, ultimate job level attained, and income (Schmidt, 2009). General Mental Ability is even involved in everyday activities, such as shopping, driving, and paying bills (Brody, 1992). No other single trait predicts so many important real-world outcomes so well as GMA. Because of this, General Mental Ability is the most important trait or construct in all of psychology, and the most "successful" trait in applied psychology (Schmidt, 2009).

The main reason that more intelligent people perform better in their jobs is that they acquire more job knowledge, and acquire it faster. It is job knowledge, not General Mental Ability, that is the major determinant of job performance (Hunter, 1983). People who know how to perform a job perform well, while those who do not cannot. Even less sophisticated jobs, such as truck driver or machine operator, require a considerable amount of job knowledge, while more complex jobs require even more. The simple model of job performance is: GMA determines job knowledge, and job knowledge in turn determines job performance (Hunter, 1986; Hunter and Schmidt, 1996). However, there is also a causal path directly from GMA to job performance, independent of job knowledge. This means that when two workers have equal job knowledge, the more intelligent one performs better. The reason is that problems come up on the job that are not covered by previous job knowledge, and General Mental Ability is used directly on the job to solve such problems (Schmidt, 2009).
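Hunter's model can be sketched as a simple mediation decomposition: the slope of knowledge on GMA (path a), the slope of performance on knowledge holding GMA constant (path b), and the direct path c'; in ordinary least squares the total effect equals c' + a * b exactly. The sketch below uses invented numbers, not data from this study:

```python
from math import fsum

def simple_slope(x, y):
    """OLS slope of y regressed on x alone."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return fsum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           fsum((xi - mx) ** 2 for xi in x)

def mediation(gma, knowledge, perf):
    """Decompose the GMA -> performance slope into an indirect path
    through job knowledge (a * b) and a direct path (c')."""
    n = len(gma)
    mg, mk, mp = sum(gma) / n, sum(knowledge) / n, sum(perf) / n
    g = [v - mg for v in gma]
    k = [v - mk for v in knowledge]
    p = [v - mp for v in perf]
    s_gg = fsum(v * v for v in g)
    s_kk = fsum(v * v for v in k)
    s_gk = fsum(u * w for u, w in zip(g, k))
    s_gp = fsum(u * w for u, w in zip(g, p))
    s_kp = fsum(u * w for u, w in zip(k, p))
    det = s_gg * s_kk - s_gk ** 2
    direct = (s_gp * s_kk - s_kp * s_gk) / det  # c': GMA -> perf, knowledge held constant
    b = (s_kp * s_gg - s_gp * s_gk) / det       # knowledge -> perf, GMA held constant
    a = simple_slope(gma, knowledge)            # GMA -> knowledge
    return {"a": a, "b": b, "direct": direct,
            "indirect": a * b, "total": simple_slope(gma, perf)}

# Invented numbers for six workers (not data from this study).
res = mediation([10, 12, 14, 16, 18, 20],
                [55, 58, 64, 63, 70, 72],
                [3.0, 3.2, 3.6, 3.5, 3.9, 4.1])
print({key: round(val, 3) for key, val in res.items()})
```

A substantial indirect effect alongside a small direct effect is exactly the pattern Hunter's model predicts; Chapter 4 of this thesis tests the same decomposition on the nurse sample.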


General Mental Ability is the summary of all human mental abilities. It includes special aptitudes such as verbal ability, quantitative ability, and spatial ability. Tests of these aptitudes are all positively correlated with each other, a pattern called the "positive manifold". This means that special aptitude tests measure General Mental Ability as well, and therefore they predict job performance too. However, their scores are worse predictors than GMA scores. Specific aptitude tests feature a general component, often referred to as 'g', which is the reason that they function as predictors of job performance (Hunter, Schmidt, and Le, 2006).

To obtain an overall score that represents a measure of General Mental Ability, some GMA tests sum up the correct answers to all the items. The resulting scores also represent measures of the specific mental abilities only if a score is computed for each specific type of ability (e.g., numeric, verbal, reasoning). Most cognitive tests are standardized: they contain items that are reliably scored and can be administered to large groups of people at the same time. GMA tests contain items such as multiple choice questions, sentence completion, short answers, or true-false questions (U.S. Office of Personnel Management, 2008). A large number of professionally developed General Mental Ability tests are available commercially and can be considered when there is no meaningful need to develop a test that refers specifically to the particular job or organization (U.S. Office of Personnel Management, 2008). Unlike specific ability tests, GMA tests are not domain dependent; therefore, they are valid predictors of job performance regardless of job type (Viswesvaran and Ones, 2002).
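The scoring scheme described above can be sketched as follows: one point per correct answer for the overall score, with subscale scores kept per ability type. Item IDs, answer keys, and subscale names are all invented for illustration:

```python
# Hypothetical GMA test scoring: one point per correct answer, with an
# overall score plus per-ability subscale scores.
answer_key = {"v1": "B", "v2": "D", "n1": "A", "n2": "C", "r1": "E"}
subscales = {"v1": "verbal", "v2": "verbal", "n1": "numeric",
             "n2": "numeric", "r1": "reasoning"}

def score(responses):
    """Sum correct answers overall and within each specific ability."""
    totals = {"overall": 0, "verbal": 0, "numeric": 0, "reasoning": 0}
    for item, answer in responses.items():
        if answer == answer_key[item]:
            totals["overall"] += 1
            totals[subscales[item]] += 1
    return totals

# One applicant's answers: everything correct except item v2.
print(score({"v1": "B", "v2": "A", "n1": "A", "n2": "C", "r1": "E"}))
```

The overall total is what a general GMA score reports; the per-subscale totals are what would be needed to report specific abilities separately.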


2.3.1 Evaluating GMA tests

2.3.1.1 Validity

GMA tests have remarkably high predictive validity. According to the results of the meta-analysis of Schmidt and Hunter (1998), when job performance is measured using supervisory ratings, its correlation (predictive/criterion-related validity) with General Mental Ability measures is .51 (where the maximum possible value is 1.00, representing perfect prediction). In more detail, the predictive validity of GMA is .51 for medium-complexity jobs; for more complex jobs, such as professional and managerial jobs, this value is even higher (.56), while for semi-skilled jobs it is lower (.40).

General Mental Ability tests are not job specific. Therefore, they are not related to the job and have no content validity.

2.3.1.2 Fairness

General Mental Ability tests have low face validity, since they are not specifically tailored to the job. Testing only for GMA also ignores the experience of applicants. Another concern about GMA is that it produces racial differences that are three to five times those produced by interviews, biodata, and work sample tests. An additional potential reason for GMA's limited support among the greater public is that people may be convinced that General Mental Ability is not the only, or even the most important, determinant of intelligent behavior. The validity of GMA increases with increasing job complexity; however, organizations are less likely to use GMA tests for selection for high-level jobs as opposed to low-level ones (Viswesvaran and Ones, 2002).


2.3.1.3 Acceptability

GMA tests yield a high return on investment if the organization needs applicants who possess particular cognitive abilities or have a high potential to acquire job knowledge or benefit from training (U.S. Office of Personnel Management, 2008).

2.3.1.4 Cost effectiveness

One of the main reasons that organizations use General Mental Ability tests for personnel assessment is that they are a cost-effective screening mechanism (Moustafa and Miller, 2003). Developing a GMA test is time-consuming and requires highly skilled personnel, but if face validity is not an issue, organizations can choose to purchase such tests off-the-shelf. The validity of commercially available tests is similarly high to that of self-developed ones (U.S. Office of Personnel Management, 2008).

2.3.1.5 Easiness to use

GMA tests are easy to use. They can be administered via paper and pencil or electronically. Unlike many other selection methods, such as structured interviews or work sample tests, GMA tests can be administered to multiple applicants at the same time, so fewer personnel need to be present during the assessment. Another advantage of GMA tests over selection methods such as interviews and work sample tests is that, if the scoring key is available, no highly skilled organizational psychologist is needed to determine the scores, and scoring also takes less time (U.S. Office of Personnel Management, 2008).


2.4 Knowledge economy

But if GMA is a good predictor of job performance because it predicts job knowledge, why not focus on job knowledge instead? To gain a deeper understanding of why job knowledge is so important, we first have to understand the importance of knowledge in the economy.

The knowledge economy is the latest stage of global economic restructuring. So far, the economy has transformed from an agricultural economy to an industrial economy, then to a post-industrial/mass production economy, and finally to a knowledge economy. The current stage is characterized by the ascendancy of technological innovation and the globally competitive need for innovation in new products and processes developed by the research community (Smith, 2002).

Amidon, Formica and Mercier-Laurent (2005) define the knowledge economy as the use of knowledge to generate tangible and intangible value. Technology, especially knowledge-based technology, helps to transfer a portion of human knowledge into machines. Economic value can be generated in various fields by using this knowledge in decision support systems.

Technology is not an outcome of the knowledge economy, but one of its main drivers. Nowadays, efficient production relies increasingly on information and know-how, and in developed countries the majority of workers are knowledge workers. Indeed, many factory workers rely on using their heads more than their hands (Geisler and Wickramasinghe, 2015). The appearance of new media, especially the internet, has increased the production and distribution of knowledge. Networked databases which promote online interaction between users and producers of knowledge make access to existing knowledge much easier (Brinkley, 2006).

The knowledge and skills embodied in people are the key aspects of human capital (Venkatraman and Subramaniam, 2002). A 2001 report of the Organisation for Economic Co-operation and Development (OECD) shows that the knowledge-based economy and the exceptional expansion of the service sector have made human capital central to labour productivity and growth. Therefore, individuals, teams, and companies need to develop the necessary skills and knowledge to be able to participate and compete in an economy that is mostly based on knowledge productivity. Human capital is accumulated through schooling, training, and experience and is critical to the production of goods and services and to the further development of knowledge (Kessels, 2004). The importance of human capital as an input has grown as production processes have become increasingly knowledge intensive. Nowadays, the majority of jobs require the processing of information or the application of specialized knowledge and skills to produce increasingly sophisticated goods and services (Stewart and Tansley, 2002). This is also the case with the production of the applied knowledge that underlies technical progress: it has become more and more reliant on explicit research and development activities, more and more connected to formal science, and therefore increasingly skill-intensive (Kessels, 2004).

2.4.2 The importance of knowledge to organizations

The evergreen question of strategy is how to establish a sustainable competitive advantage. Researchers in the 90s sought to extend our understanding of the determinants of competitive advantage in dynamically competitive market environments by analyzing the role of knowledge in organizations. A popular view of this time (Grant, 1996; 1997) was the Knowledge-Based View (KBV), which goes beyond the insights provided by the Resource-Based View (RBV) (Barney, 1986; 1991) and the related dynamic capabilities approach (Teece, Pisano and Shuen, 1997). The Knowledge-Based View is a special case of the resource-based view, since it conceptualizes knowledge as a resource (Eisenhardt and Santos, 2002). The main idea of the KBV is that the primary role of the firm, and the essence of organizational capability, is the integration of knowledge (Ng, 2002).

According to Barney's (1991) RBV, a sustainable competitive advantage requires resources that are Valuable, Rare, Inimitable, and Non-substitutable (later augmented with Organized). According to Grant (1996), these so-called VRIN (or VRIO) criteria point to knowledge as the strategically most important resource that firms possess. In his paper, Grant makes two assumptions about success in dynamically competitive market environments. First, in dynamic competition, superior profitability is more likely to result from resource- and capability-based advantages than from positioning advantages derived from competitive positions based upon some form of generic strategy. Second, resource- and capability-based advantages probably derive from superior access to and integration of specialized knowledge. The theory of organizational capability is also based on the assumption that, in essence, organizational capability amounts to the integration of individuals' specialized knowledge. Knowledge is a critical input to all production processes; therefore, it has to be created and stored by individuals in a specialized form. When production requires the application of many types of specialized knowledge, the primary role of the firm is the integration of knowledge. The essence of organizational capability, understood as a firm's ability to repeatedly perform a productive task that relates to its capacity for creating value, is the integration of specialist knowledge to perform a discrete productive task (Grant, 1996).

2.5 Job knowledge

Since the sustained competitiveness of the firm derives from the integration of the specialized knowledge of its members, it is in the firm’s best interest to employ individuals who possess this specialized knowledge. Dye et al. (1993) define job knowledge as “the [intra-individual] accumulation of facts, principles, concepts and other pieces of information that are considered important in the performance of one's job” (p. 153).

Hunter (1983) distinguishes two types of job knowledge in the performance of work:

1. The knowledge of technical information that is required to perform the job

2. The knowledge of the processes and judgmental criteria that are required to perform correctly and efficiently on the job

Employees who have greater levels of technical expertise and possess more information about the work processes required for efficient operation are considered to have greater levels of job knowledge (Dye et al., 1993).

2.5.1 Job knowledge test

Job knowledge tests, sometimes called achievement or mastery tests, usually consist of questions designed to assess technical or professional expertise in specific knowledge areas. A job knowledge test evaluates what the applicant knows at the time of taking the test. Contrary to General Mental Ability tests, job knowledge tests make no attempt to assess the applicant’s learning potential. In other words, job knowledge tests can inform employers what an applicant currently knows, but not about the individual’s potential to master new material in a timely manner (U.S. Office of Personnel Management, 2008).

Job knowledge tests are appropriate in situations when applicants have to possess the knowledge required for the job prior to being hired, and not when all the critical knowledge areas needed for the job will be trained after selection. They are especially useful for jobs that require specialized or technical knowledge that can only be acquired over an extended period of time. Some good examples of job knowledge tests are: tests of basic accounting principles for accounting jobs, computer programming for programmer jobs, financial management for financial jobs, and knowledge of contract law for lawyer positions. Job knowledge tests are typically constructed on the basis of an analysis of the tasks that make up the job. The most widely used format of job knowledge tests is the multiple choice question format, but some use other formats, such as written essays and fill-in-the-blank questions (Schmidt and Hunter, 1998).

Constructing job knowledge tests is usually more time-consuming and expensive than developing some other widely used selection methods, for example, typical structured interview protocols (Schmidt & Hunter, 1998). However, if the organization does not possess the resources to construct such tests on its own, job knowledge tests can also be purchased commercially. For example, off-the-shelf tests are available to measure the job knowledge required of machinists (knowledge of metal cutting tools and procedures) (Dye et al., 1993). Job knowledge tests need to be tailored to the actual job and kept up to date. Dye et al. (1993) show that for predicting job performance, job-specific tests are always superior to off-the-shelf tests: the validity of job-specific tests is almost twice as high.

Licensing exams, agency certifications, and professional certification programs are also considered job knowledge tests. Licensure and certification are both types of credentialing: processes that grant a designation indicating competence in a subject or area. Licensure is more restrictive than certification and typically concerns a mandatory governmental requirement that is necessary to practice a particular profession or occupation. A passing score on a job knowledge test is usually one of the most important requirements for obtaining a professional license. The goal of licensure is to protect the practice and the title, meaning that only individuals who hold a license are allowed to practice and use a particular title. For example, in the United States, to practice law a law school graduate has to apply for admission to a state bar association, which demands passing the association’s licensure examination. Certification is most often a voluntary process established within a nongovernmental or single governmental agency in which individuals are recognized for advanced knowledge and skill. As with licensure, certification usually requires a passing score on a job knowledge test (U.S. Office of Personnel Management, 2008).

2.5.1.1 Job Knowledge Test Development

Dubois, Shalin, Levi and Borman (1993) present a standardized procedure for the development of job knowledge tests. Job knowledge tests are usually validated using a content validity approach, which entails providing evidence for the degree of overlap between the contents of the test and the job. In practice, content validity is established by having a panel of experts determine the relevance, representativeness, and fairness of the test content with respect to successful job performance. The methods used to develop content-valid tests are known as content-oriented approaches to test design (Dubois et al., 1993). The job knowledge test development process of Dubois et al. (1993) consists of four major tasks: specification of the task domain; specification of the knowledge domain; development of the test content; and implementation of test scoring and validation.

Step 1: Definition of the task domain

The development of the test begins with a systematic description of job tasks and behaviors. Questionnaire-based ratings of time spent and importance of tasks performed on the job are the typical methods for defining the tasks associated with a job. Within-job differences in tasks are defined by clustering these ratings into somewhat independent dimensions of job performance. The relative importance of these dimensions serves as a basis for representatively selecting and weighting test items.

Step 2: Definition of the job knowledge domain

Defining the knowledge associated with the job happens in three steps:

1. Based on the task list created in the previous step, a comprehensive list of job knowledge elements and categories are generated through job observation, content analyses of documents and training materials, interviews with job experts, etc.

2. The knowledge elements and categories are structured into a questionnaire for job incumbents, whose role is to rate each knowledge element for frequency and importance of use on the job.


3. After analyzing the results, knowledge elements that meet a minimum criticality threshold (combining frequency and importance) are then selected to be included in the test.

Step 3: Test content and format

Content-oriented test development procedures establish test validity by demonstrating that the test representatively and fairly samples the job content (Guion, 1979). Guion (1979) differentiates between the test-content universe and the test-content domain. The test-content universe contains all the tasks, test conditions and measuring procedures that might be inferred from the job knowledge domain defined by the procedures discussed in the previous paragraphs. The test-content domain contains the actual specifications for the test content, methods and format, depending upon the purpose and conditions of testing. Selecting elements of the test-content universe for the test-content domain is based on decision rules which are influenced by the purpose of the test. Example methods are random sampling or representative sampling based on the relative importance of the knowledge categories (Guion, 1979).
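One such decision rule, representative sampling in proportion to category importance, can be sketched as follows. The category names, weights, and the largest-remainder rounding used here are illustrative assumptions, not details taken from Guion (1979) or from any actual test:

```python
# Hypothetical knowledge categories with importance weights summing to 1
# (names and weights are invented for illustration).
categories = {"anatomy": 0.40, "pharmacology": 0.35, "hygiene": 0.25}

def allocate_items(categories, test_length):
    """Allocate a number of test items to each knowledge category in
    proportion to its rated importance (largest-remainder rounding)."""
    raw = {c: w * test_length for c, w in categories.items()}
    counts = {c: int(v) for c, v in raw.items()}
    # Give any items lost to truncation to the categories with the
    # largest fractional remainders.
    leftover = test_length - sum(counts.values())
    for c in sorted(raw, key=lambda c: raw[c] - int(raw[c]), reverse=True)[:leftover]:
        counts[c] += 1
    return counts

print(allocate_items(categories, 20))
# → {'anatomy': 8, 'pharmacology': 7, 'hygiene': 5}
```

The same weights could instead drive probabilistic sampling from an item bank; proportional allocation is simply the deterministic variant.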

Step 4: Test scoring and validation

Interpretations of test scores are based on comparisons of the relevance and representativeness of the test items to the knowledge requirements of the job.

Lammlein (1986) states that content validity could be established by showing that:

1. Every item is relevant to at least one task of the job.


3. All the items were rated by job experts as necessary prerequisites of job performance that must be memorized

4. All the items have an appropriate difficulty level for the job

5. The number of items in each category is proportionate to the importance of that knowledge category on the job.

2.5.2 Evaluating job knowledge tests

A major difficulty in evaluating job knowledge tests in general is that knowledge is, by definition, specific to particular jobs or job families, and job knowledge tests vary much more in quality of construction and reliability than ability tests (Dye et al., 1993). This implies that the general findings of studies (such as Dye et al., 1993; Schmidt and Hunter, 1998; Dubois et al., 1993) may not apply to each specific test.

2.5.2.1 Validity

The research of Schmidt and Hunter (1998) found that the predictive validity of job knowledge tests is high (.48 against supervisor ratings of job performance), while the meta-analytic research of Dye et al. (1993) found a predictive validity of .45. Broken down by job complexity, the predictive validity is .62 for high-complexity jobs and .35 for both low- and medium-complexity jobs. The high correlation between job knowledge and job performance shows that job knowledge tests have high criterion-related validity.

Job knowledge tests also have high content validity. The previous chapter on job knowledge test development shows that job knowledge tests are usually developed using a content validity approach, which ensures that the assessment is highly related to the tasks performed on the job.

We can assume that job knowledge tests have high construct validity. The literature provides no direct evidence, but indirect evidence is available. The first piece of evidence is the high correlation between job knowledge and job performance. The second is that job knowledge tests show high convergent validity, indicated by the correlation of .54 between job knowledge and GMA and the correlation of .71 between job knowledge and work samples (Hunter, 1986). (Convergent validity refers to the degree to which two measures of constructs that theoretically should be related are in fact related (Campbell and Fiske, 1959).) Evidence for the discriminant validity of job knowledge tests has yet to be found, owing to the limited research available. (Discriminant validity tests whether concepts or measurements that are supposed to be unrelated are in fact unrelated (Campbell and Fiske, 1959).)

2.5.2.2 Fairness

Schinkel, van Dierendonck and Anderson (2004) found that applicants usually favor procedures that they perceive to be job-related over apparently unrelated tests. Job knowledge tests have high face validity (applicants perceive them to be very fair) because such tests are typically designed to measure knowledge directly applied to the performance of the specific job. Applicants prefer the opportunity to perform: the chance to demonstrate their knowledge, skills, and abilities and the possibility of exerting some control during testing (Smither, Reilly, Millsap, Pearlman and Stoffey, 1993).

By testing only GMA rather than job knowledge we miss an important factor, namely a candidate’s experience. The studies of Hunter (1983) and Hunter and Schmidt (1996) showed that job knowledge is the major link between ability and job performance and between job experience and job performance. Combining job experience with GMA results in incremental validity in predicting job performance (Schmidt & Hunter, 1998). The correlation between job experience and job performance is stronger when experience on the job does not exceed 5 years: job performance increases linearly with increasing experience up to about 5 years of job experience, after which the curve becomes increasingly horizontal and additional job experience produces little further increase in job performance (Schmidt & Hunter, 1998). This points to another advantage of job knowledge over GMA, namely that job knowledge can be developed and is not largely predetermined at birth.

Job knowledge tests are typically less likely to differ in predictive validity results by gender and race than other types of tests such as General Mental Ability tests (www.siop.org).

2.5.2.3 Acceptability

Job knowledge tests may be expected to provide a high return on investment when the organization needs applicants who possess technical expertise in specific job knowledge areas. If the job knowledge test contributes little to the prediction of job performance above and beyond inexpensive and readily available General Mental Ability tests, then their utility is lower (U.S Office of Personnel Management, 2008).

Job knowledge tests can reduce business costs by identifying individuals for hiring, promotion, or training who possess the needed skills and abilities. These tests can also provide useful feedback to test takers regarding needed training and development. By identifying what an applicant knows and does not know, job knowledge tests are promising tools for the design of personalized training: knowing in which knowledge areas a person needs training and in which (s)he does not can greatly reduce the cost and duration of the training. However, this only works when applicants have prior job knowledge; it is an inappropriate selection method for jobs where the knowledge can be obtained via a short training period (www.siop.org).

An advantage of job knowledge tests is that, unlike GMA tests, they are job-specific. Hence they can meet Barney’s (1986) VRIN criteria, namely to be Valuable, Rare, Inimitable, and Non-substitutable, and can therefore be a source of sustainable competitive advantage. An off-the-shelf GMA test is easily accessible to any organization; it is therefore neither rare nor inimitable. Organizations that use GMA tests will identify the same applicants as strong candidates regardless of the job, while a job knowledge test is tailored to the specific job and therefore identifies applicants who are likely to perform well on that specific job.

2.5.2.4 Cost-effectiveness

A major disadvantage of job knowledge tests is that, since they are job-specific, they are typically expensive and time-consuming to develop. As detailed in the job knowledge test development chapter, developing such tests consists of multiple steps which require many work hours of organizational psychologists and job experts. Since the tasks of a specific job often change, frequent updates to the test content and validation may be needed to keep up with changes in the job. Instead of developing the test itself, an organization can purchase a job knowledge test off the shelf; although such tests are typically less expensive, their validity is almost half that of self-developed job-specific tests (Dye et al., 1993).


2.5.2.5 Ease of use

Like GMA tests, job knowledge tests are easy to use. They can be administered using paper and pencil or electronically. Unlike many other selection methods, such as structured interviews or work sample tests, job knowledge tests can be administered to multiple applicants at the same time, so fewer personnel need to be present during the assessment. Another advantage over those selection methods is that, if the scoring key is available, there is no need for a highly skilled organizational psychologist to determine the scores, and scoring also requires less time (U.S. Office of Personnel Management, 2008).

2.5.2.6 Fakeability

The research of Nguyen, Biderman and McDaniel (2005) highlights another advantage of job knowledge tests over some other selection methods, namely that job knowledge tests are not fakeable. When respondents are asked to answer using a knowledge response format, applicants respond to the best of their ability regardless of the pressure to fake, whereas on other types of personnel assessment, such as interviews and personality tests, applicants can fake in order to improve their chances of being selected.

2.6 Combining GMA and job knowledge tests

The previous chapters showed that both GMA tests and job knowledge tests have several advantages and disadvantages too, but what happens if we combine the two methods?


When multiple selection tools are used, the combined validity of the tools can be considered. To the extent that the assessment tools measure different job-related factors, each tool will provide unique information about the applicants’ ability to perform the job. The tools can predict the applicants’ job performance more accurately when used together than either tool can alone. A tool’s incremental validity is the amount of predictive validity it adds relative to another tool. Incremental validity is an important attribute of an assessment: even when one already has a test with high validity, a test with incremental validity still explains unique variance in job performance above and beyond the original test (Schmidt and Hunter, 1998).

Schmidt and Hunter (1998) found that job knowledge tests increase the validity by .07 above and beyond that of GMA measures alone, yielding a 14% increase in validity and utility. Therefore combining GMA tests with job knowledge tests can have significant practical value to organizations.
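As a minimal numerical illustration of the ΔR² logic behind incremental validity, one can compare a regression of performance on GMA alone with a regression on GMA plus job knowledge. The data below are synthetic and the effect sizes arbitrary; this is a sketch of the computation, not a reproduction of Schmidt and Hunter’s (1998) meta-analytic estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic scores: job knowledge is partly driven by GMA but also
# carries unique signal about performance (illustrative effect sizes).
gma = rng.normal(size=n)
job_knowledge = 0.5 * gma + rng.normal(size=n)
performance = 0.3 * gma + 0.4 * job_knowledge + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 of an OLS regression of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gma = r_squared([gma], performance)
r2_both = r_squared([gma, job_knowledge], performance)
print(f"Incremental validity (delta R^2): {r2_both - r2_gma:.3f}")
```

In-sample R² can never decrease when a predictor is added, so the interesting question is whether the increment is large enough to matter, which is what the meta-analytic .07 figure addresses.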

2.7 The OntoHR project

The findings about job knowledge summarized in the previous chapters served as a basis for the OntoHR project. The project’s four participating partners, with backgrounds in knowledge management, information technology, and human resource management, and funded by the European Commission’s Lifelong Learning Programme, aimed to bridge the terminologies of these different fields and create a powerful tool for tackling the challenges of selection and recruitment (Kismihók and Mol, 2008). Although there has not been much research on job knowledge tests lately, their approach utilizes job knowledge as a predictor of job performance. The objective was to develop an ontology-based personnel decision system that helps to map qualifications in vocational education to current and valid job roles, to test and evaluate students on the basis of valid, labour-market-driven competencies, to identify missing competencies and provide the learning content needed to acquire them, and to address the weaknesses of particular VET curricula, thereby providing ad-hoc support (www.ontohr.eu).

The outcome of the project was the OntoHR system, which provides detailed, personalized assessment and training of the essential technical skills and related job knowledge elements required for the ICT system analyst position. The main components of the system are a customizable, adaptive job knowledge test and a General Mental Ability test (www.ontohr.eu).

2.8 The Med-Assess project

After the OntoHR project, the development of the system continued in the context of the European Commission’s Leonardo da Vinci Innovation Transfer Call. The second project, the Adaptive Medical Profession Assessor (Med-Assess for short), aimed to transfer the OntoHR system from the ICT to the health domain. The Med-Assess test provides a solution for nurses to assess their knowledge and skills and to identify in which areas they need further training. Additionally, it is a tool for hospitals and medical institutions for recruitment and for testing the knowledge and skills of current employees (medassess.eu). Although the OntoHR test contains a GMA test, the Med-Assess test does not.


While the test development process of the OntoHR project followed the previously presented method of Dubois et al. (1993), in the Med-Assess project the first step of the process, the definition of the task domain, was modernized with the help of information technology. While the original method uses questionnaires for job holders to define the tasks associated with the job, the OntoHR approach proposes to use international occupation databases (such as O*NET, ESCO, DISCO) and text mining of job vacancies for this purpose, and then to have the results validated by subject matter experts (www.med-assess.eu).

Kobayashi et al. (2015) state that using text mining instead of more traditional methods of job analysis is less expensive, less time-consuming and more reliable. They propose to use job vacancies posted on the internet, since these provide detailed and rich data to inform job analysis via text mining. With the techniques presented in their research, it is possible to generate a list of tasks for any kind of job or to validate an existing task list. The aim of using the improved task domain definition is to create a job knowledge test that covers the required job knowledge more thoroughly and therefore has increased content and predictive validity.

Although the studies on which the ideas of OntoHR and Med-Assess are based suggest that they are promising assessment methods, the tests themselves had not yet been validated; therefore, their merit relative to other selection methods is still unknown. This research focuses on the Med-Assess test, since in that project more emphasis was placed on the development of the test content, while the OntoHR project focused rather on developing the test software itself. The main aim of this research is to investigate the Med-Assess test’s usefulness as a selection tool.


2.9 Hypothesis development

Based on the literature review, several arguments can be made both for and against GMA tests and job knowledge tests. The research of Hunter (1986), Hunter and Schmidt (1996) and Schmidt (2009) shows that GMA is a good predictor because it predicts job knowledge. One can assume that job knowledge is the stronger predictor because of its proximity to job performance. As seen, predictive validity is the most important characteristic of a personnel assessment, yet little research is available on which of these selection methods has the higher predictive validity. Only two studies compare the two: Schmidt and Hunter (1998) and Dye et al. (1993). Schmidt and Hunter (1998) show that the predictive validity of GMA tests is slightly superior (.51 against .48), while Dye et al. (1993) conclude that the validity of job-specific knowledge tests rivals that of GMA. However, these studies are not very recent, and while GMA testing has not changed, there have been developments in the methods of job knowledge test development (as shown in the case of the Med-Assess project) which could increase the predictive validity of such tests. Based on these arguments, the first hypothesis is:

Hypothesis 1: Job knowledge tests have higher predictive validity than GMA tests.

An important argument for using job knowledge tests instead of GMA tests was that applicants perceive job knowledge tests to be fairer than GMA tests, because applicants favor assessments that they perceive to be job-related over apparently unrelated tests (Schinkel et al., 2004). They prefer the opportunity to demonstrate their knowledge, skills, and abilities and the possibility of exerting control during testing (Smither et al., 1993).


Hypothesis 2: Applicants perceive the job knowledge test to be more face valid than the GMA test.

As a theoretical contribution, the study further investigates the relation of General Mental Ability, job knowledge, and job performance. So far, there has been little research examining the mechanisms through which General Mental Ability affects performance. As presented before, it is thought that General Mental Ability affects the acquisition of job knowledge and that job knowledge in turn affects performance (Hunter and Schmidt, 1996). Besides that, there is also a causal path directly from GMA to job performance, independent of job knowledge (Schmidt, 2009). These findings are the basis of the third hypothesis.

Hypothesis 3: Job knowledge mediates the relationship between General Mental Ability and job performance.
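Such a mediation hypothesis is commonly tested with the product-of-coefficients approach: the indirect effect is a·b, where a is the GMA → job knowledge slope and b is the job knowledge → performance slope with GMA held constant. A minimal sketch on synthetic data (variable names and effect sizes are illustrative, not results of this study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic data built so that GMA affects performance partly
# through job knowledge and partly directly.
gma = rng.normal(size=n)
job_knowledge = 0.6 * gma + rng.normal(size=n)
performance = 0.2 * gma + 0.5 * job_knowledge + rng.normal(size=n)

def slope(predictors, y, col):
    """OLS coefficient of predictor `col` when y is regressed on them all."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[col + 1]  # +1 skips the intercept

a = slope([gma], job_knowledge, 0)                    # GMA -> job knowledge
b = slope([gma, job_knowledge], performance, 1)       # JK -> performance, GMA fixed
c_prime = slope([gma, job_knowledge], performance, 0) # direct effect of GMA

print(f"indirect effect (a*b): {a * b:.2f}, direct effect (c'): {c_prime:.2f}")
```

In practice the significance of a·b would be assessed with a Sobel test or bootstrap confidence interval rather than read off the point estimate alone.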

While the OntoHR test features a GMA test, the Med-Assess test does not. GMA tests and job knowledge tests do not measure the same thing: job knowledge tests measure the amount of job knowledge currently possessed, while GMA tests measure the ability to acquire job knowledge. Since both are effective predictors and they assess different traits, it is hypothesized that each will explain unique variance in job performance and that combining the test scores will therefore result in better predictive validity. The results of the meta-analysis of Schmidt and Hunter (1998) also support this.

Hypothesis 4: Job knowledge explains unique variance in job performance above and beyond GMA, and GMA explains unique variance above and beyond job knowledge.

Figure 1: Research model (GMA, job knowledge, and job performance)

3. Methods

3.1 Sample

Since job performance scores were needed, the sample had to consist of nurses who were already working, not of applicants for a position or students. A Hungarian hospital (which wishes to remain anonymous) participated in the research and offered to ask 110 nurses to fill out our tests and questionnaires. Of the 110 nurses, 108 fully completed them. The nurses were rated by their immediate supervisors, 14 of whom participated.

Taking all 108 respondent nurses together (Mage = 36.6, SDage = 7.9, age range: 22-57), 93.5% were female. The sample covered a wide range of educational backgrounds: 4.6% of the respondents only completed high school; the majority, 63.9%, completed higher-level vocational training; 28.7% possess a bachelor’s degree; and the remaining 2.8% achieved a master’s degree. 18.9% of the nurses had work experience of less than 5 years, 16% between 6 and 10 years, 17.9% between 11 and 15 years, another 17.9% between 16 and 20 years, 8.5% between 21 and 25 years, 14.2% between 26 and 30 years, and the remaining 6.6% more than 30 years. Of the respondents, 87% had not worked in any other hospital; for 11.1% this is the second hospital they have worked in; and 0.9% each had worked in 2 and 4 other hospitals before. 73.1% of the respondents had never been promoted, 24.1% had been promoted once, and the remaining 2.8% twice. 88.9% had never been unemployed for more than a month, 9.3% had been once, and the remaining 1.9% twice. The distribution of the respondents’ IQ is very similar to the IQ distribution of the Hungarian population (Templer and Arikawa, 2006): 50% have an IQ below 100, 14.8% score between 100 and 103, 4.6% between 104 and 107, 9.3% between 108 and 112, 13% between 113 and 118, 3.7% between 119 and 124, 2.8% between 125 and 129, and the remaining 1.8% over 135.

3.2 Procedure

The nurses filled out the tests and questionnaires in a controlled environment, the local university’s computer room, to avoid any form of cheating. They completed the Med-Assess job knowledge test, a GMA test and an Employability questionnaire (for another study) successively, with a 10-minute break between them. The GMA test was obtained from a partner organization, which also scored the tests.

Since the participants’ first language is Hungarian and no other language is required in their position, all tests and questionnaires were translated into Hungarian. To ensure that the content of the items remained unchanged, the translated Hungarian items were translated back into English by a third person. The time limit for the Med-Assess test was 90 minutes, which in our experience is more than enough for anyone to complete it comfortably. The time limit for the GMA test was 40 minutes, as set by the creators of the test, while the Employability questionnaire had no time limit but took no one more than 15 minutes. Since the whole process took around 3 hours per nurse, each received an incentive of 3000 HUF (≈10 EUR) for their contribution.

The immediate supervisors each received a personalized questionnaire to rate their employees’ performance. A personalized questionnaire contained the same set of questions for all the nurses the supervisor had to rate. The questionnaire links were sent to them electronically with instructions; a controlled environment was not needed, since they had no motivation to cheat. Filling out a questionnaire took around 20 minutes on average, and supervisors received 400 HUF (≈1.3 EUR) per rated person.

3.3 Measurement of variables

Job knowledge

To collect job knowledge scores, test takers completed the Med-Assess job knowledge test. The test consists of 173 multiple-choice questions, which assess every knowledge element required to perform all the nursing tasks. For each item, respondents had to choose the correct answer from 4 possibilities; each correct answer is worth 1 point. The Med-Assess implementation of the test is adaptive, which would result in incomparable data across respondents (since respondents would not necessarily answer the same items), so during the data collection respondents completed the full set of items. An example item is: What is the pH value of healthy blood? (1. 8.55-10.55, 2. 7.35-7.45, 3. 4.55-6.55, 4. 6.5-8.5)

General Mental Ability

To collect GMA results, participants filled out the paper-and-pencil version of a GMA test. The test consists of Raven’s Progressive Matrices items, which are recognized as one of the most accurate measurements of General Mental Ability (Mackintosh and Bennett, 2005). Due to the strict regulations of the test’s owners, a scoring key could not be obtained, so scoring was done by the partner organization; therefore, the data set contains only the final test scores and the participants’ calculated IQ scores. The final test score is the number of correct answers, while the calculated IQ score expresses how intelligent a person is compared to the population (Fitzgerald, Grey and Snowden, 2007). The latter is only usable to describe the sample, while the raw scores can be statistically analyzed.

Job performance

Since no job performance data was available at the participating institute, it had to be collected as well. Supervisory ratings are widely used as job performance measures and can be collected at any organization (Viswesvaran, Ones and Schmidt, 1996). Supervisory ratings were provided by the head nurses (immediate supervisors). They filled out a questionnaire about their employees’ overall performance, in-role performance, and task performance. The 9 overall performance questions are from the research of Ashford and Black (1996), the 7 in-role performance questions are from Williams and Anderson (1991), while the 49 task performance questions are based on the task list generated by the Med-Assess project.

Example items:

Overall performance: Overall, how do you see this employee in terms of their work performance? Measured with a five-point Likert scale from (1 = Unsatisfactory) to (5 = Outstanding).

In-role performance: Adequately completes assigned duties. Measured with a Likert scale from (1 = Strongly disagree) to (5 = Strongly agree).

Task performance: Please rate how the employee performs the following task: Wound care. Measured with a Likert scale from (1 = Very badly) to (5 = Excellently).

Fairness

The fairness of the Med-Assess and GMA tests was measured immediately after the completion of each test, using 4 questions from Bauer, Truxillo, Sanchez, Craig, Ferrara and Campion (2001) and Sylva and Mol (2009): 2 about job relatedness and 2 about process fairness. The same set of questions was asked about both tests, resulting in 2 constructs: Fairness of GMA and Fairness of Job Knowledge.

Example item: “Doing well on this test means a person can do the nurse job well.”, measured on a Likert scale from 1 (Strongly disagree) to 5 (Strongly agree).

Control variables

Results of the current study are controlled for five control variables. Turban and Jones (1988) showed that demographic variables could potentially account for variance in job performance ratings; therefore, Age and Gender were added as control variables. Personal Liking was measured since the research of Lefkowitz (2000) shows that it may impact supervisory ratings: supervisors tend to rate employees’ Job Performance higher if they like them. Who the Supervisor of the employee is was controlled for to resolve possible rater effects. Department was added, since the job of the nurses can differ slightly depending on the department they work in, which may affect the predictive validity of GMA or Job Knowledge (Alpander, 1990). These items were included in the questionnaire that measured the fairness of the GMA test. All control variables were measured with 1 item, except for Personal Liking, which was measured with the 2 questions of Liden, Wayne and Stilwell (1992).

An example item for Personal Liking: “I think my subordinate would make a good friend.”, measured on a Likert scale from 1 (Strongly disagree) to 5 (Strongly agree).

3.4 Analytical strategy

As a first step, frequencies were checked to examine if there were any errors in the data set. The amount of missing data was < 10% for all variables.

3.4.1 Recoding

The next step was recoding. The raw Job Knowledge results are numbers between 1 and 4, indicating which of the four answer options the respondent chose. To obtain meaningful results, wrong answers were recoded to 0 and correct ones to 1. The correct answers to the GMA test were only known by the partner organization, which performed the correction of the tests, so no recoding was needed there on our side. The supervisory ratings contained numerous counter-indicative items (items phrased so that agreement with the item represents a low level of the construct being measured); these were recoded.
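The two recoding steps above can be sketched as follows. This is an illustrative example only: the column names, the answer key, and the data are hypothetical, not taken from the actual Med-Assess data set.

```python
import pandas as pd

# Hypothetical raw data: each Job Knowledge item holds the chosen option (1-4),
# and one supervisor-rating item is counter-indicative on a 1-5 Likert scale.
df = pd.DataFrame({
    "jk_item1": [2, 4, 1],   # chosen answer option per respondent
    "jk_item2": [3, 3, 2],
    "perf_neg1": [1, 5, 2],  # counter-indicative rating item
})

# Assumed answer key for the knowledge items (illustrative only)
answer_key = {"jk_item1": 2, "jk_item2": 3}

# Score knowledge items: 1 if the chosen option matches the key, else 0
for item, correct in answer_key.items():
    df[item] = (df[item] == correct).astype(int)

# Reverse-code counter-indicative 5-point Likert items: 1<->5, 2<->4
df["perf_neg1"] = 6 - df["perf_neg1"]
```

The reverse-coding formula (scale maximum + 1 minus the raw answer) is the standard way to flip a Likert item so that higher values always represent more of the construct.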

3.4.2 Missing values

After recoding, missing values were dealt with. In both the GMA and Job Knowledge tests a missing answer represents a wrong answer, so missing values were replaced by 0s. The questions of the supervisor’s questionnaire that were used to calculate the Job Performance score were set to “Force response” in the questionnaire software, which obliged respondents to answer them, so no data was missing there. Only 1 respondent’s Age and another respondent’s Fairness of Job Knowledge values were missing from the whole data set; these were substituted by the mean of the respective variable.
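A minimal sketch of both imputation rules, using hypothetical column names and data:

```python
import pandas as pd

# Hypothetical scored test data (1 = correct, 0 = wrong, NaN = unanswered)
# and a demographic variable with one missing entry.
df = pd.DataFrame({
    "gma_item1": [1, None, 0],
    "jk_item1":  [None, 1, 1],
    "age":       [34.0, None, 40.0],
})

# On ability/knowledge tests a blank answer counts as wrong: fill with 0
test_items = ["gma_item1", "jk_item1"]
df[test_items] = df[test_items].fillna(0)

# For the stray missing value, substitute the mean of the variable
df["age"] = df["age"].fillna(df["age"].mean())
```

Mean substitution is only defensible here because a single value per variable was missing; with more missing data it would bias variances downward.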

3.4.3 Reliability

Reliability checks were run for all multi-item variables: Fairness of GMA, Fairness of Job Knowledge, Personal Liking, Job Knowledge, and Job Performance. The reliability of the General Mental Ability test was obtained from the owner of the test. Cronbach’s alpha is an estimator of internal consistency; it was computed to verify whether all items in a scale measure the same construct or whether some of them should be excluded from the analysis. Job Knowledge, Fairness of Job Knowledge, and Personal Liking have Cronbach’s alphas higher than 0.7 but lower than 0.9, which indicates good scales, while Job Performance, GMA, and Fairness of GMA have Cronbach’s alphas higher than 0.9, which indicates very high levels of internal consistency. The exact values can be found in Table 2.
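Cronbach’s alpha can be computed directly from its definition: k/(k−1) multiplied by one minus the ratio of the summed item variances to the variance of the scale total. The scale data below is invented purely to demonstrate the calculation.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item Likert scale answered by five respondents
scale = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [5, 5, 2, 4, 1],
    "q4": [4, 5, 3, 4, 2],
})
alpha = cronbach_alpha(scale)
```

For this toy data alpha is above 0.9, i.e. it would fall in the “very high internal consistency” band used in the text.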


3.4.4 Computing Scale Means

As the final preliminary step, new variables were created as a function of existing variables in order to test the hypotheses: for each construct, the mean of all its items was calculated to arrive at a score on that construct. Means and standard deviations were calculated with SPSS for all these new variables; the values can be found in Table 1.
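The scale-score computation is a per-respondent row mean over the scale’s items; the item names and values below are hypothetical, not the actual Fairness of Job Knowledge data.

```python
import pandas as pd

# Hypothetical responses to a 4-item fairness scale (three respondents)
df = pd.DataFrame({
    "fair_jk1": [3, 2, 4],
    "fair_jk2": [4, 2, 3],
    "fair_jk3": [3, 1, 4],
    "fair_jk4": [2, 3, 5],
})

# Scale score = mean of the scale's items, one value per respondent
items = ["fair_jk1", "fair_jk2", "fair_jk3", "fair_jk4"]
df["fairness_jk"] = df[items].mean(axis=1)

# Descriptives for the new variable, as reported in the descriptives table
mean = df["fairness_jk"].mean()
sd = df["fairness_jk"].std(ddof=1)
```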

3.4.5 Skewness and Kurtosis

After the descriptive statistics, skewness, kurtosis, and normality tests were performed. Table 1 contains the skewness and kurtosis values. The table shows that all five variables have close to ideal skewness, meaning values between -0.5 and 0.5, so no normalization technique is required. GMA and Job Knowledge also have close to ideal kurtosis, while the kurtosis of Job Performance, Fairness of Job Knowledge, and Fairness of GMA is moderately low, indicating distributions that are somewhat flatter than normal. No techniques were needed to normalize these distributions either, since the risks posed by both skewness and kurtosis are reduced in larger samples, which is the case for this research (N = 108).

Table 1

Variable                  Mean    Std. Deviation  Skewness  Kurtosis
General Mental Ability    16.53    5.55            0.38     -0.33
Job Knowledge            109.75   13.25           -0.47      0.14
Job Performance            4.11    0.48           -0.05     -0.68
Fairness Job Knowledge     2.81    0.88            0.45     -0.54
Fairness GMA               2.63    1.04            0.16     -0.66
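The skewness and kurtosis checks can be reproduced with pandas, which (like SPSS) reports bias-corrected sample skewness and excess kurtosis, so 0 corresponds to a normal distribution. The scores below are invented for illustration.

```python
import pandas as pd

# Hypothetical scale scores for a handful of respondents
scores = pd.Series([3.2, 3.8, 4.0, 4.1, 4.3, 4.4, 4.6, 4.9])

# Sample skewness and excess kurtosis, comparable to the SPSS
# values summarized in Table 1 (0 = normal distribution)
skew = scores.skew()
kurt = scores.kurt()
```

A value of skewness between -0.5 and 0.5, the rule of thumb used in the text, would count as close to ideal; this toy series is mildly left-skewed.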


Table 2

Variables                      Mean    SD     1      2      3      4      5       6       7       8       9       10
1  Gender (0=female, 1=male)    0.06   0.25
2  Age (yrs)                   36.67   7.95  -.199*
3  Department                  12.17   7.35   .051  -.103
4  Supervisor                   8.44   4.34   .045   .127   .081
5  Personal liking              3.51   0.71   .101   .076   .117  -.079  (.819)
6  General Mental Ability     109.75  13.25   .025   .082   .010  -.008  -.170   (.902)
7  Job Knowledge               16.53   5.55   .029  -.125  -.046  -.069   .000    .248**  (.846)
8  Job Performance              4.11   0.48   .010   .168   .004   .173   .598**  .006    .012    (.987)
9  Fairness GMA                 2.81   0.88   .100  -.135   .152   .173  -.044    .021   -.038   -.121   (.939)
10 Fairness Job Knowledge       2.63   1.04   .213* -.108   .111   .176  -.002   -.209*  -.050   -.052    .534** (.878)

Note: N = 108. Reliabilities are reported along the diagonal. ** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
