
ENTREPRENEURIAL RISK-TAKING PROPENSITY:

How rational thinking, intuition and heuristics form strategic decision-making

Hilda Dijkstra, September 2011

Master's Thesis, BA Business Development

University of Groningen, Faculty of Economics and Business
First supervisor: Dr. J.D. van der Bij


ABSTRACT

This study investigates the entrepreneur’s propensity to take risks. Risk-taking propensity, the willingness to take or avoid risks, stands behind each strategic decision and eventually determines firm performance. It is therefore crucial to understand which factors influence this willingness to take risks. We model entrepreneurial risk-taking by investigating how two heuristics and seven of their resulting cognitive biases mediate the relationship between the intuitive and rational thinking systems of entrepreneurs and their general risk-taking propensity. Moreover, we investigate whether cognitive biases really do ensue from heuristics, as has been theorized. Our hypotheses were tested among 289 American entrepreneurs. We found that entrepreneurs are more likely to make use of heuristics and biases when they use their intuitive system. In contrast, when entrepreneurs use their rational system, they are less likely to use heuristics and biases. When entrepreneurs apply heuristics and biases in their decision-making processes, their risk-taking propensities increase. Moreover, we found that the biases ‘hindsight bias’, ‘illusory correlation’ and ‘overconfidence’ result from the availability heuristic, and that ‘illusion of control’, ‘law of small numbers’ and ‘regression fallacy’ result from the representativeness heuristic.

INTRODUCTION

Entrepreneurs are the devoted and persistent initiators of creativity and innovation (Shane, 1994). Although many people see opportunities, only the entrepreneur actually carries them out. Entrepreneurs are pioneers. They have the passion to create something great. The successful ones are perseverant and passionate about what they do, and have a strong resilience to deal with losses and high pressure (Ma and Tan, 2003). An entrepreneur can be any individual who has the ownership of a new innovation, product, enterprise or idea and is therefore responsible for the inherent losses and gains of his activities (Sullivan and Sheffrin, 2003). Since the entrepreneur is the chief of his organization, he is responsible for the strategy of his business in order to win the competitive game (McGee, 2005). The way an entrepreneur perceives and takes risks is therefore crucial for the success of his business. For both science and practice, it is thus interesting to understand what determines entrepreneurs’ willingness to take risks, especially in the light of strategic decisions, since these decisions have significant effects on the activities of the entrepreneur, his organization, and its stakeholders (Borison and Hamm, 2010). Such decisions are generally non-routine, and they usually commit important resources or set important standards (Narayanan and Fahey, 1982; Mintzberg, 1978). Strategic decisions are those that commit entrepreneurs to actions that will have important consequences for the long-term performance of their business (Borison and Hamm, 2010). What dominates strategic decision-making is the uncertainty regarding a future situation, and thus the major risks inherent in this kind of decision. Aven and Renn (2009) define risk as the uncertain probability that a particular human decision and the resulting (in)action or (in)activity might result in an undesirable outcome.
While assessing risks and their uncertain outcomes is part of entrepreneurial life, it is a tough job. Risks have to be judged in situations of high uncertainty and complexity, using the limited or incomplete information available (Wickman, 2003). For entrepreneurs especially, it is therefore difficult to process, analyze and remember all relevant information in order to make a correct judgment (Busenitz and Barney, 1997).

Within the strategic and entrepreneurial literature streams, a lot of research has been done in the area of entrepreneurial risk-taking and decision-making. Most answers are provided by the ‘heuristics and biases’ scholars, who propose that entrepreneurs rely on their intuition when making strategic decisions. The reliance on intuition simplifies decisions through the classification of decisions by heuristics into already existing mental schemes (Kahneman, 2003; Tversky and Kahneman, 1974). Accordingly, people replace the target attribute of their judgment with a related heuristic attribute, which comes more readily to mind (Kahneman, 2003; Tversky and Kahneman, 1974). The result may be an error in judgment or ‘cognitive bias,’ which may lead to misinterpretations and inaccurate judgments (Kahneman, 2003; Kahneman and Frederick, 2002; Bazerman, 1998; Kahneman and Tversky, 1972). Heuristics and biases can be helpful in specific situations that require quick responses and fast decision-making (Kahneman, 2003; Tversky and Kahneman, 1974; 1973). However, heuristics and biases may also result in less comprehensive decisions, since the resulting errors break the laws of probability systematically (Gilovich et al., 2002). Although a lot of answers have been provided within the entrepreneurial and strategic literature streams, some gaps can be identified.

First, the scholars of the heuristics-and-biases approach have based their findings mainly on experimental research in laboratory settings. These experimental studies demonstrated that a significant part of irrational decision-making might be explained by several heuristics and biases (Eder et al., 2011; Hobbs et al., 2010; Kliger and Kudryavtsev, 2010; Biais and Weber, 2009; Schwarz et al., 2002; Marsh and Hau, 2002; Rabin, 2002; Kahneman and Frederick, 2002; Brenner et al., 1996; Bar-Hillel, 1980). Yet there is an ongoing debate among scholars as to whether these experimental designs overemphasize the occurrence of heuristics and biases, since such studies fail to motivate subjects to behave in natural ways (Simon and Houghton, 2003; Schwarz, 1994). In addition, experimental settings may offer unnatural signals that would not occur in real-life situations (Schwarz, 1994). Others, on the other hand, have advocated that laboratory designs provide subjects with a more natural environment, since subjects are forced to make decisions without experience as they are placed in uncertain and unknown circumstances; here, heuristics and biases are more likely to occur (Duhaime and Schwenk, 1985). Nevertheless, the field studies that have been executed have examined only one or two specific heuristics or biases and have focused mostly on particular decision areas such as product introductions, investment decisions or market entry (Sjöberg and Engelberg, 2010; Forbes, 2005; Simon and Houghton, 2003; Camerer and Lovallo, 1999; Moore et al., 1999; Busenitz and Barney, 1997; Jones, 1995; Barnes, 1984). The few field studies that have examined a larger set of heuristics and biases have shown interesting results, though. Simon and Houghton (2000), for example, showed that ‘illusion of control’ and ‘belief in small numbers’ increase entrepreneurs’ decisions to start a venture, and that risk perception mediates this relationship. Furthermore, Keh, Foo, and Lim (2002) showed that ‘illusion of control’ and ‘belief in the law of small numbers’ are related to the positive evaluation of entrepreneurial opportunities. Risk perception was found to mediate this relationship too.

While these field studies, together with the large number of experimental studies, have emphasized the significance of heuristics and biases in decision-making, experimental data give only partial evidence as to whether heuristics and biases play a role in entrepreneurs’ actual willingness to take risks. In order to contribute to the completeness of the heuristics-and-biases literature stream, it is therefore necessary to gain more evidence from the field, whereby the effects are not restricted to specific decision areas. Hence, we will make use of the largest set of cognitive biases examined simultaneously, by adding four more cognitive biases to the list of biases that have so far been examined in the entrepreneurship setting. Moreover, this study will be the first field study to explore whether cognitive biases emanate from the two best-known heuristics: availability and representativeness. Since this has been theorized for almost a decade, it is necessary to provide field evidence for this relationship.

Secondly, as can be seen, the aforementioned field studies have focused on entrepreneurial risk perception, because several researchers have shown that human perceptions of risky situations might explain the degree of risk behavior (Simon et al., 2000; Weber and Hsee, 1998; Sitkin and Weingart, 1995). However, research into people’s tendencies or propensities would provide more value to the scientific field. While risk perception entails one’s observation about a particular probability or risky situation, it gives little information as to whether or not someone has the actual willingness to take a risk (Trahms et al., 2010). A concept that describes the degree of someone’s current tendency to take risks is risk-taking propensity (Trahms et al., 2010). Risk-taking propensity may determine actual risk-taking behavior. Nevertheless, since a number of factors can hinder risky behavior (such as a missed appointment, unforeseen sickness, or other obstacles), risky behavior and risk-taking propensity are not the same. Yet unfortunately, field research focused on entrepreneurial risk-taking propensity, whereby heuristics and biases mediate this relationship, has not been documented. It is, therefore, essential to gain field evidence about the actual willingness of entrepreneurs to take risks, instead of expanding current evidence regarding risk perception. While risk perception teaches us a great deal about entrepreneurs’ observations and interpretations of situations, the propensity to act on these percepts will yield more knowledge about the motivators and predictors of entrepreneurial actions.

Thirdly, the controllable factors of risk-taking propensity are lacking in the entrepreneurial and strategic literature (Simsek, 2007; Sitkin and Pablo, 1992). While the intuitive system is predominantly responsible for the reliance on heuristics and biases, it would be interesting to know whether one could use one’s rational system deliberately in order to correct these mistakes. However, most research has looked at the advantages and disadvantages of the intuitive system, since entrepreneurs are known for their extensive reliance on intuition (Busenitz and Barney, 1997). Yet it would be interesting to gain more knowledge about the role of the rational system as a determinant of, or obstacle to, risk-taking propensity. This study therefore contributes to the entrepreneurial and strategic literature by gaining more evidence about what entrepreneurs can do about their risk-taking propensity. Focusing on rational thinking allows us to investigate a potential method to de-bias strategic decisions.

The main purpose of this study is therefore to explain further the general risk-taking propensity of entrepreneurs in the light of strategic decisions. We propose that the use of heuristics and biases can explain entrepreneurs’ degree of risk-taking propensity. We expect that intuitive thinking will drive these heuristics and biases, while rational thinking will weaken their occurrence. Since most research has been done in experimental settings, this study will be based on field research of 289 American entrepreneurs.

ENTREPRENEURIAL DECISION-MAKING: A LITERATURE REVIEW

In general, previous research on entrepreneurial decision-making can be divided into two key literature streams: the trait approach and the cognition approach (Das and Teng, 1997). The former examines all psychological, personal and demographic differences between entrepreneurs and non-entrepreneurs. Accordingly, age, inertia, outcome history, risk preferences and self-efficacy have been identified as possible sources of entrepreneurial risk-taking propensity (Simsek, 2007; Sitkin and Weingart, 1995; Krueger and Dickson, 1994; Sitkin and Pablo, 1992). Also, some personality traits, such as the need for achievement, tolerance for ambiguity, and need for conformity, have been reported as slight differences between entrepreneurs and non-entrepreneurs (Miner et al., 1989; Begley and Boyd, 1987). Unfortunately, those differences were considered rather small and hardly systematic (Palich and Bagby, 1995).

In response to this approach, others sought explanations in the external causes of entrepreneurial behavior, such as market imperfections and opportunities (Schumpeter, 1934; Schultz, 1975; Kirzner, 1973; Kaish and Gilad, 1991). Unfortunately, these models could not clarify why individuals responded differently to particular opportunities, and it became impossible to put the individual-differences approach aside (Busenitz and Barney, 1997).


Opposed to the search for differences between individuals and their responses to market conditions, the cognitive literature stream tried to find answers in entrepreneurs’ information-processing styles. These scholars focused on the gathering, processing and evaluation of information. The theoretical foundation of this literature stream, which implied that decisions are created within a range of all potential solutions, originated from the early studies of Simon (1978; 1957) and Newell and Simon (1972). Simon considered rational decision-making as the process of reviewing and analyzing every option and scenario. However, due to high search efforts and cognitive limitations, humans cannot always perform this task, and therefore most decisions cannot be considered rational. Simon’s (1957) theory gained more support from supplementary research that showed human limitations in short-term memory and further clarified humans’ cognitive limitations (Newell and Simon, 1972; Simon, 1978).

Further criticism of this rational model came from scholars who advocated that people process information in two different ways: through a rational and an intuitive system (Epstein et al., 1992; Fiske and Taylor, 1991; Bruner, 1986; Sherman and Corty, 1984; Tversky and Kahneman, 1982; Nisbett and Ross, 1980; Epstein, 1973). The rational system can be explained as the analytical, planned, calculated and logical way of reasoning, whereby decision-making is based primarily on evidence and logic. In contrast, the intuitive system can be described as the effortless, practical, automatic, intuitive and holistic mechanism of reasoning, whereby decisions are made based on past experience through the use of mental prototypes or scripts (Epstein et al., 1996). The discovery of systematic deviations between human judgments and normative rules in the mid-1970s created a new research program: the heuristics-and-biases approach. The consequences of the intuitive system can largely be clarified by these scholars. The main focus of this program was to investigate the underlying cognitive mechanisms of the intuitive system (Gilovich et al., 2002; Kahneman et al., 1982; Tversky and Kahneman, 1974). Here, the intuitive system was assumed to be the causal factor behind the use of the representativeness heuristic (Kahneman and Tversky, 1972), the availability heuristic (Tversky and Kahneman, 1973) and the anchoring-and-adjustment heuristic. The first two in particular received much attention and were the center of many research programs. Due to the simplification introduced by the use of heuristics, systematic biases in judgment are formed. The essence of heuristic theory is that heuristics are used in new, uncertain and unpredictable situations in which humans are incapable of applying the normative rules that should have been used. Though the intuitive system produces judgments automatically, these judgments prevail only when the rational system fails to correct them.

The subsequent research programs on heuristics and biases showed promising results. The findings indeed implied that human decision-makers apply heuristics to simplify their (strategic) choices, especially in complex and uncertain situations (Zajac and Bazerman, 1991; Bateman and Zeithaml, 1989; Jackson and Dutton, 1988; Kahneman et al., 1982). Although it is recognized that not every individual is subject to these cognitive errors to the same degree (Busenitz and Barney, 1997; Bazerman and Neale, 1983), the heuristics-and-biases approach appears to be a critical stream in decision-making theory (Tversky and Kahneman, 2003; 1974). Advanced theory gave more information about the use of cognitive schemas and mental categories, wherein heuristics direct specific observations and judgments (Sherman, 1992; Evans, 1989; Baron, 1985). Within this literature stream, Dutton and Jackson (1987) developed so-called ‘categorization theory’ to explain human information-processing as a classification of decisions into mental categories. They also paid attention to the role of heuristics in this process. The heuristic or ‘rule of thumb’ directs the interpreted information into a specific mental category. In this way, difficult choices can be simplified in order to reduce search and time costs. Gooding (1989) built further on this theory by developing decision frames with regard to perceptions of strengths, weaknesses, opportunities and threats. His research concluded that, when the provided information was vague, people interpreted scenarios differently due to their own mental categories (Palich and Bagby, 1995). We will elaborate more on the use of these mental categories, and the role of heuristics and biases in this process, while developing the hypotheses of our second study. But before that, we will briefly explain the most important heuristics and biases for our first study.

STUDY 1: HEURISTICS AND THEIR EFFECTS ON STRATEGIC DECISION-MAKING: COGNITIVE BIASES

The heuristics-and-biases research program has focused mainly on two important heuristics, namely representativeness and availability. These heuristics direct judgments into already existing mental models or schemes. When an individual assesses a specified target attribute of an object, the original target attribute becomes substituted by another element, the heuristic attribute, which originates from a mental category. This heuristic attribute comes more readily to mind and allows quick decision-making based on preceding experiences (Kahneman, 2003; Tversky and Kahneman, 1974). As can be seen, the term ‘heuristic’ can be used in two ways: as a noun and as an adjective. The noun denotes the process whereby information is processed, and the adjective refers to the ‘heuristic’ attribute that is substituted into a judgment (Kahneman, 2003). In this section we will first explain the particular heuristics and how they might result in biases. We propose the following conceptual model, outlined in Figure 1:

FIGURE 1:

The availability and representativeness heuristics and their corresponding biases

Availability heuristic

The availability heuristic implies that people assess the likelihood of an event according to the ease with which similar events can be remembered (Kahneman, 2003; Schwarz and Vaughn, 2002). This makes new, positive, salient, emotionally charged or otherwise easily imagined information easier to remember and thus more important. Subsequently, easily imagined information is given too much weight and not all relevant information is incorporated into the decision-making process. Keller et al. (2006), Kahneman (2003) and Schwarz and Vaughn (2002) have carried out a series of psychological experiments and have indeed concluded that people treat easily recalled events or scenarios as more likely to occur. The errors or poor judgments that result from this process are called cognitive biases. There are several biases that result from the availability heuristic. In this study we investigate the most familiar and most frequently applied ones, namely: overconfidence, illusory correlation and hindsight bias.

Overconfidence: If individuals base their certainty on the ease with which they can recall reasons for confidence, the resulting error named ‘overconfidence’ can occur (Russo and Schoemaker, 1992). According to Zacharakis and Shepherd (2001), overconfidence is the failure to know the limits of one’s knowledge, which leads to the overestimation of one’s certainty about facts. This bias is likely to appear in highly uncertain situations in particular (Simon and Houghton, 2003). The most important problems with overconfidence arise when original assumptions are not questioned or revised, not even when new data become available. The assumptions become treated as facts, which makes the uncertainty regarding actions disappear (Russo and Schoemaker, 1989).

Illusory correlation: Illusory correlation is the error of seeing the relationship one expects between two events or two objects when no such relationship exists (Eder et al., 2011). The availability heuristic is the main cause of the illusory correlation effect: individuals estimate the likelihood of a relationship by the ease with which the relevant mental information comes to mind. In the case of illusory correlation, people tend to judge a relationship based on the frequency with which two events co-occur. When the frequency of these events is high, the strength of the relationship is also judged to be high. In addition, strong relations will also be judged to occur frequently. It is obvious that this process results in a systematic bias (Tversky and Kahneman, 1974).

Hindsight bias: Hindsight bias, or the ‘knew-it-all-along effect,’ is the incorrect belief that one could have predicted a given outcome of a particular event once the outcome is already known (Biais and Weber, 2009). After an event has occurred, it is much easier to sort relevant from irrelevant signs. Consequently, judgments are made with the advantage of feedback about the outcomes (Hoffrage and Hertwig, 2000). The hindsight bias is a natural result of the availability heuristic, because people estimate the likelihood of events by the ease with which information from the past comes to mind.

Representativeness heuristic

The representativeness heuristic, or the ‘stereotype heuristic,’ is the tendency of humans to interpret a sample or event as more illustrative of the entire population than it actually is (Wickman, 2003). Probabilities and outcomes that match stereotypes or other mental models are also judged more likely to occur (Kahneman and Frederick, 2002). Furthermore, prior probabilities are often not incorporated into decisions. Kahneman and Frederick (2002) describe this as making judgments about an object based on the degree to which the object represents, is similar to, or matches the prototypical features of a particular object class. Consequently, this leads to misinterpretation of real outcomes, which can result in several cognitive biases. In this study we investigate the most familiar and most frequently applied ones, namely: base-rate fallacy, law of small numbers, illusion of control and regression fallacy (Tversky and Kahneman, 1974).

Base-rate fallacy: The base-rate fallacy is a result of the representativeness heuristic. It is also called ‘base-rate neglect’: the prior probability (or ‘base rate’) is neglected when the probability of hypothesis A, given evidence B, is judged. People who fall prey to this bias ignore important information in favor of their personal information or mental model (Stolarz-Fantino et al., 2006). This decision-making process can be explained by the way people order and prioritize information by its perceived degree of relevance. When information closely matches the judged object, it is considered to have a higher relevance. This highly relevant information gets a higher weight than the ‘low-relevance’ information. In this case, the prior probabilities are neglected and decision-making is based on coincidence instead of facts (Bar-Hillel, 1980).
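The distortion caused by neglecting a base rate can be made concrete with a short numerical sketch (the prevalence and test-accuracy figures below are hypothetical, chosen only for illustration): even a fairly accurate diagnostic signal implies a low posterior probability once a low prior probability is taken into account.

```python
# Hypothetical numbers: a condition with a 1% base rate and a test
# that is 90% accurate for both positives and negatives.
base_rate = 0.01          # P(condition) - the prior that is often neglected
sensitivity = 0.90        # P(positive | condition)
false_positive = 0.10     # P(positive | no condition)

# Bayes' rule: P(condition | positive test)
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
posterior = base_rate * sensitivity / p_positive

print(round(posterior, 3))  # ~0.083, far below the naive judgment of ~0.9
```

A decision-maker who judges only by how well the evidence ‘represents’ the hypothesis answers close to 0.9; incorporating the base rate drives the correct answer below 0.1.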


Law of small numbers: The so-called ‘law of small numbers’ bias can be attributed to the representativeness heuristic. People use representative information, such as characteristics or observations of a small sample, in order to draw conclusions about an entire population (Rabin, 2002). The actual sample size is ignored, so that the small set of observations is treated as representative of the entire population. Small samples are, however, neither reliable nor valid if one wants to judge a complete population. Although larger samples are seldom available and are costly for entrepreneurs, entrepreneurs should be aware that they are prone to the error of judging from such a small sample (Busenitz and Barney, 1997).
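How unreliable small samples are can be illustrated with a brief simulation (all numbers hypothetical): we draw from a population whose true proportion is 0.5 and compare how widely estimates scatter for small versus large samples.

```python
import random

random.seed(42)
P_TRUE = 0.5  # true population proportion (hypothetical)

def sample_estimate(n):
    """Estimate the population proportion from a random sample of size n."""
    return sum(random.random() < P_TRUE for _ in range(n)) / n

# Draw many samples of each size and look at how far the estimates scatter.
small = [sample_estimate(5) for _ in range(1000)]    # n = 5
large = [sample_estimate(500) for _ in range(1000)]  # n = 500

def spread(estimates):
    return max(estimates) - min(estimates)

# Small samples routinely suggest extreme proportions (down to 0.0 or up
# to 1.0), while large samples cluster tightly around the true value.
print(spread(small), spread(large))
```

A decision-maker ‘believing in the law of small numbers’ would read one of the extreme small-sample estimates as if it characterized the whole population.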

Illusion of control: In situations where people are dependent on chance and luck, they are likely to overemphasize the degree to which they can influence the (uncontrollable) situation (Redhead, 2010). According to Hobbs et al. (2010), there are two reasons for this bias. The first is that the motivation and ambition of highly driven people give them a feeling of control. Secondly, it is sometimes hard to distinguish good outcomes due to chance from good outcomes due to competence (Keh et al., 2002).

Regression fallacy: The regression fallacy, or ‘regression towards the mean,’ can be described as the incorrect assignment of causes to fluctuations for which no causes can be blamed. According to Marsh and Hau (2002), it is an error in relating predictions to outcomes. People like to take action when deviations are at their peak; when the variables then regress towards the mean, they believe that it was their action that changed the variable. It is assumed to be the most common fallacy in the statistical analysis of economic data (Friedman, 1992). This bias is a logical result of the representativeness heuristic, because the observed variable is given more weight than it actually should have.
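Regression towards the mean can be illustrated with a short simulation (all numbers hypothetical): performance is modeled as stable skill plus fresh random luck each period, and the period-one top performers fall back towards the average in period two even though nothing about them has changed.

```python
import random

random.seed(1)
N = 10_000

# Performance = stable skill + fresh random luck in each period.
skill = [random.gauss(0, 1) for _ in range(N)]
period1 = [s + random.gauss(0, 1) for s in skill]
period2 = [s + random.gauss(0, 1) for s in skill]

# Select the top decile of period-1 performers ...
top = sorted(range(N), key=lambda i: period1[i], reverse=True)[: N // 10]

# ... and compare their average performance across the two periods.
avg1 = sum(period1[i] for i in top) / len(top)
avg2 = sum(period2[i] for i in top) / len(top)

# The same people score noticeably lower in period 2: the luck component
# does not repeat, so their scores regress towards the population mean of 0.
print(round(avg1, 2), round(avg2, 2))
```

An observer committing the regression fallacy would attribute the period-two decline to some intervention (criticism, a policy change, their own action), when it is a purely statistical consequence of having selected on an extreme.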

STUDY 2: THE INFLUENCE OF INTUITIVE AND RATIONAL THINKING ON RISK-TAKING PROPENSITY

Based on the heuristics-and-biases theories described above, we present the conceptual framework for our second study in Figure 2. Our model basically proposes that heuristics and biases mediate the relationship between the intuitive and rational thinking systems and risk-taking propensity. We will now elaborate on these relationships.

The rational and intuitive thinking systems and risk-taking propensity

Nearly every researcher in the entrepreneurial decision-making field supports the theory that humans have two different modes of thinking, namely the experiential (intuitive) system and the rational system (Evans, 2008; Kahneman, 2003; Smith and DeCoster, 2000; Stanovich and West, 2000; Epstein and Pacini, 1999; Sloman, 1996). These systems explain how information enters memory and is processed, interpreted and saved for later use (Baron, 2008). The ‘default’ mode of human thinking is the intuitive system. Opposed to the more rational system, this system operates effortlessly, practically, automatically, intuitively and holistically.

However, in situations where problems cannot be solved on the basis of generalizations from past experience, this system does not function well. The rational system is then used, which is an evolutionarily newer system (Lindeman, 2010). This system works more slowly, more thoughtfully and systematically, is able to analyze difficult problems and abstract concepts and is related to intelligence (Lindeman, 2010).

Generally, these systems interact quite well. When they do conflict, this is experienced as an internal conflict between feelings and thoughts (Denes-Raj and Epstein, 1994). It is assumed that individuals differ in the degree of dominance of the intuitive or rational system. Individual preferences, situational factors, degree of emotional involvement and relevant experience are the factors responsible for this balance (Epstein et al., 1996).


FIGURE 2:

The influence of the intuitive and rational systems, heuristics and biases on risk-taking propensity

Although each individual holds a different balance, entrepreneurs’ balance is assumed to tend towards the intuitive system. This is mainly due to the environment in which entrepreneurs are situated and the kind of strategic decisions they have to take. Entrepreneurs often find themselves in new, highly uncertain circumstances. They have to deal with high time pressure, whereby they have to make decisions on the basis of unavailable or incomplete information (Oaksford et al., 1996; Wyer and Srull, 1994; Fiske and Taylor, 1991). It is suggested that entrepreneurs therefore use their intuitive system more frequently (Busenitz and Barney, 1997). The intuitive system is expert in generalizing from concrete schemas, scripts or experiences, which are unconsciously derived from past experiences. Through this system, situations and problems are simplified in an effective and efficient manner. It is an evolutionarily older system than the rational one, and it is generally sufficient for solving problems based on generalizations of past experience via categorization (Epstein et al., 1992). Categorization is one of the most basic cognitive functions of the human mind (Palich and Bagby, 1995; Mount and Thompson, 1987). When individuals interpret objects or situations, they make a fast evaluation of the specific attributes of their observation. These attributes allow decision-makers to compare the observation quickly with their own frames of reference or ‘mental categories.’ It is theorized that heuristics are applied in this process to direct the information into a specific category. When the observation is classified, the decision can be made according to this mental category (Palich and Bagby, 1995). Which particular heuristic will be applied depends on the type of attributes observed; this could, for example, be the availability or the representativeness heuristic.
When this process is executed, these information-processing heuristics reduce the time and effort required for decision-making. Consequently, information is assessed quickly and effectively. However, this process might lead to errors in decision-making, since the schemes are not always applied in the way they should be. We therefore hypothesize the following:

Hypothesis 1a: The more the intuitive system is used, the higher the level of usage of the biases ‘overconfidence’, ‘illusory correlation’ and ‘hindsight bias’ emanating from the availability heuristic, and the biases ‘base-rate fallacy’, ‘law of small numbers’, ‘illusion of control’ and ‘regression fallacy’ emanating from the representativeness heuristic.

In contrast to the intuitive system, the rational system works the other way around. Almost all researchers who describe this information process describe more or less the same steps. It is quite a systematic method, which follows sequential steps in a specific order. Most authors differ in the order or number of steps, but generally these are: 1) problem formulation; 2) formulation of criteria; 3) search for information; 4) listing of scenarios; 5) listing of solutions; 6) evaluation of alternatives; and finally 7) choice of solution (Bell, Raiffa, and Tversky, 1989). Within each stage, sub-decisions are made with the aim of proceeding to the next stage, and in each stage a sub-analysis of the past is made by looking at the evidence available (Harvey, 1998). Since this process consistently analyzes information based on evidence and logic, we hypothesize the following:

Hypothesis 1b: The more the rational system is used, the lower the level of usage of the biases 'overconfidence', 'illusory correlation' and 'hindsight bias' emanating from the availability heuristic and the biases 'base-rate fallacy', 'law of small numbers', 'illusion of control' and 'regression fallacy' emanating from the representativeness heuristic.

Now let us take a closer look at the process of decision-making and the important role of heuristics. As we recall, heuristics direct judgments into already existing mental models or schemas. If one assesses a specified target attribute of an object, the original target attribute becomes substituted by another element – the heuristic attribute – which originates from a mental category. This heuristic attribute comes more readily to mind and allows quick decision-making based on preceding experiences (Kahneman, 2003; Tversky and Kahneman, 1974). This process is called attribute substitution. Basically, it describes how people substitute easier questions for difficult ones (Kahneman and Frederick, 2002). The core principle of attribute substitution is that one gives an answer to a question that has not been asked. This does not have to result in serious errors, but obviously it does when the target and heuristic attributes differ significantly (Kahneman, 2003). For example, the question of which team will win a soccer game may be replaced by the question 'which team is the overall strongest?', and the answer is then given according to this second question. In this case the substitution is not particularly harmful, yet one can see that this process leads to errors if the target and heuristic attributes diverge too much. Not every judgment of a target attribute is replaced by a heuristic attribute. When simple questions, such as 'how old are you' (known fact) or 'how much do you like this ice-cream' (current experience), are asked, the answers are retrieved easily from stored memory. However, when target attributes are not easily retrieved, the brain will search for other heuristic attributes that are somehow related to the target attribute. Three conditions have to be fulfilled in order for a target attribute to be substituted by a heuristic attribute:

(1) the target attribute is not (easily) accessible;

(2) a related and associative-related substitute heuristic attribute is highly accessible;

(3) the critical operations of the rational system do not reject the process of attribute substitution (Kahneman, 2003).

Availability and risk-taking propensity

As described, the availability heuristic is very useful for the assessment of frequencies or probabilities (Tversky and Kahneman, 1974). Unfortunately, availability is not influenced only by frequency and probability. The essence of availability lies in determining the likelihood of events by the ease with which a substitute heuristic attribute can be recalled (Schwarz and Vaughn, 2002). Several factors are responsible for the ease with which a more available heuristic attribute replaces the normal target attribute. This degree of accessibility depends on the characteristics of the target attribute, scene or situation observed (Kahneman, 2003). Important drivers of ease of recall are recency, salience, familiarity and imaginability of objects or events. Quickly and easily recalled events are also judged more likely to happen than those that are not (Kliger and Kudryavtsev, 2010). This makes easily imagined information more dominant in the decision-making process, whereby other (important) information with a lower speed of recall becomes neglected. A well-known example is the overestimation of the likelihood of setting up a successful business after seeing many examples in a TV documentary (Tversky and Kahneman, 1974).


This neglect could lead to (unconscious) risky decisions, since frequencies are judged by availability rather than by their actual probabilities (Keller et al., 2006; Kahneman, 2003; Tversky and Kahneman, 1973). Consequently, the actual risk involved in situations is underestimated. As we recall, the availability heuristic may result in several specific judgmental errors, such as overconfidence, illusory correlation and the hindsight bias. We will now discuss the specific consequences of these biases and their effects on risk-taking propensity.

Overconfidence: If one’s certainty regarding estimates exceeds the actual accuracy of those predictions, this may result in unnecessarily risky decisions (Simon and Houghton, 2003). In order to make predictions about the risk of specific actions, one makes use of previous signals. Overconfidence arises if one overestimates the predictive reliability of such a signal. In standard decision-making circumstances, such a signal has a relatively high predictive validity due to the similarity of the situation. However, in uncertain situations, these signals may be a more fragile predictor of success (Soll, 1996). In order to make a prediction, though, one must rely on the signals that are available. In general, signals about failures and disappointments are more difficult to find. Positive signals associated with good achievements will likely be more available, even if they are not representative of the situation (Simon and Houghton, 2003). A decision will thus be based on a few salient examples of past situations associated with positive outcomes. As a result, the salient positive signals dominate one’s decision, which results in a lower perception of the risks involved (Simon and Houghton, 2003). This makes one more prone to taking risks (Sitkin and Weingart, 1995). Overconfidence, therefore, seems to be a specific source of the underestimation of the riskiness associated with proposed actions (Simon and Houghton, 2003).

Hindsight bias: The inability to remember one’s prior expectations correctly after observing new information holds several risks. Although the probabilities of the possible outcomes were equal at some point in time, the decision-maker fails to remember his or her initial expectations once the outcomes become known. New knowledge is projected onto the past, without awareness that information about the outcome has shaped this revised opinion. Consequently, there is a considerable discrepancy between the initial expectation and the recollected expectation, whereby the latter will be closer to the actual outcome (Biais and Weber, 2008). Decisions taken according to these opinions might hold several risks. First, the decision-maker is unable to make an accurate assessment of his or her own actions. Second, this bias prevents the decision-maker from learning from past experiences. Since decision-makers are unable to be surprised by outcomes, but instead adjust and correct their recollections backwards, they systematically fail to recognize that their initial risk perception was wrong. Hence, by sticking to this incorrect risk perception, they do not stop or proceed with certain actions at the right time (Biais and Weber, 2008). For example, they might fail to cut losses when they should do so. As can be seen, this incorrect perception might lead to unnecessary risks. Several researchers have found that, due to this stubbornness in correcting one’s risk perception, risky actions are executed nonetheless (Nutt, 1993; Staw, 1991; Lieberman and Montgomery, 1988).

Illusory correlation: Perceiving a relationship between two events or objects when no such relationship exists might hold several risks. The simultaneous appearance of two statistically infrequent events attracts greater attention. Consequently, the co-occurrence of these two events will be overestimated, since it is highly available and easier to recall (Eder et al., 2011). Distinctive information is thus an important cause of the illusory correlation effect. The more attention a distinctive combination gets, the stronger the misperception that a relationship exists (Sherman et al., 2009). The essence of illusory correlation thus lies in the fact that individuals are more sensitive to differences than to ratios. Existing stereotypes mediate new information, resulting in a wrong perception of actual correlations. This makes one prone to see relationships that are not there and blind to the actual risks involved (Sherman et al., 2009). One overestimates the strength of a relationship and becomes insensitive to disconfirming evidence. In addition, it keeps one from exploring other possible information sources (Hogarth and Makridakis, 1981). If one assesses two events as more likely to co-occur than their actual chance, the actual risks involved are underestimated and spurious relationships are overestimated. As a result, one is more prone to take risks; one’s risk-taking propensity thereby increases.

As described above, the availability heuristic produces decisions based on ignoring information. Information that is more easily retrieved dominates the decision-making process, whereby other important information with a lower speed of recall is neglected. Due to the cognitive errors resulting from the availability heuristic, we propose that a higher risk-taking propensity could be the result. Therefore, we hypothesize the following:

Hypothesis 2a: The level of the biases ‘overconfidence,’ ‘illusory correlation’ and ‘hindsight bias’ emanating from the availability heuristic will be positively related with the level of entrepreneurial risk-taking propensity.

Representativeness and risk-taking propensity

Let us now take a closer look at the representativeness heuristic and its relationship with risk-taking propensity. The representativeness heuristic implies that the probability of A belonging to B is assessed by the degree to which A is representative of, or similar to, the heuristic attribute of B. As one can see, this process might lead to serious errors, because similarity, or representativeness, is not influenced by the factors that should affect judgments of risk (Tversky and Kahneman, 1971). Reliance on the representativeness heuristic could lead to error, since the decision is based partly on the wrong information. According to Wickham (2003) there are four situations in which individuals use the representativeness heuristic:

(1) a particular sample is judged according to an entire range of populations;

(2) a specific population is judged according to one particular sample;

(3) probability distributions are judged according to other distribution processes; and

(4) the probability of a sequence of events is judged according to the process of their generation.

In each of these four situations, one’s intuition is used to judge the representativeness of the heuristic attribute or process for the target attribute (Wickham, 2003). In other words, an event will be judged more likely to happen if it matches well with the representative heuristic attribute.

Several factors are responsible for when a representative heuristic attribute replaces the normal target attribute. These are physical characteristics, such as size, shape and distance, or more abstract characteristics, such as similarity, causal propensity, surprisingness, affective links, and mood (Kahneman and Frederick, 2002). A well-known example is judging the distance of an item partly by the extent of its visibility. As can be seen, distance is replaced by visibility, and when these two attributes diverge, an error occurs. As we recall, the representativeness heuristic may result in several specific judgmental errors, such as 'base-rate fallacy', 'law of small numbers', 'illusion of control' and 'regression fallacy'. We will now discuss the specific consequences of these biases and their effects on risk-taking propensity.

Base-rate fallacy: If the prior probability or ‘base rate’ is neglected when the probability of hypothesis A given evidence B is judged, important statistical information about A is ignored in favor of the specific case information contained in B (Stolarz-Fantino et al., 2006). This is caused by the process of ordering information by its perceived degree of relevance: information of high statistical relevance (the base rate) is dominated by case-specific information that merely seems more relevant. This perceived relevance is attained when specific information or causes are provided about a smaller set of attributes that belong to the entire sample. The essence of this phenomenon lies in the fact that base rates seem irrelevant at the time the judgment is made. The base-rate fallacy thus shows how specific or related information may cause a failure in predicting probabilities, since existing information of statistically higher predictive validity is ignored. For entrepreneurial decision-making this implies that one may ignore risks, since the actual statistics of probabilities are ignored. Consequently, we expect that one’s risk-taking propensity will be higher (Bar-Hillel, 1980).
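The normative benchmark that the base-rate fallacy violates is Bayes’ rule. A minimal sketch, using our high-tech-firm case’s 60% base rate but with hypothetical cue probabilities (the 30% and 80% figures are illustrative, not taken from the questionnaire):

```python
def posterior(prior, p_cue_given_h, p_cue_given_not_h):
    """Bayes' rule: P(H | cue), combining the base rate with the cue's diagnosticity."""
    num = p_cue_given_h * prior
    return num / (num + p_cue_given_not_h * (1 - prior))

# Base rate: 60% of high-tech firms fail. Suppose a reassuring cue is seen
# in 30% of failing firms and 80% of surviving firms (hypothetical numbers).
p_fail = posterior(0.6, 0.3, 0.8)
print(round(p_fail, 2))  # 0.36
```

A decision-maker who attends only to the reassuring cue might judge failure to be unlikely; carrying the 60% base rate through Bayes’ rule still leaves a substantial 36% failure probability.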

Law of small numbers: If one incorrectly overstresses the similarity of a small sample to its parent population, a biased decision may result (Tversky and Kahneman, 1971). The probabilities derived from a small sample are used to make inferences about the probabilities in the entire parent population, while the judged likelihood is treated as independent of the sample size. Although small samples are more likely to deviate from the statistics of the parent population, this concept is often not part of people’s intuitive judgments (Tversky and Kahneman, 1971). Intuitive judgments are dominated by the sample proportion, and the size of the sample is basically ignored. Since sample size plays a significant role in determining the actual probabilities, one underestimates the impact of evidence. Moreover, the reliability of results from small samples is overestimated (Rabin, 2002). These misperceptions could create a higher risk-taking propensity, since important facts are ignored and small pieces of evidence are overweighted. Given that entrepreneurs often possess only small samples, it is very difficult for them to recognize and predict the actual risk involved.
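The statistical fact that this bias ignores is that the sampling variability of a proportion shrinks with sample size, roughly as sqrt(p(1-p)/n). A small simulation sketch with hypothetical parameters (p = 0.5, sample sizes 5 vs. 500):

```python
import random

def proportion_sd(n, trials=4_000, p=0.5, seed=42):
    """Standard deviation of the sample success proportion across
    repeated random samples of size n."""
    rng = random.Random(seed)
    props = [sum(rng.random() < p for _ in range(n)) / n for _ in range(trials)]
    mean = sum(props) / trials
    return (sum((x - mean) ** 2 for x in props) / trials) ** 0.5

small, large = proportion_sd(5), proportion_sd(500)
# small is roughly 0.22 and large roughly 0.02: a 5-observation
# "market test" is about ten times noisier than a 500-observation one,
# yet the law of small numbers treats both proportions as equally reliable.
```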

Illusion of control: One’s belief in the ability to influence and control outcomes that are largely uncontrollable and determined by chance can have significant consequences (Redhead, 2010; Thompson et al., 1998; Langer, 1975). This bias mostly occurs in situations where skill and chance are easily mixed up. A chance situation with elements typical of skill-based situations (such as familiarity, choice, involvement, or competition) gives people the idea that they can influence the outcome. Subsequently, they become more confident about their actions, although they cannot influence the natural probabilities of the situation (Thompson et al., 1998). This increases one’s optimism, self-enhancement and action orientation (Fast et al., 2009). Due to one’s sense of control, optimism regarding risks increases, which may lead to a higher willingness to take risks. Menkhoff et al. (2006) showed that a heightened sense of control increases one’s optimism in viewing risks and one’s propensity to engage in risky behavior.

Regression fallacy: The incorrect attribution of causes to statistical artifacts, where no causes are accountable, comes at a price. The error in this process lies in ignoring the natural fluctuations of variables. Variables such as stock prices or temperatures generally regress towards a specific mean. If one takes action when a variable is at its peak, and believes that it is due to this action that the variable regressed towards the mean, the wrong estimation leads to uncontrolled risks. It is an error in risk assessment between the predicted variable and the outcome variable (Marsch and Hau, 2002). Since decisions are taken at the wrong times, and no attention is paid to the natural cycle of certain developments, one is more prone to take risks. Due to the overestimation of one’s own actions and the ignorance of natural fluctuations, one’s risk-taking propensity thus increases.
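Regression to the mean needs no causal mechanism at all, which a short simulation makes concrete (the firm counts, mean and spread are illustrative parameters, not data from this study): generate two statistically independent years of sales around the same mean and look only at the cases whose first year was extreme.

```python
import random

rng = random.Random(0)
N, MEAN, SD = 10_000, 100.0, 10.0
year1 = [rng.gauss(MEAN, SD) for _ in range(N)]
year2 = [rng.gauss(MEAN, SD) for _ in range(N)]  # independent of year1 by construction

# Select the cases whose year-1 value was in the top decile.
top = sorted(range(N), key=year1.__getitem__)[-N // 10:]
avg1 = sum(year1[i] for i in top) / len(top)
avg2 = sum(year2[i] for i in top) / len(top)
# avg1 is well above 100 by construction of the selection;
# avg2 is back near 100 although nothing intervened between the years.
```

Any “measure” taken between the two years would look spuriously effective for these selected cases, which is exactly the causal misattribution the regression fallacy describes.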

As described above, the representativeness heuristic produces decisions based on the wrong information: information that seems more representative dominates the decision-making process, while the information that should be taken into account is neglected. This reliance on the representativeness heuristic could lead to serious errors, since the decision is based partly on wrong information. Due to the cognitive errors resulting from the representativeness heuristic, we propose that a higher risk-taking propensity could be the result. Therefore, we hypothesize the following:

Hypothesis 2b: The level of the biases ‘base-rate fallacy’, ‘law of small numbers’, ‘illusion of control’ and ‘regression fallacy’ emanating from the representativeness heuristic will be positively related with the level of entrepreneurial risk-taking propensity.

METHODS

Sample

In order to identify entrepreneurs for this study, several sources were used: (1) David Silver’s list of the 100 greatest entrepreneurs of the last 25 years (Silver, 1985); (2) Ernst & Young’s list of national winners of the Entrepreneurs of the Year awards; and (3) VentureOne’s list of 6,359 founders of venture-backed firms. For the survey, 1,500 entrepreneurs with complete contact information were randomly selected. The total design method for survey research of Dillman (1978) was used in designing the research. The first mail package to these 1,500 entrepreneurs consisted of a personalized letter, a project outline, the survey, a priority postage-paid envelope with an individually-typed return address label, and a list of research reports available to participants. It was sent by priority mail. Unfortunately, 324 were returned due to undeliverable addresses or names, reducing the sample to 1,176 entrepreneurs. After this first package, four follow-up mailings were sent: a follow-up letter one week later; a second package with the same content as the first correspondence for those who did not respond; and two additional follow-up letters. In the end, 289 completed questionnaires were received from the entrepreneurs, representing a response rate of 24.6% (289/1,176).

Measurements

For the survey, existing cases and scales from the literature were used where possible. The measurements were strengthened by developing additional entrepreneurial cases based on the existing ones. Since the survey had limitations in terms of length, a subset of items from each scale had to be selected; those with the highest factor loadings were chosen. When no scales were available, items were derived from the construct definitions and from examples and ideas in the existing literature. Because of these changes, the survey had to be piloted, which was done through in-depth interviews with 12 entrepreneurs.

In order to refine the expression and language of the instructions, cases and items, several actions were taken. First, the entrepreneurs were asked to discuss the background of their ventures: how they started, how they discovered the opportunity and how the business idea developed over time. This was done in order to get a better understanding of their answers to the questionnaires. In addition, the protocol method was used to ask entrepreneurs to ‘think aloud’ as they filled out the English questionnaire (Hunt, Sparkman, Jr., and Wilcox, 1982). The interviews were recorded and transcribed by two researchers. Consequently, a better questionnaire was developed because words, instructions and items were adapted. Appendix A2 provides the construct reliabilities, the response format employed in the questionnaire, and the details of the measurement items used in this study.

Dependent variable: Risk-taking propensity

Risk-taking propensity is the dependent variable in our structural equation model. We measure risk-taking propensity using the ‘certainty equivalent approach,’ which intends to express the individual form of the entrepreneur’s utility curve (Mullins, Forlani and Walker, 1999; Schneider and Lopes, 1986; Kahneman and Tversky, 1979). This approach gives entrepreneurs two options in several scenarios: a certain choice or a risky one with the same expected value. Risk-seeking entrepreneurs prefer the uncertain outcome, whereas risk-averse entrepreneurs prefer the certain one. The measure holds five scenarios, each measuring risky actions at a different level of the expected value. This measure is obtained from Mullins et al. (1999) and Schneider and Lopes (1986). The scores between 0 and 5 represent varying degrees of risk-taking propensity. This measure was complemented by two other risk-taking propensity measures, also based on the certainty equivalent approach. In these, entrepreneurs have to choose between nine risky options and one certain option; all the options again have the same expected value. The risky options are classified according to their riskiness, determined by the percentage of the total sum that entrepreneurs can win or lose. As a result, a very precise estimate of a point on the entrepreneur’s utility curve can be obtained. This point is enough to estimate the form of the entrepreneur’s utility curve using the approximation by the exponential function (e.g., Walls and Dyer, 1996). The Cronbach α for all three measures of risk-taking propensity is 0.87.
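Under an exponential utility approximation, u(x) = 1 − exp(−x/R), a single certainty equivalent is indeed enough to pin down the risk-tolerance parameter R. The sketch below shows one way to invert an observed certainty equivalent by bisection; the dollar amounts and the solver are illustrative, not the scoring procedure actually used in this study:

```python
import math

def ce_exponential(a, b, R):
    """Certainty equivalent of a 50/50 gamble over amounts a and b
    under exponential utility u(x) = 1 - exp(-x/R)."""
    eu = 0.5 * (1 - math.exp(-a / R)) + 0.5 * (1 - math.exp(-b / R))
    return -R * math.log(1 - eu)

def risk_tolerance(a, b, ce, lo=1.0, hi=1e6):
    """Bisect for the R whose implied certainty equivalent matches the observed one.
    The implied CE increases monotonically in R (more tolerance, higher CE)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if ce_exponential(a, b, mid) < ce:
            lo = mid  # implied CE too low: R must be larger
        else:
            hi = mid
    return (lo + hi) / 2

# An entrepreneur indifferent between $400 for sure and a 50/50 gamble
# over $0 / $1,000 (expected value $500) reveals a finite risk tolerance R:
R = risk_tolerance(0, 1000, 400)
```

A certainty equivalent below the $500 expected value maps to a finite R (risk aversion); the closer the stated certainty equivalent gets to $500, the larger the recovered R.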

Mediating variables: Cognitive biases

Hindsight bias: Hindsight bias occurs when entrepreneurs remember their predictions about a former event as more accurate than they actually were. The survey began by asking entrepreneurs to answer a number of questions and to rate the probability that their answers to these questions were correct. The questions were difficult general knowledge questions with two response options: correct and incorrect. At the end of the survey, we gave respondents the correct answers to the knowledge questions and asked them to remember their estimates of correctness in hindsight (without looking at the first page of the survey). The hindsight bias manifests itself when respondents originally gave the incorrect answer and lowered their estimate of correctness in hindsight. Thus, the larger the difference between the original estimate and the estimate in hindsight, the bigger the bias. We follow Bukszar and Connolly (1988) and Slovic and Fischhoff (1977) with this procedure. According to Campbell and Tesser (1983), there should be at least 30 minutes between an original and a hindsight judgment. It took the participants about 40-45 minutes to fill out our questionnaire, and therefore this potential memory effect is not a problem in our study. We used a three-item scale for hindsight bias (α=0.75).
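The actual coding algorithm is given in Appendix A2; as an illustrative sketch of the logic described here (our own hypothetical scoring, not the thesis’s code), the per-item bias is the confidence drop on incorrectly answered items:

```python
def hindsight_score(original_conf, recalled_conf, answered_correctly):
    """Per-item hindsight bias: for incorrectly answered items, the drop in
    stated confidence when the estimate is recalled after learning the answer."""
    if answered_correctly:
        return 0.0
    return max(0.0, original_conf - recalled_conf)

# (original estimate, hindsight estimate, answered correctly?)
items = [(0.9, 0.6, False), (0.8, 0.8, True), (0.7, 0.5, False)]
bias = sum(hindsight_score(*it) for it in items) / len(items)
```

A respondent who was 90% sure, answered wrongly, and later “remembers” having been only 60% sure contributes a bias of 0.3; correctly answered items contribute nothing.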

Illusory correlation: Illusory correlation takes place when entrepreneurs see a co-occurrence between two events when no such co-occurrence exists. The items are based on the ideas of Tversky and Kahneman (1974). Common myths about co-occurrences are used, for example between a cat being spayed or neutered and its weight, and between university licenses and the larger size of a company. The three-item scale has a Cronbach α of 0.67.

Overconfidence: The aforementioned general knowledge questions and the estimates of the probability that the answers are correct were also used to measure overconfidence. The procedure of Forbes (2005) and Brenner et al. (1996) was used to develop a three-item scale (α=0.82) for overconfidence, although other knowledge questions were also used, because the questions from the literature are somewhat outdated. The more certain respondents are that they gave the correct answer when in fact they are wrong, the higher their level of overconfidence.
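In calibration terms, this amounts to comparing mean stated confidence with the actual hit rate. A hypothetical sketch of that comparison (not the Appendix A2 algorithm itself):

```python
def overconfidence(confidences, correct):
    """Mean stated probability-correct minus actual proportion correct.
    Positive values indicate overconfidence; zero is perfect calibration."""
    return sum(confidences) / len(confidences) - sum(correct) / len(correct)

# Four answers: mean stated confidence 0.85, but only 2 of 4 correct.
score = overconfidence([0.9, 0.8, 0.95, 0.75], [True, False, True, False])
# score is 0.35: the respondent claims far more accuracy than achieved
```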

Base-rate fallacy: Base-rate fallacy occurs when irrelevant case information is used to make judgments, in disregard of available statistical information. Two cases – both based on Lynch and Ofir (1989) – were used to measure the base-rate fallacy (α=0.73). In the first case, respondents have to estimate the probability that a given high-tech firm will fail within the first five years. The case description began with statistical information (the base rate) about high-tech firms’ failures (60%). Irrelevant information was also given about the founder’s hobbies and social life. When respondents deviate in their predictions from 60%, they exhibit the base-rate fallacy. The second case is about purchasing a five-year-old car. Similar to the first case, we started by providing statistical information: ‘Consumer Reports’ suggests that there is a 50% probability that such a car will require major repairs in the sixth year. Irrelevant case information concerning the color and the interior of the car was also given. Respondents were asked to predict the likelihood that the car would require major repairs during the next year; when they deviate from 50% they show a base-rate fallacy. The more they deviate, the higher the bias.

Illusion of control: Illusion of control means that people perceive objectively uncontrollable events as being within their control. This construct was measured by a five-item scale (α=0.88) based on Simon et al., (2000) and Zuckerman et al., (1996). Items are concerned with, for instance, the accuracy of predictions of future market developments and the perception that everything that happens is a result of the respondent’s own doing. The more that respondents think they can predict the market accurately, or that what happens is always a result of their own doing, the higher level of illusion of control they exhibit.

Law of small numbers: The law of small numbers bias arises when people make judgments on the basis of a (small) sample while not taking into account the actual size of this sample. The three-item scale (α=0.86) is based on Simon et al. (2000) and Mohan-Neill (1995). Items concern basing strategic decisions on the opinions of one’s closest friends and colleagues or on only one source of information, or not basing such decisions on large-scale market research. The higher respondents score on these items, the greater the law of small numbers bias they exhibit.

Regression fallacy: Regression fallacy concerns an erroneous causal interpretation of regression to the mean. In such situations, there are always two related measurements: one that is extreme, and therefore attracts attention; and another that is closer to the mean. Regression fallacy was measured by a case based on an example of Kahneman and Tversky (1973). The case describes a firm in a stable economic environment, in which sales are not likely to grow naturally. The firm’s sales increased by 15% two years ago and decreased by 5% one year ago, bringing sales closer to the mean. In order to grow further, the firm increased its advertising budget last year by 25%. Despite that, its sales decreased by 5%, due to regression to the mean; without the advertising campaign, the firm’s sales could have decreased by more than 5%. When respondents conclude that the advertising was not effective, they give a causal interpretation to the sales decrease in the last year and, therefore, exhibit the regression fallacy.

The coding algorithm that is used to recode the hindsight bias, overconfidence, base-rate fallacy and regression fallacy measurements for further analysis is described in Appendix A2.

Independent variables

The rational system: The rational system was measured (α=0.94) by the ‘need for cognition’ scale (Epstein et al., 1996; Pacini and Epstein, 1999). The rational system scale taps the extent to which entrepreneurs are good at, and rely on, in-depth, hard, and logical thinking.

The intuitive system: The intuitive system (α=0.98) is measured by the ‘faith in intuition’ scale (Epstein et al., 1996; Pacini and Epstein, 1999). The intuitive system scale taps the extent to which entrepreneurs rely on their gut feelings and instincts, and believe in their hunches. The items for both systems were selected from the latest version of the ‘need for cognition’ and ‘faith in intuition’ instruments on the basis of the highest factor loadings and their correspondence to the conceptual domains listed in the definitions of the rational and intuitive systems (Pacini and Epstein, 1999).


ANALYSIS

Common method bias

A number of procedural remedies were used to reduce potential common method bias (Podsakoff et al., 2003). In this study it was not possible to make use of archival data or to have different respondents. A second-best option was therefore used by varying the types of measures. Besides Likert scales, cases were used for some biases, as were more objective utility curve-based measures for risk-taking propensity. Likert-scale items were shuffled as well. Finally, common method bias was tested statistically. The second-smallest positive correlation among the observed variables provides a conservative estimate of the common method variance (Malhotra, Kim and Patil, 2006). In this dataset it was the correlation between the age of the entrepreneur and the number of times the entrepreneur was involved as an early-stage employee (r1=0.01, p=0.75). Harman's single-factor test (Podsakoff et al., 2003) was performed as another statistical check. All constructs were forced into a one-factor model in a confirmatory factor analysis (CFA). The resulting χ2/df of 16.8 indicates a very bad fit, significantly worse than the fit of the measurement model (Table 2). It can therefore be concluded that there is no significant common method bias in this database. The descriptive statistics are presented in Table 1. Cronbach’s alphas ranged from 0.73 to 0.98, except for one, which suggests good reliabilities (Nunnally, 1978). Note that overconfidence and hindsight bias have a high correlation, since these constructs were intentionally linked in their measurements2.
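The reliabilities reported throughout are Cronbach’s alphas, which follow the standard formula α = k/(k−1) · (1 − Σ σᵢ² / σ_total²); a self-contained sketch of that computation:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. items: one list of scores per scale item,
    all covering the same respondents in the same order."""
    k, n = len(items), len(items[0])

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items that move in lockstep give the maximum reliability of 1.0:
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

The respondent data here are toy values; with real item scores, values in the 0.7-0.9 range like those reported in this study indicate acceptable to good internal consistency.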

First-order confirmatory factor analysis

In order to evaluate the construct validity of the suggested measurement model, a first-order confirmatory factor analysis was executed. In this model all independent and mediating variables were tested with a metric measurement scale (Hair et al., 1998). For this test, the maximum likelihood estimation method in LISREL 8.80 was used (Jöreskog and Sörbom, 2001). Each item that loaded on multiple constructs or had low item-to-construct loadings was deleted. The results of the CFA measurement model showed that the model fit indices are statistically satisfactory at a highly significant level (P<0.001): χ2 = 744.31, χ2/df = 1.88, GFI = 0.87, CFI = 0.97, NFI = 0.94, NNFI = 0.97, RMSEA = 0.050 and RMR = 0.054 (Hair et al., 1998). The standardized loading of each item was greater than 0.5, which indicates that the scales show decent convergent validity (Fornell and Larcker, 1981). Moreover, no inter-factor correlation has a confidence interval containing the value of one (p<0.01) and all item-level correlations between constructs are insignificant. It can thus be concluded that the scales possess discriminant validity (Bagozzi, Yi and Phillips, 1991). The results are presented in Table 2.

1 Kendall's τ correlation coefficient.

2 Neither the VIF-based nor the condition index-based tests indicated any substantial multicollinearity effects (Hair et al., 1998). The variance proportions for overconfidence and hindsight bias were 0.21 and 0.18 respectively (while the cut-off value is 0.5).


TABLE 2

First order Confirmatory Factor Analysis

Construct              Item         Factor loading  T-value
Hindsight bias         Hindsig1     0.68            11.56***
                       Hindsig2     0.74            13.71***
                       Hindsig3     0.70            12.80***
Illusory correlation   Ilcorel1     0.63             9.92***
                       Ilcorel2     0.54             8.12***
                       Ilcorel4     0.77            11.88***
Overconfidence         Overcon1     0.83            15.36***
                       Overcon2     0.85            17.39***
                       Overcon3     0.67            12.26***
Base-rate fallacy      Baserat1     0.54             7.15***
                       Baserat2     1.07             9.29***
Illusion of control    Ilcontr1     0.79            15.50***
                       Ilcontr2     0.87            18.05***
                       Ilcontr3     0.65            11.96***
                       Ilcontr4     0.73            13.97***
                       Ilcontr5     0.82            16.40***
Law of small numbers   Smnumb2      0.86            16.80***
                       Smnumb3      0.71            13.18***
                       Smnumb5      0.85            16.65***
Regression fallacy     Regfa1       0.82            14.21***
                       Regfa2       0.88            15.15***
Intuitive system       Expsys1      1.00            24.00***
                       Expsys3      0.80            16.40***
                       Expsys4      0.99            23.51***
                       Expsys5      0.99            23.55***
                       Expsys6      0.99            23.48***
Rational system        Ratsys2 (R)  0.74            14.78***
                       Ratsys3      0.65            12.47***
                       Ratsys4      0.99            23.65***
                       Ratsys5      0.98            23.14***
                       Ratsys6 (R)  0.94            21.38***

Significance levels are based on unstandardized coefficients * p<0.05; ** p<0.01; *** p<0.001



RESULTS

Second-order confirmatory factor analysis

A second-order CFA was executed to determine whether the seven factors (biases) load reasonably on one of the two higher-order factors (heuristics). Table 3 presents the findings for this second-order CFA model. According to the model fit indices, this model is statistically satisfactory: χ² = 434.92, χ²/df = 2.46, GFI = 0.89, CFI = 0.95, NFI = 0.92, NNFI = 0.94, RMSEA = 0.065 and RMR = 0.073 (Hair et al., 1998). The standardized solution of the items shows moderate results. Except for base-rate fallacy, which was found to be insignificant, all loadings are significant at a high level (p < 0.001). With the exception of illusory correlation (0.38), the standardized loadings are also greater than 0.5, which indicates that the scales show adequate convergent validity (Fornell and Larcker, 1981); although illusory correlation loads below 0.5, it remains highly significant (p < 0.001). We can therefore conclude that (1) hindsight bias, illusory correlation and overconfidence result from the availability heuristic, and that (2) illusion of control, law of small numbers and regression fallacy result from the representativeness heuristic.
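The convergent-validity criterion applied here (standardized loadings above 0.5) is a simple threshold check. As a small sketch of that screening step (the helper `flag_loadings` is my own; the numbers are the availability loadings reported in Table 3):

```python
def flag_loadings(loadings: dict, threshold: float = 0.5) -> list:
    """Return the names of factors whose standardized loading on the
    higher-order construct falls below the convergent-validity threshold."""
    return [name for name, value in loadings.items() if value < threshold]

# Second-order loadings on the availability heuristic, as reported in Table 3:
availability = {
    "Illusory correlation": 0.38,
    "Overconfidence": 0.73,
    "Hindsight bias": 0.80,
}
print(flag_loadings(availability))  # only illusory correlation falls below 0.5
```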

TABLE 3

Second-order Confirmatory Factor Analysis

Construct           Item                  Factor loading  T-value
Representativeness  Illusion of control   0.47            5.63***
                    Law of small numbers  0.58            6.08***
                    Base-rate fallacy     0.12            1.41
                    Regression fallacy    0.57            5.95***
Availability        Illusory correlation  0.38            4.28***
                    Overconfidence        0.73            3.63***
                    Hindsight bias        0.80            2.88**

Significance levels are based on unstandardized coefficients * p<0.05; ** p<0.01; *** p<0.001

Structural equation modeling

The second part of this study consists of two hypotheses. Overall, we want to show that heuristics and biases mediate the relationship between the intuitive and rational systems and risk-taking propensity. Since this part of the study has a mediating character, structural equation modeling (SEM) was used to test the hypotheses. The full model is based on the measurement model of the first-order CFA. In addition, covariance is allowed between overconfidence and hindsight bias and between their error terms, because of the related measures of these two constructs. The model presented a good fit between the theoretical model and the empirical covariances provided by the sample (Hair et al., 1998); χ² = 1078.61, χ²/df = 2.15, GFI = 0.83, CFI = 0.96
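A mediation hypothesis of the kind tested here implies an indirect effect equal to the product of two paths: predictor → mediator (a) and mediator → outcome (b). A common significance check for that product is the Sobel test. A minimal sketch (the function and all path estimates below are hypothetical illustrations, not values from this thesis):

```python
from math import sqrt

def sobel(a: float, se_a: float, b: float, se_b: float) -> float:
    """Sobel z-statistic for an indirect (mediated) effect a*b, where a is
    the path from predictor to mediator and b the path from mediator to
    outcome, each with its standard error."""
    return (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical path estimates for illustration only:
z = sobel(a=0.40, se_a=0.10, b=0.30, se_b=0.08)
print(round(z, 2))  # -> 2.74, i.e. significant at p < 0.01 (|z| > 2.58)
```

In practice, SEM software such as LISREL estimates the full system of paths simultaneously rather than testing each indirect effect in isolation.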

