
Entrepreneurial Risk Taking

Cognitive Processes and Biases to Explain Risk Taking Propensity

By

Bastian Bergmann

DRAFT VERSION 2.0

Master Thesis: MSc in BA, specialization: Business Development
University of Groningen
Faculty of Economics and Business

First Supervisor: Dr. J.D. van der Bij
Second Supervisors: Prof. dr. D.L.M. Faems / Drs. G. Balau

August 2012

Bastian Bergmann
Ostdamm 120
48249 Dülmen, Germany
(06)26288725
ba.bergmann@web.de
Student number: s2012723


ABSTRACT

The study investigates how rational and intuitive thinking as well as cognitive biases resulting from heuristics relate to entrepreneurial risk taking propensity. More precisely, I investigate the effect of intuitive and rational thinking systems on illusory correlation and overconfidence resulting from the availability heuristic and on base-rate fallacy, sample size fallacy, illusion of control and regression fallacy emanating from the representativeness heuristic. Furthermore, the effect of these biases on risk taking propensity is analyzed. The study was conducted among 289 entrepreneurs from the United States of America.

I found that entrepreneurs operating in the intuitive thinking system are more likely to be affected by cognitive biases and thus more prone to risk taking. Furthermore, the rational thinking system lowers the effect of some cognitive biases, namely overconfidence, regression fallacy and illusion of control, and thus decreases entrepreneurial risk taking propensity. In addition, rational thinking was found to lower risk taking propensity via a direct path from the thinking system to risk taking propensity.

Introduction

Risk taking has been largely attributed to entrepreneurs (i.e. Wagner, 2003; Ekelund, Johansson, Järvelin, & Lichtermann, 2005), because, as Liles (1974) suggested, these people put their financial well-being, career opportunities, family relations and psychic well-being at stake. Furthermore, risk taking is a necessity for new venture creation, as entrepreneurs take full responsibility for their behavior (Gasse, 1982) and entrepreneurial activities consist of an unstructured as well as largely unknown and uncertain set of alternatives (Bearse, 1982). In the same vein, some authors suggest that risk tolerant people decide to become entrepreneurs, whereas risk-averse individuals rather opt for contractual employment (Stewart, Jr., Watson, Carland, & Carland, 1999; Carland, Carland, Carland, & Pearce, 1995). The meta-analyses conducted by Stewart Jr. & Roth (2001; 2004) give further support to the tenet that entrepreneurs are indeed more risk taking than managers and thus weaken the contradicting literature stream embedded in achievement motivation theory, which suggests that there is no difference between entrepreneurs and managers in risk taking (i.e. Perry, 1990; Brockhaus, 1980; Masters & Meier, 1988; Miner & Raju, 2004; see also Cramer, Hartog, Jonker, & van Praag, 2002; Caliendo, Fossen, & Kritikos, 2009).

Having accepted the importance of risk taking propensity for entrepreneurs compared to managers, research within the group of entrepreneurs nevertheless suggests that entrepreneurs should not naively be considered equally prone to risk taking as a whole. For example, Sandner & Spiegel (2010) have criticized that most studies on entrepreneurial risk taking propensity have treated entrepreneurs as a homogeneous rather than a heterogeneous group, and they suggest that there is a difference in risk taking propensity, for example, between opportunity entrepreneurs and necessity entrepreneurs. Even earlier research proposed that entrepreneurs may differ among each other just as entrepreneurs are assumed to differ from non-entrepreneurs (Gartner, 1985; Wortman, 1987). Therefore, more research is needed that addresses the heterogeneity of risk taking among entrepreneurs.

The study at hand strives to contribute to the understanding of intra-group entrepreneurial risk taking, thus following the request of Sandner & Spiegel (2010) to assume a heterogeneous group of entrepreneurs. Furthermore, I follow the request of Shaver & Scott (1991) to integrate people's cognitive processes to understand new venture creation and the request of Stewart Jr. & Roth (2004) to explain entrepreneurial risk taking by cognitive determinants. In fact, most studies that identified determinants of risk taking propensity, such as age, inertia, outcome history, risk preferences / perception, gender differences or personality (Killgore, Grugle, Killgore, & Balkin, 2010; Nicholson, Soane, Fenton-O'Creevy, & Willman, 2005; Krueger & Dickson, 1994; Simsek, 2007; Sitkin & Pablo, 1992; Sitkin & Weingart, 1995), have taken a one-sided cognitive view, meaning that they assumed rational-conscious thinking. This, however, appears to be unrealistic, as people possess bounded rationality (Simon, 1955). Cognitive research nowadays agrees that, besides the rational-conscious dimension, another, unconscious or intuitive dimension plays an important role in driving behavior (i.e. Simon, 1955; Freud, 1960; Epstein, 1994; Kahneman, 2003). In fact, entrepreneurship has mostly been associated with intuitive thinking (Aquino, 2005; Blume & Covin, 2005; Simon, Houghton, & Aquino, 2000; Runco, 2004), and thus I propose that a one-sided rational-conscious view is not sufficient to understand entrepreneurial risk taking.

All in all, risk taking is an important aspect of entrepreneurship and far from being fully understood. Therefore, I strive to add value to current literature by using a cognitive psychological approach to explaining what drives risk taking propensity, assuming that:


1. Risk taking propensity is important for entrepreneurship

2. Cognitive psychology can help to explain risk taking propensity

3. Entrepreneurs are a heterogeneous group (conscious vs. unconscious thinkers)

The underlying literature stream about cognitive psychology used in this paper is largely dominated by the work of Tversky and Kahneman as well as by the work of Epstein.

The cognitive-experiential self-theory developed by Epstein and colleagues (i.e. Epstein, 1990) proposes that people apply two different thinking processes: the intuitive thinking system and the rational thinking system. Tversky, Kahneman & colleagues (i.e. Tversky & Kahneman, 1974; Kahneman, 2003) suggested that intuitive thinking is linked to heuristics, mental shortcuts that allow for making decisions more quickly and efficiently. In turn, heuristics might lead to cognitive biases, which are defined as possibly but not necessarily wrong judgments due to perceptual distortion. On the opposite side, the rational thinking system is not directly linked to heuristics, but is rather slow and addresses situations in their complexity (Kahneman, 2003). Thus, I propose that intuitive thinking is positively related to the cognitive biases resulting from heuristics and that those cognitive biases enhance risk taking propensity. Furthermore, I suggest that rational thinking is negatively associated with cognitive biases and thus related to risk taking propensity.

As a result, I defined the following three research questions:

1. How do rational and intuitive thinking relate to cognitive biases?

2. How do cognitive biases relate to risk taking propensity?

3. How do rational and intuitive thinking indirectly relate to risk taking propensity?

To sum up, I will add value to the entrepreneurial risk taking literature in various ways. First, I will contribute to the literature stream about antecedents of risk taking by adopting a two-sided cognitive view; more precisely, by incorporating both the conscious (rational thinking) and the unconscious (intuitive thinking) approach to making decisions and therefore assuming that entrepreneurs are indeed heterogeneous with regard to their preferred, but not mutually exclusive, thinking styles. Furthermore, the cognitive approach to explaining entrepreneurs' risk taking allows for heterogeneous risk taking among entrepreneurs, rather than assuming that all entrepreneurs have a high risk taking propensity. Thus, I will contribute to the understanding of risk taking within the group of entrepreneurs. Finally, I will introduce three more cognitive biases to the field of entrepreneurial research.


The remainder of the paper is structured as follows: the next section provides the conceptual development and background, from which the hypotheses ensue; more precisely, the expected relationships between the thinking systems and the biases and between the biases and risk taking propensity are outlined. The third section provides the methodological part, followed by the results. The paper is rounded off by a discussion, managerial implications, and limitations section.

Conceptual Development and Background

For a long time, human beings have been assumed to be rational thinkers. In business related research, theories about the homo oeconomicus or expected utility theory are just the acme of assumed human rationality. In 1955, Simon called for a revolution in economic theory that assumes the human being to act rationally, questioning the assumptions that (1) people have knowledge about all relevant facts concerning a decision, (2) they have clear and stable systems of preferences and (3) they have the skills to compute and calculate the outcome of all possible alternatives. A negative connotation was often attached to bounded rationality, as it was intended to explain the inability of human beings: limited computational capacities and limited access to information (Fiori, 2011). It has also frequently been equated with irrationality (i.e. Chater & Oaksford, 1992; see Lopes, 1992). However, heuristics or bounded rationality have great advantages, such as speeding up (Gigerenzer, 1997; Kahneman, 2003) or reducing the complexity of decision making (Tversky & Kahneman, 1974; Pitz & Sachs, 1984).

Heuristics and cognitive biases

In their revolutionary work, Tversky & Kahneman (1974) looked at the concept of bounded rationality developed by Simon (1955) from a slightly different angle, trying to explain why people sometimes make apparently irrational decisions. They concluded that people rely on heuristics in their decision making process (i.e. Tversky & Kahneman, 1974; Kahneman, 2003). A heuristic judgment occurs when people do not derive a judgment from the actual object or attribute of interest but from another, similar object or attribute – the heuristic. The authors suggest that people are prone to three different heuristics: the representativeness, availability, and adjustment & anchoring heuristics, of which the two former ones are the focus of this paper. Decisions affected by heuristics might lead to different errors in judgment, also known as cognitive biases (Kahneman & Frederick, 2002). All biases included in this paper (see Table 1) arise because people rely on either the availability or the representativeness heuristic, meaning that people affected by these cognitive biases base their decisions on statistically invalid information (see Tversky & Kahneman, 1974). This implies that their judgments or predictions are more likely to be imprecise, compared to judgments or predictions not affected by cognitive biases. The biases arising from the availability heuristic (overconfidence and illusory correlation) as well as two biases resulting from the representativeness heuristic (base-rate fallacy and sample size fallacy / law of small numbers, referred to as sample size fallacy from now on) lead to higher risk taking propensity due to the use of wrong and possibly biased information. The other two biases resulting from the representativeness heuristic (regression fallacy and illusion of control) lead to higher risk taking propensity because the controllability of outcomes is overestimated.

The availability heuristic implies that people make decisions on the basis of examples that come easily to mind. Independent of what influences availability, only limited information – that which comes easily to mind – is used to make judgments. Tversky & Kahneman (1974) describe the heuristic with the example of assessing the probability of heart attacks among middle-aged people. A random person asked for this judgment would most probably recall the middle-aged people with heart attacks among his or her acquaintances to derive the overall probability that middle-aged people suffer from heart attacks. Using the availability heuristic possibly leads to biases or errors in judgment, two of which are included in this paper: overconfidence (i.e. Russo & Schoemaker, 1992) and illusory correlation (Tversky & Kahneman, 1974).

Overconfidence is best explained by the concept of meta-knowledge. The term describes the "appreciation of what we do know and what we do not know" (Russo & Schoemaker, 1992; see also Lichtenstein & Fischhoff, 1977). In other words, meta-knowledge tells us whether we have enough information to make decisions. When people do not have enough meta-knowledge to show them the limits of what they know, they become overconfident about their knowledge. According to Russo & Schoemaker (1992), overconfidence is a direct result of the availability heuristic. The availability bias results in people being overly confident about conclusions they draw from information they already possess. When people are not able to consider other information, they use the information they can easily access and overestimate its precision (Russo & Schoemaker, 1992). Overconfidence is often attributed to entrepreneurs (Simon & Houghton, 2003), because it potentially generates the enthusiasm and persistence needed to succeed in risky situations (Simon & Shrader, 2012). However, other researchers also concluded that being too overconfident leads to inflexibility, inadequate resource deployment and insufficient information search (i.e. Hmieleski & Baron, 2009; Hayward, Shepherd, & Griffin, 2006).


Illusory correlation refers to the error of assuming a high frequency of co-occurrence between two events, even when they are not correlated. For example, if people can easily think of an association between two events, they will also overestimate the frequency of their co-occurrence (Tversky & Kahneman, 1974). Chapman & Chapman (1967) give the example of clinical diagnoses paired with drawings made by patients. After having been shown some diagnoses and the related drawings, people tended to overestimate the co-occurrence of suspiciousness and peculiar eyes, because suspiciousness was wrongly associated with peculiar eyes in the minds of the respondents. Thus, real correlations are assumed based on associations between two events that are available in people's minds. Nisbett & Ross (1980) concluded that people overestimate correlations when they have some theoretical evidence for them and underestimate correlations when they possess no theoretical evidence.

Both biases occur because people affected by the availability heuristic draw conclusions based on what comes to their mind. In other terms, one might speak of a biased sample. The information set used is most likely to be biased towards certain information, because the information that is remembered or used depends on different aspects, such as (a) the ability to imagine instances – the possibility to logically generate or think of examples if they are not already available, (b) the effectiveness of the search set – the ease of filtering examples from information, (c) the extremity of information (Tversky & Kahneman, 1974), (d) examples that support a decision (Golder & Tellis, 1993; Simon & Houghton, 1997) or (e) the recency of information (Barber & Odean, 2008).

As already stated earlier, one might suggest that the availability heuristic influences decisions negatively. However, looking from a cost-benefit perspective, the availability heuristic can indeed be helpful, especially for entrepreneurs. For example, using heuristics can reduce the complexity of making judgments (Tversky & Kahneman, 1974; Pitz & Sachs, 1984), which is especially important for entrepreneurs, whose environment is viewed as complex (Covin & Slevin, 1991; Gartner, Bird, & Starr, 1992). Furthermore, entrepreneurs are known for not having the resources for large-scale data collection (Busenitz & Barney, 1997), and thus they are more dependent on the information they have in mind to make decisions. Finally, as entrepreneurs are dependent on opportunities, they have to make timely decisions before an opportunity is forgone (Stevenson & Gumpert, 1985). From these findings it follows that entrepreneurs rely on the availability heuristic to a certain extent, due to the nature of entrepreneurship and the entrepreneurial environment – they rely on a biased sample on which to base their idea for a new venture.


Assuming that entrepreneurs have an idea about a new business in mind, and thus based on available knowledge, they tend to filter information that reconfirms the idea, i.e. that reconfirms, for example, starting a new business. This tendency to reconfirm available information has been demonstrated in several studies; it was partly conceptualized as anchoring and adjustment by Tversky & Kahneman (1974) and is also known as confirmation bias. People who base their initial idea on weak evidence have difficulties interpreting and including information that contradicts the idea (Bruner & Potter, 1964), they misinterpret contradicting information as supporting information (Rabin & Schrag, 1997), or it even leads to polarization, meaning that beliefs can move further away from reality (Lord, Ross, & Lepper, 1979; Darley & Gross, 1983). Lord, Ross & Lepper (1979) emphasized that when ambiguity is high, which entrepreneurs usually face, people overestimate the strength of confirming information and underestimate the relevance of disconfirming information. Thus, people who have information in mind tend to form correlations in their heads and tend to filter and interpret new information in their environment so as to support those assumed correlations. I call this perceived evidence. Additionally, they become overly confident about their idea. It follows that perceived evidence predominantly supports an entrepreneur's idea, even when the real evidence would suggest otherwise. This does not mean that perceived evidence and real evidence always differ, but such a divergence is at least more likely to occur than when all relevant information is considered. It follows that when perceived evidence differs from real evidence, entrepreneurs are more likely to step into risky situations. Therefore, I hypothesize that:

Hypothesis 1: The higher the score on overconfidence and illusory correlation resulting from availability heuristic, the higher the risk taking propensity.

The representativeness heuristic occurs when people base their decision not directly on information about the object of interest, but on an object – the heuristic – that they believe is to a certain extent representative of the object of interest (Tversky & Kahneman, 1974). This implies that the accuracy of the decision depends on the degree to which the heuristic is similar to the object of interest. In an experimental test, Tversky & Kahneman (1974) asked people to guess the occupation of a fictive person called Steve on the basis of a short description of his personality. Respondents judged Steve to be, for example, a librarian based on the extent to which the description was similar to the stereotype of a librarian. Besides that, the representativeness heuristic can also influence correlational or causal beliefs (i.e. Jennings, Amabile, & Ross, 1982; Nisbett & Ross, 1980). Thus, an outcome is representative if features of the actor have a propensity to produce such an outcome (Tversky & Kahneman, 1983). For example, if someone believes in a correlation between good grades and intelligence, one believes that everyone having good grades is intelligent, since the feature of the actor (intelligence) has a propensity to produce this outcome (good grades). In general, people tend to judge event A as more likely to occur than event B if event A seems to be more representative (Tversky & Kahneman, 1974).

In the paper at hand, I focus on four biases that result from decisions affected by the representativeness heuristics: base-rate fallacy, sample size fallacy, regression fallacy and illusion of control.

The base-rate fallacy occurs because people use irrelevant information rather than statistical, more relevant information to make judgments. For example, Tversky & Kahneman (1974) asked respondents to estimate the likelihood that a certain person belongs to a certain occupational group. The respondents were provided with a description of the person – the description, however, only included information irrelevant for drawing conclusions about the occupation. Furthermore, they were given evidence about the distribution of the population of which the person is part: for example, 70% of the population were engineers and 30% lawyers. Based on this information, respondents estimated the occupational group according to the extent to which the description of the person resembled the stereotype of either an engineer or a lawyer, rather than basing their decision on the more valid indicative power of the distribution of occupations in the population (the base-rate). Through further similar tests they concluded that people neglect the more valid statistical information when irrelevant information is given; however, respondents do consider statistical information when no further evidence is given (Tversky & Kahneman, 1974). The proposition that people tend to neglect base-rate frequencies was relaxed by Juslin, Nilsson, & Winman (2009) in a review of research on base-rate neglect. The authors conclude that base-rates are not completely neglected but still influence judgment. Nevertheless, in most cases respondents tended to underweight the role of base-rates, depending on how informative the specific evidence and how extreme the base-rates are (Juslin et al., 2009). Earlier research suggested that people prefer using more vivid, salient and concrete information instead of remote, pallid and abstract information (Nisbett, Borgida, & Crandall, 1976), that causally powerful information dominates the decision making process (Tversky & Kahneman, 1980), or that the perceived relevance of information is decisive (Bar-Hillel, 1980). In essence, people affected by the base-rate fallacy do not use, or underweight the importance of, base-rates: they instead use information to make judgments that is possibly less relevant.
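To make the role of the base-rate explicit, a short Bayesian sketch of the engineer/lawyer example may help (the 70/30 split is taken from the example above; the equal likelihoods are an illustrative assumption representing an uninformative description):

\[
P(\text{engineer} \mid D) = \frac{P(D \mid \text{engineer})\, P(\text{engineer})}{P(D \mid \text{engineer})\, P(\text{engineer}) + P(D \mid \text{lawyer})\, P(\text{lawyer})} = \frac{0.5 \times 0.7}{0.5 \times 0.7 + 0.5 \times 0.3} = 0.7
\]

When the description carries no diagnostic information, the posterior probability simply equals the base-rate of 0.7; respondents affected by the base-rate fallacy nevertheless tend to answer close to 0.5.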


The sample size fallacy occurs when people draw inferential conclusions about a population without considering the sample size. For example, Tversky & Kahneman (1974) found that even experienced research psychologists wrongly assumed that inferences about parameters are independent of sample size. Thus, they overestimated the predictive power of statistics in the belief that these statistics are unconditionally representative of the parameters. More precisely, people tend to project the probability distribution of a small sample onto a whole population (Tversky & Kahneman, 1971). In a test, Tversky & Kahneman (1971) told respondents that a certain, theoretically grounded pattern had been found in a statistical analysis. The respondents, asked to estimate the likelihood that the same pattern could be observed again in subsequent tests, naively overestimated the likelihood that the same pattern would emerge with comparatively small samples. Rabin (2002) supported the basic tenet of Tversky & Kahneman (1971; 1974) and showed that people overestimate (a) the representativeness of samples with different extreme outcome distributions and (b) the representativeness of small samples. Furthermore, as argued above, the information sample on which decisions are based, and which is assumed to be representative of the parameters, is likely to be biased towards supporting information (Lord, Ross, & Lepper, 1979; Darley & Gross, 1983; Rabin & Schrag, 1997; Bruner & Potter, 1964).
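A simple calculation illustrates why sample size matters (the numbers are illustrative and not taken from the thesis data). For a true proportion of p = 0.5, the standard error of a sample proportion shrinks with the square root of the sample size:

\[
SE(\hat{p}) = \sqrt{\frac{p(1-p)}{n}}, \qquad SE \approx 0.16 \text{ for } n = 10 \quad \text{versus} \quad SE \approx 0.016 \text{ for } n = 1000 .
\]

Extreme-looking patterns are therefore far more likely to arise by chance in small samples; treating them as representative of the population is precisely the sample size fallacy.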

All in all, both biases result in people considering information to be representative of the parameters of interest, although that information does not necessarily possess indicative power. In the case of the base-rate fallacy, less relevant information is likely to be used; in the case of the sample size fallacy, the size and the possible bias of the sample are not considered. Following a similar reasoning as for the availability heuristic, it can be assumed that people rather use information that supports their ideas than consider base-rates or the impact of small and biased samples. Thus, both biases lead to the use of information that is biased towards positive information without necessarily providing statistically valid indications. Therefore, it is assumable that people perceive less risk and thus have a higher risk taking propensity. Consequently, it is hypothesized:

Hypothesis 2a: The higher the score on base-rate fallacy and sample size fallacy resulting from representativeness heuristic, the higher the risk taking propensity.

In the case of the regression fallacy, also known as Galton's fallacy, people tend to neglect the statistical principle of regression to the mean. The regression fallacy suggests that people do not consider the natural fluctuations of variables around a certain mean. This implies that if people take action before an extreme outcome is observed, they tend to overestimate the causal relation between the action and the extreme outcome, and are surprised when the same action leads to a somewhat less extreme outcome at another time, simply because variables fluctuate around a certain mean. Tversky & Kahneman (1974) use the example of flight training. After an exceptionally good performance of a flight trainee, the teacher praises the trainee. In the next training session, the teacher expects the trainee to perform equally well. However, according to the regression to the mean principle, the trainee is likely to perform somewhat poorer. Affected by the regression fallacy, the teacher might conclude that the trainee is now underperforming because he praised him in the previous session, assuming a false correlation between praising the trainee in the previous session and the somewhat poorer performance in the current session. Consequently, people who misunderstand or neglect the regression to the mean concept are prone to the regression fallacy in the sense that they think that an extreme observed outcome is representative of actions taken before the outcome was observed. Regression to the mean and its fallacy have been substantially researched, mainly in the field of statistics, and the findings all suggest that seriously wrong conclusions can be drawn when this concept is misunderstood or neglected (i.e. Marsh & Hau, 2010; Quah, 1993; Morrison, 1973).
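The statistical logic behind this can be sketched briefly (the correlation value is illustrative and not taken from the thesis). For standardized performance scores of two consecutive sessions with correlation ρ, the expected second score given the first is

\[
E[Y \mid X = x] = \rho\, x ,
\]

so with ρ = 0.5 a trainee who scores two standard deviations above the mean in the first session is expected to score only one standard deviation above the mean in the second session; this decline occurs without any causal contribution of the praise.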

The illusion of control bias arises from the error of neglecting the role of chance. With the title "Heads I Win, Tails It's Chance", Langer & Roth (1975) capture the basic meaning of this bias. They summarized that research agrees that people tend to attribute successes to their skills and failures to pure chance (i.e. Weiner, Frieze, Kukla, Reed, Rest, & Rosenbaum, 1971; Wortman, Costanzo, & Witt, 1973). Consequently, when a pure chance situation resulted in a success, people tended to believe that it was due to their skills, even though it was pure luck that turned the situation into a success. Therefore, in a subsequent similar situation, people think they can control the situation, believing that their skills can bring about success, even though skills are actually less relevant than chance. Langer (1975) stated that the illusion of control bias is more evident when it is unclear whether a situation depends on chance or on skills. Consequently, the illusion of control bias refers to people's confidence that they can control situations even when these are not controllable. March & Shapira (1987) explain that the reason managers sometimes choose risky situations is that they believe they can control situations, even when they cannot.

Both of these biases emanating from the representativeness heuristic imply that people tend to overestimate the controllability of events or outcomes. Either they think that chance situations can be controlled or that extreme outcomes can be influenced by actions. In either case, skills are assumed to be causing certain outcomes. Entrepreneurs especially are prone to these biases. Shaver & Scott (1991) noted that entrepreneurs are known to believe that they are able to control their environment. Consequently, as entrepreneurs are prone to believe in the controllability of events, they will perceive less risk and thus are willing to take the next step. Even if this is not always the case, it is at least assumable that entrepreneurs influenced by those biases are more likely to underestimate the risk involved, so that they are not deterred by the real risk involved. Consequently I hypothesize:

Hypothesis 2b: The higher the score on regression fallacy and illusion of control resulting from representativeness heuristic, the higher the risk taking propensity.

Table 1: Overview of Cognitive Biases

Availability Heuristic

Overconfidence: The belief that one's judgments or predictions are overly accurate and right; being overly confident about the accuracy of predictions

Illusory Correlation: The belief in correlations between two variables that are wrongly associated in the mind

Representativeness Heuristic

Base-rate Fallacy: The neglect of base-rates in making judgments and reliance on irrelevant information instead

Sample Size Fallacy: The neglect that different sample sizes have different inferential validity and are possibly biased

Regression Fallacy: The neglect of natural fluctuations around a mean, leading to wrong associations

Illusion of Control: The inability to distinguish between chance situations and control situations and the assumption that a chance situation is controllable

Thinking Systems

Generally, the academic literature agrees that there are two different cognitive thinking styles. Different names have been given to these two thinking systems; I believe that the intuitive (Jung, 1964) and analytical-rational (Epstein, 1983) thinking styles provide the most fitting description of the ways people think and judge.


The intuitive system operates in an automatic, holistic, associational manner (Denes-Raj & Epstein, 1994) and is fast, effortless, implicit and emotionally charged (Kahneman, 2003). On the other side, the analytical-rational system is defined as analytical, conscious, rule governed (Denes-Raj & Epstein, 1994) and is slow, limited, effortful and controlled (Kahneman, 2003).

Originally, the dual process theory provided a clear distinction between these thinking styles and assumed that both systems operate independently, simultaneously and interactively (Kahneman, 2003). Epstein and colleagues introduced the cognitive-experiential self-theory (CEST) (i.e. Epstein, 1990; 1994; Epstein, Pacini, Denes-Raj, & Heier, 1996), defining two thinking systems that broadly parallel the thinking systems of the dual-process theory (Witteman, van den Bercken, Claes, & Godoy, 2009). CEST agrees with the dual process theory that both cognitive thinking styles are used interactively; however, it further explicitly suggests that people differ in whether they respond primarily rationally or intuitively to situations (Langan-Fox & Shirley, 2003). Thus, CEST is in line with the famous work of Briggs & Myers (1976) and the work of Cacioppo & Petty (1982) in claiming that there are individual differences in the degree to which people operate in the intuitive or rational thinking mode. Accordingly, some authors define thinking styles as a preferred manner of using either intuitive or rational thinking, which can nevertheless vary depending on the situation (Sternberg & Grigorenko, 1997; Dane & Pratt, 2007). The basic tenet of the entrepreneurial literature is that entrepreneurs are primarily intuitive thinkers (Aquino, 2005; Blume & Covin, 2005; Simon, Houghton & Aquino, 2000; Runco, 2004). Some of the most famous entrepreneurs attribute their success to intuition. For example, Oprah once said, "My business skills have come from being guided by my intuition", Bill Gates stated, "you cannot ignore your intuition", and Donald Trump wrote, "I've built a multi-billion empire by using my intuition" (see La Pira, 2011).

The intuitive thinking system is commonly described as the experiential system (Epstein, 1994). This means that the instances made available are gained from prior experiences (Sloman, 2002). In line with the definition of Kahneman (2003), it is assumable that people tend to rely on these experiences that are already available in their heads, as they are easy to access, and that this drives the reliance on heuristics. Thus, people operating preferably in the intuitive thinking system make use of experiential information that is already available and hence tend to use that information as a heuristic for the object of interest. This has major implications for the overconfidence and illusory correlation biases.


These two biases, as already mentioned, occur because people base their judgments on possibly biased and limited information. Therefore, if entrepreneurs operating in the intuitive thinking system mainly base their decisions on experiences, they are more likely to be affected by those biases. More precisely, as Russo & Schoemaker (1992) proposed, the overconfidence bias results when people are only able to consider limited information in complex situations. This is exactly what people operating in the intuitive thinking system do. They tend to rely on experiential information (Epstein, 1994) that is easily accessible in their minds (limited information) (Kahneman, 2003). In turn, relying on limited information makes people overconfident about their conclusions (Russo & Schoemaker, 1992). For illusory correlation, a similar argumentation holds. The illusory correlation bias occurs when people can easily form correlations in their heads. Thus, in the intuitive thinking system, judgments are likely to be made based on easily accessible information and correlations are likely to be assumed that are easy to compute in the head (i.e. based on experiences) (Tversky & Kahneman, 1974). Furthermore, as people in the intuitive thinking mode tend to make judgments holistically (Epstein, 1994) or based on broad generalizations (Epstein, 1991), they see things as a whole and not in a detailed way. For illusory correlation this means that when people have assumed co-occurrences in their heads, they are less likely to question whether real correlations exist, as they do not look into detail. Thus, if correlations make sense in the head in the first place, because two events have been observed to co-occur several times, they tend to be accepted in the holistic system.

Relying on experiential information per se would not be a problem if the easily accessible experiences were a representative sample of information. However, it is assumable that these experiences do not represent a population, as the experiences that are recalled and used as a basis for judgment in the intuitive thinking system are indeed likely to be biased, as described earlier (see Tversky & Kahneman, 1974; Golder & Tellis, 1993; Simon & Houghton, 1997; Barber & Odean, 2008). Therefore, I hypothesize that:

Hypothesis 3a: The higher an entrepreneur's score on intuitive thinking, the higher his/her score on overconfidence and illusory correlation resulting from the availability heuristic.

Furthermore, the intuitive thinking system is a holistic one (Epstein, 1994) based on broad generalizations (Epstein, 1991), and since intuitive people are not likely to look much into detail, situations or examples tend to look more alike; they are more likely to be assumed to be representative. This of course means that the biases resulting from the representativeness heuristic are more likely to occur (see Kahneman, 2003). For the base-rate fallacy as well as the sample size fallacy, people are more likely to assume that the irrelevant or biased information they use is indeed representative of the decision situation, since they do not question things, as they do not look into detail. Furthermore, the neglect of base-rates or of sample size implications is indeed likely to occur in the intuitive thinking system, which is a concrete one (Epstein, 1994). This suggests that abstract information or the statistical computation of abstract information is avoided, as happens when people are affected by the base-rate fallacy and the sample size fallacy. Epstein (1994) suggested, for example, that people tend to use more concrete information, like absolute numbers, rather than more abstract information, like ratios. This is in line with the attribution of a "natural way of thinking" to the intuitive system, described as the "customary way in which a particular kind of situation tends to be interpreted [...]" (Epstein, 1994). Thus, people in the intuitive mode do not tend to interpret situations as statistical problems. Therefore, I hypothesize that:

Hypothesis 3b: The higher an entrepreneur's score on intuitive thinking, the higher his/her score on base-rate fallacy and sample size fallacy resulting from the representativeness heuristic.

For illusion of control, the holistic and broadly generalizing nature of the system (Epstein, 1991; 1994) implies that entrepreneurs are more likely to believe that chance situations are in fact controllable situations. By not looking much into detail, they are more likely to think that control situations they have already experienced are similar to situations they are facing, even though some differences make the latter chance situations. As already mentioned, especially when chance situations are very similar to control situations, people are likely to assume controllability (Langer, 1975). Similarly for the regression fallacy, people operating in the holistic system (Epstein, 1994) based on broad generalizations (Epstein, 1991) are more likely to believe that an action is representative of an outcome and thus to believe in the controllability of that outcome. Furthermore, people assume an outcome to be representative of an action because they fail to consider the regression to the mean principle, a piece of statistical logic. As people in the intuitive thinking system prefer concrete information to abstract information, they tend to neglect the statistical logic of regression to the mean, which can be described as an abstract computation of information. Indeed, intuitive thinking, which is oriented towards immediate action (Epstein, 1991), does not allow for the computation of statistical logic, as virtually no time is given to make the judgment. Therefore, intuition is likely to result in people being affected by the regression fallacy.


In general, entrepreneurs are more likely to believe in the controllability than in the uncontrollability of things (Shaver & Scott, 1991), and they are also considered to be intuitive thinkers (Aquino, 2005; Blume & Covin, 2005; Simon, Houghton & Aquino, 2000; Runco, 2004), indicating a co-occurrence between intuitive thinking and the belief that things can be controlled. Consequently, I hypothesize that:

Hypothesis 3c: The higher an entrepreneur's score on intuitive thinking, the higher his/her score on regression fallacy and illusion of control resulting from the representativeness heuristic.

The opposite argumentation holds for why rational thinking decreases the use of heuristics and thus of biases. People who tend to operate preferably in the rational thinking mode are less likely to rely on heuristics and therefore less likely to be affected by biases. Rational thinking is often equated with satisfying Bayesian rules (Sheffrin, 1996; Wittmann, 1995; Glimcher, Dorris, & Bayer, 2005; Naqvi, Shiv, & Bechara, 2006). This implies that rational thinking is a logical (Jones, 1955) way to reason about things based on statistical evidence. In general, all cognitive biases are likely to occur because people neglect or underestimate some kind of statistical evidence. However, rational thinking is rule-governed (Denes-Raj & Epstein, 1994); thus the rational thinking system does not allow for the use of heuristics and therefore reduces the likelihood of cognitive biases occurring. Furthermore, the rational thinking system requires justification via logic and is assumed to allow for considering statistical evidence (Epstein, 1994). Conversely, all cognitive biases occur because people neglect either the role of chance (illusion of control), the regression to the mean principle (regression fallacy), base-rates (base-rate fallacy), sample size implications (sample size fallacy), or the statistical validity of available information (illusory correlation and overconfidence). Thus, people in the rational mode tend to base their decisions more on statistics, as this system requires justification via logic and evidence (Epstein, 1994), and cognitive biases are therefore less likely to occur, as statistical information is less likely to be neglected. Consequently, I hypothesize that:

Hypothesis 4: The higher an entrepreneur's score on rational thinking, (a) the lower his/her score on illusory correlation and overconfidence resulting from the availability heuristic and (b) the lower his/her score on base-rate fallacy, sample size fallacy, illusion of control and regression fallacy resulting from the representativeness heuristic.


Figure 1 illustrates the conceptual model.

Figure 1: Conceptual Model

Control Variables

The model, more precisely the cognitive biases, is controlled for life satisfaction, experience and educational background. Being satisfied with life is linked to being more rational and deliberate and thus to relying less on heuristics and being less affected by cognitive biases (Coutinho & Woolery, 2004; Dwyer, 2008). Furthermore, being more experienced might lead to a stronger reliance on heuristics, followed by cognitive biases, as more information is available in people's minds that they can use as heuristics (Kahneman, 2003; Tversky & Kahneman, 1974). Finally, business people are traditionally seen as right-brain thinkers (Horton, 1995), meaning that they are more intuitive thinkers and thus more prone to rely on heuristics and to be affected by cognitive biases (Hanna, Wagle, & Kizilbash, 1999).

[Figure 1 depicts the hypothesized positive (+) and negative (-) paths linking the thinking systems, the availability heuristic biases (Overconfidence, Illusory Correlation), the representativeness heuristic biases (Base-rate Fallacy, Sample Size Fallacy, Illusion of Control, Regression Fallacy), and Risk Taking Propensity.]


Methodology

Sample and data collection

The questionnaire was sent to 1,500 randomly selected entrepreneurs. The survey was administered following the total design method for survey research (Dillman, 1978). A first mailing, including a personalized letter, a project fact sheet and the survey with a priority postage-paid envelope and an individually typed return-address label, was sent to the 1,500 randomly selected entrepreneurs, of which 324 mailings were returned due to undeliverable addresses and names. Accordingly, the adjusted sample was 1,176 entrepreneurs. A second, follow-up letter was sent one week later to increase the response rate. After two weeks, another package with the same content as the first mailing was sent. In total, 289 entrepreneurs replied and completed the questionnaire, representing a response rate of 24.6%. Initially, three different sources were used to identify entrepreneurs: (1) a list of entrepreneurs assembled by the venture capitalist David Silver (Silver, 1985); (2) a list of national winners of the Entrepreneur of the Year awards, published by Ernst & Young; (3) a list of 6,359 venture-backed firms of entrepreneurs published by VentureOne.

An extensive pre-test was conducted with twelve entrepreneurs. In the beginning, the entrepreneurs introduced themselves and their backgrounds. When filling in the questionnaire, the respondents were told to think aloud. Two researchers carefully noted the recorded verbalization of the thinking process of the entrepreneurs. Afterwards, some wordings of the cases and instructions had to be changed.

Measurement

Scales existing in the literature were used whenever possible. If no scales existed, new scales were derived from definitions. The measurements were additionally supplemented with self-developed entrepreneurial cases, derived from existing cases. Due to the length of the questionnaire, only the items with the highest loadings were used.

Dependent variable: Risk taking propensity was measured by the certainty equivalent approach, which aims to locate an individual on the utility curve (Kahneman & Tversky, 1979; Mullins, Forlani, & Walker, 1999; Schneider & Lopes, 1986). The entrepreneurs received scenarios in which they could choose between a certain option and a risky option with the same expected value. Furthermore, two more questions were added in which the entrepreneurs could choose between 10 different options with the same expected value. The options differed in their riskiness, which was manipulated by the percentage of the total sum that entrepreneurs could win or lose (e.g. Walls & Dyer, 1996). The Cronbach's α for risk taking propensity is 0.87.
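To illustrate the logic of the certainty equivalent approach (the payoffs are illustrative; the actual questionnaire amounts are not reproduced here): a gamble offering a 50% chance of $10,000 and a 50% chance of nothing has the expected value

\[
EV = 0.5 \times 10{,}000 + 0.5 \times 0 = 5{,}000 .
\]

An entrepreneur who is indifferent between this gamble and a certain $3,500 has a certainty equivalent below the expected value and is therefore risk averse, whereas preferring the gamble over a certain $5,000 indicates risk seeking; choices of this kind locate the respondent on the utility curve.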

Mediating variables: Illusory correlation was measured by using the ideas of Tversky & Kahneman (1974). Common myths about co-occurrence were provided to entrepreneurs. Entrepreneurs had to indicate to what extent they agree or disagree. The Cronbach’s α for illusory correlation is 0.67.

For overconfidence a 3-item scale was developed as suggested by Forbes (2005) and Brenner, Koehler, Liberman, & Tversky (1996). The respondents were provided difficult general knowledge questions. For each given answer, they had to indicate to what extent, measured in percentage, they were confident about their answer. The Cronbach’s α for overconfidence is 0.82.
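A common way to score such calibration items is to compare mean stated confidence with the proportion of correct answers (the thesis does not report its exact scoring rule, so this is only an illustrative convention):

\[
\text{Overconfidence} = \bar{c} - \bar{a}, \qquad \text{e.g. } 0.85 - 0.60 = 0.25 ,
\]

where \bar{c} is the average confidence over the three questions and \bar{a} is the share of questions answered correctly; positive values indicate overconfidence.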

Base-rate fallacy was measured using two cases based on Lynch & Ofir (1989). Statistically relevant information (base-rates) and statistically irrelevant information were provided. At the end of the description, respondents were requested to judge the likelihood, in percent, of an option, an answer they could best derive from the statistically relevant information. If respondents' answers deviated from the base-rates given, they suffered from the base-rate fallacy. The Cronbach's α for base-rate fallacy is 0.73.

Illusion of control was measured using 5-item scales based on Simon et al. (2000) and Zuckerman, Knee, Kieffer, Rawsthorne, & Bruce (1996). The questions asked about the accuracy entrepreneurs believe they have in predicting and controlling things. The Cronbach's α for illusion of control is 0.88.

Regression fallacy was measured by a single-item case based on an example of Tversky & Kahneman (1974). The case describes a company whose growth figures vary naturally around a certain mean. During this development, an advertising campaign was run. If entrepreneurs concluded that the advertising was responsible for the company's growth development, they were affected by the regression fallacy, since according to the regression to the mean principle the fluctuations were natural and not influenced by the advertising campaign.

Sample size fallacy was measured by a 3-item scale based on Simon et al. (2000) and Mohan-Neill (1995). It captured the extent to which entrepreneurs base their decisions on a limited number of potentially biased sources (e.g. only friends). The Cronbach's α for sample size fallacy is 0.85.

Independent variables: The rational thinking system was measured by the so-called "Need for cognition" scale (Epstein et al., 1996; Pacini & Epstein, 1999). In this way, the extent to which entrepreneurs rely on in-depth, hard and logical thinking was captured. The Cronbach's α for the rational thinking system is 0.90.

Intuitive thinking was based on the latest version of the "Need for cognition" and "Faith in intuition" instrument by Pacini & Epstein (1999). It measured to what extent entrepreneurs rely on gut feelings and beliefs. The Cronbach's α for the intuitive thinking system is 0.98.

Control variables: General satisfaction was measured by the "Satisfaction with Life Scale" based on a scale developed by Pavot & Diener (1993). Questions asked about the extent to which respondents are satisfied with their life or whether they would change something in their life. The Cronbach's α for general satisfaction is 0.76.

Educational background is a one-item measurement based on a nominal scale, where respondents were asked for their educational background: 1 denotes a degree obtained in a business-related field, 2 a degree in a technical field and 3 a degree in any other field. Afterwards, the technical and other educational backgrounds were merged and compared to the business background.
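A minimal sketch of this recoding (illustrative values; not the thesis code), with the technical and other categories merged into a non-business group:

import numpy as np

education = np.array([1, 2, 3, 1, 2])               # 1 = business, 2 = technical, 3 = other field
business_background = (education == 1).astype(int)  # technical and other merged and coded 0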

Experience is also a one-item measurement, based on an interval scale, asking respondents how many years they have been active as entrepreneurs.

Analysis

First order confirmatory factor analysis

A confirmatory factor analysis was conducted using Maximum Likelihood Estimation (MLE) with Lisrel 8.80 (Jöreskog & Sörbom, 2001). The MLE procedure appeared to be appropriate since it is commonly accepted and the recommended minimum sample size of 100-150 is met (Hair, Anderson, Tatham, & Black, 1998). Furthermore, the sampling adequacy of the data set was tested using the Kaiser-Meyer-Olkin test, in order to check for a sufficient amount of correlations (see Table 2). The test revealed a sampling adequacy of 0.85 with a significance level of Bartlett's Test of Sphericity of p<0.001, indicating very good adequacy (Hair et al., 1998).
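For readers without access to Lisrel or SPSS, the two adequacy checks can be reproduced directly from the item data. The following is a minimal sketch (not the code used in this thesis), assuming the item responses are available as a respondents-by-items NumPy array:

# Minimal sketch of the sampling-adequacy checks reported above (KMO and
# Bartlett's test of sphericity), computed with NumPy/SciPy. `items` is
# assumed to be an (n_respondents x n_items) array of item scores.
import numpy as np
from scipy import stats

def bartlett_sphericity(items):
    """Test whether the correlation matrix differs from an identity matrix."""
    n, p = items.shape
    R = np.corrcoef(items, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(items):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0-1, higher is better)."""
    R = np.corrcoef(items, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Anti-image (partial) correlations derived from the inverse correlation matrix.
    scale = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / scale
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

Run on the same item data, kmo(items) should be comparable to the 0.85 reported above, and the Bartlett p-value should fall below 0.001.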

For the item basfa2, a negative error variance occurred (Heywood Case) (Kolenikov & Bollen, 2007). Therefore, the error variance was fixed to 0.

The factor analysis was conducted with all independent variables and mediating variables with a metric scale (Hair et al., 1998). One-item constructs, the dependent variable as well as the control variables were not included (Anderson & Gerbing, 1988).


The overall measurement model shows a good fit: χ2 = 421.51; χ2/df = 1.66; GFI = 0.90; CFI = 0.98; NFI = 0.95; NNFI = 0.97; RMSEA = 0.048 (see Table 3). All items load highly significantly on their constructs (p<0.001) and all standardized solutions are higher than 0.5, indicating the scales' convergent validity (Fornell & Larcker, 1981). Besides, none of the inter-factor correlations' confidence intervals includes the value one (p<0.01) and all item-level correlations between constructs are insignificant, indicating discriminant validity (Bagozzi & Philips, 1991). Table 2 shows that the Cronbach's alphas range from 0.67 to 0.98, suggesting good reliabilities (Nunnally, 1978).
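As a reference for the reliabilities reported on the diagonal of Table 2, Cronbach's alpha can be computed from the raw item scores as follows (a minimal sketch, assuming the items of one construct are stored as a respondents-by-items NumPy array):

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents x k_items) array for a single construct
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)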


Table 2: Descriptive Statistics – Means, Standard Deviations, Correlations and Reliabilities

Construct Mean St. Dev. 1 2 3 4 5 6 7 8 9 10 11 12

1. Risk-taking Propensity 4.45 2.35 0.87
2. Overconfidence 5.80 2.42 0.35** 0.82
3. Illusory Correlation 4.65 1.25 0.34** 0.17** 0.67
4. Base-rate Fallacy 2.62 0.99 0.22** 0.10 -0.01 0.73
5. Sample Size Fallacy 4.71 1.37 0.35** 0.35** 0.24** -0.08 0.85
6. Regression Fallacy 5.62 2.49 0.40** 0.41** 0.06 -0.04 0.30**
7. Illusion of Control 5.17 1.25 0.50** 0.23** 0.28** 0.30** 0.17** 0.24** 0.88
8. Rational Thinking 3.14 1.17 -0.35** -0.36** -0.11 -0.05 -0.20** -0.22** -0.39** 0.90
9. Intuitive Thinking 4.46 1.39 0.31** 0.32** 0.30** 0.06 0.38** 0.17** 0.29** -0.30** 0.98
10. Educational Background 1.40 0.49 -0.26** -0.32** -0.12* -0.07 -0.14* -0.12* -0.37** 0.70** -0.44**
11. Experience 14.50 8.76 0.01 0.13* 0.70 -0.11 0.08 0.05 0.11 -0.12* 0.14* -0.14*
12. General Satisfaction 3.45 1.43 -0.01 0.10 0.01 0.01 0.19** -0.12* -0.11 -0.05 0.26** -0.12* 0.11 0.76

Figures on the diagonal line represent Cronbach's α *p<0.05; **p<0.01


Table 3: Factor Loadings (First-Order Confirmatory Factor Analysis)

Construct Item Factor Loading T-Value

Overconfidence          over1   0.84****   16.02
                        over2   0.84****   15.86
                        over3   0.67****   12.05
Illusory Correlation    cor1    0.63****    9.73
                        cor2    0.54****    8.33
                        cor3    0.77****   11.65
Illusion of Control     ic1     0.79****   15.46
                        ic2     0.87****   17.91
                        ic3     0.65****   11.90
                        ic4     0.74****   14.07
                        ic5     0.82****   16.49
Base-rate Fallacy       base1   0.58****   10.68
                        base2   1.00       -
Sample Size Fallacy     sn1     0.86****   16.95
                        sn2r    0.72****   13.25
                        sn3r    0.84****   16.30
Intuitive Thinking      es1     1.00****   24.00
                        es2     0.80****   16.40
                        es3     0.99****   23.50
                        es4     0.99****   23.55
                        es5     0.99****   23.48
Rational Thinking       rs1r    0.77***    15.37
                        rs2     0.67***    12.62
                        rs3     0.95***    21.37
                        rs4r    0.97***    22.28

Significance levels are based on unstandardized coefficients *p<0.05; **p<0.025; ***p<0.01; ****p<0.001

r - the item is reversed

χ2 = 421.51; χ2/df = 1.66; GFI = 0.90; CFI = 0.98; NFI = 0.95; NNFI = 0.97; RMSEA = 0.048. Please see the appendix for the meaning of each variable.


Second order factor analysis

To check for the conceptual distinction of the different biases, meaning that the biases illusory correlation and overconfidence emanate from the availability heuristic and that base-rate fallacy, illusion of control, sample size fallacy and regression fallacy emanate from the representativeness heuristic, I conducted a second order factor analysis.

The overall model shows a good fit (Hair et al., 1998): χ2 = 268.52; χ2/df = 2.36; GFI = 0.90; CFI = 0.94; NFI = 0.90; NNFI = 0.92; RMSEA = 0.069.

All constructs (except base-rate fallacy) revealed significant loadings on the assigned second-order factors. The completely standardized loadings of the biases on the representativeness heuristic are: base-rate fallacy (β = 0.12; insignificant); illusion of control (β = 0.43; p<0.001); sample size fallacy (β = 0.55; p<0.001); regression fallacy (β = 0.53; p<0.001). For the availability heuristic: illusory correlation (β = 0.37; p<0.001); overconfidence (β = 0.75; p<0.001).

For three of the constructs, namely base-rate fallacy, illusion of control and illusory correlation, the standardized factor loadings (not reported here) are lower than 0.5, which calls their convergent validity into question (Fornell & Larcker, 1981). Therefore, the conceptual distinction between the biases is not evidenced beyond doubt (see also Appendix A).

Overall Model Fit

The overall model provides a good fit with the theoretically developed hypotheses. However, according to the modification indices provided by Lisrel 8.80, a significant relationship between rational thinking and risk taking propensity had to be added. Therefore, I checked the original model with an additional path from rational thinking to risk taking propensity. This model shows a significantly better fit than the original model (∆χ2 = 5.33; ∆df = 1; p<0.025). The goodness of fit statistics of this model are as follows: χ2 = 1101.79; χ2/df = 2.14; GFI = 0.82; CFI = 0.95; NFI = 0.91; NNFI = 0.94; RMSEA = 0.063. For completeness, I also checked a model with an additional link between intuitive thinking and risk taking propensity. Compared to the original model, the model with an extra link from the intuitive thinking system to risk taking propensity does not improve significantly (∆χ2 = 0.17; ∆df = 1; not significant). Consequently, I will check for mediation by the biases, the results of which are presented later.
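The nested-model comparisons above are chi-square difference tests; as a quick check (a sketch, not the thesis code), the corresponding p-values can be obtained from the χ2 distribution:

from scipy import stats

# Extra path from rational thinking to risk taking propensity
print(stats.chi2.sf(5.33, df=1))   # ~0.021, i.e. significant at p < 0.025

# Extra path from intuitive thinking to risk taking propensity
print(stats.chi2.sf(0.17, df=1))   # ~0.68, not significant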


Results

The main hypotheses of the paper are that intuitive thinking is positively, and rational thinking negatively, associated with the biases. Furthermore, I expect the biases to lead to a higher risk taking propensity. The following sections report the results of the best retrieved model, i.e. the model with an extra path from rational thinking to risk taking propensity.

Biases and risk-taking propensity

Hypothesis 1 finds partial support. Illusory correlation shows a significant relationship with risk taking propensity (β = 0.21; p<0.001), whereas overconfidence is insignificant. Hypothesis 2a is strongly supported by the model. The relationships between base-rate fallacy and risk taking propensity (β = 0.22; p<0.001) and between sample size fallacy and risk taking propensity (β = 0.21; p<0.001) are both significant.

The same holds for hypothesis 2b. The relations of illusion of control (β = 0.30; p<0.001) and regression fallacy (β = 0.28; p<0.001) to risk taking propensity are both highly significant.

Thus, all biases that emanate from the representativeness heuristic show a positive relationship with risk taking propensity, supporting Hypotheses 2a and 2b. The biases emanating from the availability heuristic only partially support Hypothesis 1.
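For illustration, the structural equation for risk taking propensity in the final model can be restated from the completely standardized estimates reported in this section (see also Table 4). This is only a compact summary: control variables are omitted and only the added rational-thinking path enters directly:

\text{RTP} = 0.02\,\text{OC} + 0.21\,\text{IC} + 0.22\,\text{BRF} + 0.21\,\text{SSF} + 0.28\,\text{RF} + 0.30\,\text{IoC} - 0.12\,\text{RAT} + \zeta

with OC = overconfidence, IC = illusory correlation, BRF = base-rate fallacy, SSF = sample size fallacy, RF = regression fallacy, IoC = illusion of control and RAT = rational thinking.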

Thinking systems and biases

Hypothesis 3a is strongly supported. As expected, the intuitive thinking system is positively related to the reliance on biases emanating from the availability heuristic. The relationships with overconfidence (β = 0.23; p<0.001) and with illusory correlation (β = 0.42; p<0.001) are both significant, with illusory correlation exhibiting the strongest association of all biases.

Hypothesis 3b concerns the relationship between intuitive thinking and base-rate fallacy and sample size fallacy. The hypothesis finds partial support: whereas the path between the intuitive thinking system and base-rate fallacy is insignificant, the path to sample size fallacy (β = 0.38; p<0.001) is highly significant.

Hypothesis 3c is fully supported; the paths to both biases are highly significant: regression fallacy (β = 0.19; p<0.01) and illusion of control (β = 0.22; p<0.001).

Hypothesis 4, concerning the relationship between rational thinking and the biases, finds weak support. Only the relationships between rational thinking and overconfidence (β = -0.24; p<0.01), regression fallacy (β = -0.17; p<0.025) and illusion of control (β = -0.17; p<0.025) are significant, whereas the relationships with illusory correlation, base-rate fallacy and sample size fallacy are insignificant.

Mediating role of biases

As already mentioned, the model with the additional path from rational thinking to risk taking propensity shows a significantly better fit than the original model. The additional path (rational thinking to risk taking propensity) turns out to be significantly negative (β = -0.12; p<0.025). When I leave out the paths from rational thinking to the biases and from the biases to risk taking propensity, the model becomes significantly worse (∆χ2 = 176.84; ∆df = 11; p<0.001). Consequently, I conclude that, if theoretically supported, the biases may partially mediate the relationship between the rational thinking system and risk taking propensity.

Furthermore, the model with an extra path from intuitive thinking to risk taking propensity does not show a better fit, since this path is insignificant. When I estimate a model without the paths from intuitive thinking to the biases and from the biases to risk taking propensity, but with a direct link from intuitive thinking to risk taking propensity, the model becomes significantly worse (∆χ2 = 238.3; ∆df = 11; p<0.001), whereas the relationship between intuitive thinking and risk taking propensity becomes significant (β = 0.27; p<0.001). Consequently, the indirect path from the intuitive thinking system to risk taking propensity via the biases appears to be more dominant than the direct link. Thus, the results indicate that, if theoretically supported, the biases might fully mediate the relationship between the intuitive thinking system and risk taking propensity.
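One way to see why the indirect route dominates is to compare a simple product-of-coefficients estimate of the total indirect effect with the direct effect that surfaces once the mediators are removed. The sketch below only juxtaposes the completely standardized point estimates from Table 4; it is an illustration and carries no significance test (that would require, for example, bootstrapped standard errors):

# Completely standardized paths from Table 4 (point estimates only)
intuitive_to_bias = {
    "overconfidence": 0.23, "illusory_correlation": 0.42, "base_rate": 0.07,
    "sample_size": 0.38, "regression": 0.19, "illusion_of_control": 0.22,
}
bias_to_rtp = {
    "overconfidence": 0.02, "illusory_correlation": 0.21, "base_rate": 0.22,
    "sample_size": 0.21, "regression": 0.28, "illusion_of_control": 0.30,
}

# Total indirect effect = sum over mediators of (a-path * b-path)
indirect = sum(intuitive_to_bias[b] * bias_to_rtp[b] for b in intuitive_to_bias)
print(f"total indirect effect of intuitive thinking: {indirect:.2f}")  # ~0.31

# For comparison: the direct effect of 0.27 reported above appears only
# once the mediating biases are dropped from the model.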

Control variables

The control variables turn out not to contribute substantially to the overall model. Of the 18 relations between the three control variables and the six biases, only four paths are significant. Two of them are the relationships of general life satisfaction with regression fallacy (β = -0.21; p<0.01) and with illusion of control (β = -0.26; p<0.001). Another significant path is the negative relation between experience and base-rate fallacy (β = -0.15; p<0.025). The last significant relationship is between educational background and illusion of control (β = -0.21; p<0.01), meaning that people with a business education are more prone to be affected by the illusion of control bias.
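As an illustration of how the thinking systems and the control variables combine for a single bias, the completely standardized equation for illusion of control reads (restated from Table 4; the other biases follow the same pattern with their respective columns):

\text{Illusion of control} = 0.22\,\text{INT} - 0.17\,\text{RAT} - 0.26\,\text{LS} + 0.07\,\text{EXP} - 0.21\,\text{EDU} + \zeta

with INT = intuitive thinking, RAT = rational thinking, LS = general life satisfaction, EXP = experience and EDU = educational background.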


Table 4: Results (standardized solution)

Dependent variables in columns; independent variables in rows.

                            Risk Taking  Over-       Illusory     Base-rate  Sample Size  Regression  Illusion of
Independent variables       Propensity   confidence  Correlation  Fallacy    Fallacy      Fallacy     Control
Overconfidence               0.02
Illusory Correlation         0.21****
Base-rate Fallacy            0.22****
Sample Size Fallacy          0.21****
Regression Fallacy           0.28****
Illusion of Control          0.30****
Intuitive Thinking                        0.23****    0.42****     0.07       0.38****     0.19***     0.22****
Rational Thinking           -0.12**      -0.24****    0.00         0.05      -0.12        -0.17**     -0.17**
General Life Satisfaction                 0.01       -0.11        -0.03       0.08        -0.21***    -0.26****
Experience                                0.07        0.03        -0.15**     0.01         0.04        0.07
Educational Background                   -0.07        0.02        -0.12       0.12         0.05       -0.21***

Significance levels are based on unstandardized coefficients.
*p<0.05; **p<0.025; ***p<0.01; ****p<0.001


Discussion and Theoretical Implications

In this study I examined what determines entrepreneurial risk taking propensity, a concept on which researchers have reported contradictory findings. First of all, I showed that people who rely on heuristics, and are thus affected by cognitive biases, indeed tend to take more risk, either because they assume situations to be controllable or because the information on which their judgments are based is biased.

Only overconfidence does not explain risk taking propensity. This is in line with the findings of Keh, Foo, & Lim (2002), who did not find a relationship between overconfidence and risk perception. The authors attributed this to the fact that overconfidence might not be valid across domains, so that entrepreneurship-specific questions have to be developed to measure overconfidence.

One step back, the results indicate what determines the occurrence of the biases and thus, indirectly, risk taking propensity. I showed that heuristics and cognitive biases are more likely to occur when entrepreneurs operate in the intuitive thinking system; people operating in the intuitive thinking system therefore also tend to be more prone to risk taking. This conclusion does not apply to base-rate fallacy, whose relationship to intuitive thinking is found to be insignificant. Even though insignificant, the path between intuitive thinking and base-rate fallacy indicated a positive relation (β = 0.07; insignificant). In the questionnaire, base rates were given in terms of ratios. Furthermore, one might suggest that since base rates were given explicitly, it was obvious that those base rates were relevant for the final answer. As Cosmides & Tooby (1996) indicate, base-rate fallacy would be more pronounced when, for example, probabilities were given instead of frequencies, as frequencies are more easily observable. Besides, some authors suggest that when base rates are obviously linked to the outcome, base-rate neglect is reduced (Bar-Hillel, 1980; Sloman, Over, Slovak, & Stibel, 2003). Therefore, I suggest that base-rate neglect among intuitive thinkers would have been more extreme, and the relationship thus significant, if (a) probabilities had been given instead of ratios and (b) base rates had not been as obviously linked to the outcome as they were in the questionnaire. Consequently, intuitive thinkers in the holistic and concretive system would be more likely to neglect base rates that are not obviously linked to the outcome, and they would avoid the statistical conversion of probabilities into base rates.
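To make concrete what "using the base rate" entails, the sketch below works through a generic, textbook-style diagnostic problem that is not taken from the questionnaire (the prevalence, hit rate and false-alarm rate are invented for illustration). It shows the same computation once in probability format and once in natural-frequency format, the format that Cosmides & Tooby (1996) argue is easier to process:

# Hypothetical example: a condition with base rate 1 in 1,000,
# a test that detects it 99% of the time and false-alarms 5% of the time.
base_rate, hit_rate, false_alarm = 0.001, 0.99, 0.05

# Probability format: Bayes' rule
p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm
p_condition_given_positive = base_rate * hit_rate / p_positive
print(f"P(condition | positive test) = {p_condition_given_positive:.3f}")  # ~0.019

# Natural-frequency format: out of 100,000 people, 100 have the condition
# (99 of them test positive) and 99,900 do not (4,995 of them test positive).
true_pos, false_pos = 100 * 0.99, 99_900 * 0.05
print(f"frequency answer: {true_pos:.0f} / {true_pos + false_pos:.0f} "
      f"= {true_pos / (true_pos + false_pos):.3f}")

# Ignoring the base rate would suggest an answer close to the 99% hit rate;
# that gap is exactly what base-rate neglect amounts to.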

With regard to rational thinking, the model shows that entrepreneurs operating in the rational thinking system only reduce the occurrence of some cognitive biases. Whereas rational thinking lowers the effect of overconfidence, regression fallacy and illusion of control, rationality does not have an influence on the other three biases: illusory correlation, base-rate fallacy and sample size fallacy.

For base-rate fallacy, an explanation for the insignificant path can be found in the study at hand. I propose that adjusting the estimated probability of a single case derived from the base rates of a population can also be considered rational or statistical logic. Rational thinkers proved in this study that they are less likely to be affected by the regression fallacy and the illusion of control bias. This indicates that they understood that a single case may lie somewhere around a certain mean (the base rate) and that chance plays a role. Therefore, rational thinkers might have used base rates as initial anchors, but then slightly adjusted them in order to incorporate the role of chance and the regression-to-the-mean principle in their judgment. Consequently, the use of base rates and the subsequent adjustment for chance and regression to the mean might counteract each other, which would explain the insignificant relationship. I propose that future questions regarding base-rate fallacy should include a confidence interval, especially in field research, in order to take the role of chance and the regression-to-the-mean principle into account from the beginning.
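How such a confidence interval around a given base rate could look is sketched below, using a simple normal approximation for a proportion. The sample values are invented, and this is only one of several possible approaches (an exact binomial interval would be an alternative):

import math

# Hypothetical base rate: 30 occurrences observed in a reference group of 200 cases
successes, n = 30, 200
p_hat = successes / n

# 95% normal-approximation (Wald) interval: p_hat +/- 1.96 * sqrt(p_hat*(1-p_hat)/n)
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - half_width, p_hat + half_width
print(f"base rate = {p_hat:.2f}, 95% CI = [{lower:.2f}, {upper:.2f}]")  # ~[0.10, 0.20]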

The insignificant path between rational thinking and illusory correlation can be explained by consulting associative learning theory. Here, authors suggested that the more learning has occurred, the more precisely people judge the occurrence of correlations between events (Shanks & Dickinson, 1987; Vallée-Tourangeau, Hollingsworth, & Murphy, 1998). This indicates that illusory correlation has to be controlled for learning, and that more domain-specific questions have to be formulated to measure it. If someone has no knowledge in a specific domain, answers to questions measuring illusory correlation might turn into pure guessing. The questionnaire included statements from completely different fields, such as pet nutrition, licensing and entrepreneurship. Thus, whereas entrepreneurs are likely to have learning in the domain of entrepreneurship, it cannot be assumed that all entrepreneurs also have learning in, for example, pet nutrition. Consequently, even rationally thinking entrepreneurs are then forced to speculate when judging the extent to which they agree or disagree with a statement. Whereas entrepreneurs would seem to be less affected by illusory correlation on statements from the field of entrepreneurship, because they possess learning in this field, they might be more affected by it in a field such as pet nutrition, where they are less likely to have information or learning.

For sample size fallacy, it is difficult to find a conceptual explanation for the insignificant relationship. However, one might note that the path (β = -0.12) is negative and would reach significance at a level of α = 0.1.
