Examining the Accuracy of Personality Descriptions: a Bayesian Reanalysis and Replication Study

Academic year: 2021


Tim A. Draws

Department of Psychology, Psychological Methods, University of Amsterdam, The Netherlands

Correspondence concerning this article should be addressed to: drawstim@gmail.com.

Word count: 6322.

Abstract

Two earlier studies on the topic of the accuracy of astrological and psychological personality descriptions draw contrary conclusions. This article provides a Bayesian reanalysis of these two studies, as well as a preregistered, direct replication of one of them. The reanalysis confirms the conclusions drawn by both of the two studies, but a comparison of the Bayes factors indicates that psychological descriptions might be more accurate than astrological descriptions. In the replication study, astrological and psychological personality descriptions were created for 29 participants. Each description was subsequently matched with a random other description and participants were instructed to select the description they believed to be about them. The results show that participants could select their personality description more reliably when it was based on a psychological personality test than when it was based on an astrological natal chart (BFr0 = 1884). This indicates that psychological personality tests form a more accurate basis for personality descriptions than astrological natal charts.

Keywords: NEO-FFI, Astrological Natal Chart, Bayesian statistics, Bayesian learning, Bayesian binomial test, Replication

Astrologers make a bold promise: knowing the exact hour, date, and location of your birth, they claim to be able to provide you with an accurate description of your personality.


Given these data, the astrologer draws a map of the sky from the perspective of the birth location at the time of birth. This map, also called an astrological natal chart (ANC), enables the astrologer to interpret the positions of the planets in the different zodiac signs and, so the claim goes, to derive the person's personality from that interpretation (The Basics of Astrology: What is a Natal Chart?, 2013). Failing to provide scientific evidence for the efficacy of their method, astrologers are often confronted with skepticism by the scientific community (Newall, 2011). Psychologists in particular are often critical of astrological methods, as they claim to have developed the scientific way of measuring personality. Psychological personality tests are based on psychological theories that are, for the most part, grounded in scientific observations and evidence. Assessments such as the Myers-Briggs Type Indicator (MBTI), the Minnesota Multiphasic Personality Inventory (MMPI), or the Revised NEO Personality Inventory (NEO-PI-R) are commonly used by professionals in different fields (e.g., psychological research, clinical psychology, or human resources). This indicates that these types of tests are far more widely accepted as a method of determining someone's personality than ANCs. Rightfully so?

Carlson (1985) tested how accurately people can distinguish astrological and psychological personality descriptions of themselves from descriptions of others. In his research, three different personality descriptions generated by an ANC were handed out to each participant. Participants were then asked to rank the descriptions according to how well they fit their true personality. One of the descriptions was the true ANC-based personality description of the participant, whereas the other two belonged to other participants. Carlson also handed out three different personality descriptions generated by the California Personality Inventory (CPI), a psychological personality test. Again, only one of the three descriptions was the result of the CPI that the participant had filled out beforehand, whereas the other two belonged to other participants. Carlson showed the setup to psychologists as well as astrologers and made sure they agreed to the conditions of his study. It was found that participants could select neither their true astrological nor their true psychological personality description as their first choice above chance level. Carlson concluded that there is no difference between ANCs and psychological personality tests in this respect, as neither seems able to produce personality descriptions that people can reliably recognize themselves in. Carlson's paper not only offered evidence against claims that astrology can provide accurate personality descriptions, but also raised the question of whether psychological personality tests actually work any better than ANCs in that respect.

This led Wyman and Vyse (2008) to perform a conceptual replication of Carlson's study, in which they collected relevant astrological data from participants and let them take the NEO Five-Factor Inventory (NEO-FFI; Costa & McCrae, 1985), a psychological personality test with good psychometric properties (McCrae & Costa, 2010). Each participant was then given two personality descriptions generated by an ANC. One of the two descriptions was the result of the ANC for that participant, whereas the other belonged to another participant. The task for each participant was to select their true description. The same procedure was followed for the NEO-FFI. Wyman and Vyse found that participants could not select their true astrological result above chance level. Participants were, however, able to do so for the NEO-FFI descriptions. The researchers concluded that psychological personality tests produce more recognizable personality descriptions than ANCs.

In sum, the authors of the two existing papers on this topic drew contrary conclusions. Both studies found that people cannot select their true personality description as generated by an astrological natal chart, but there is no consensus on whether people can do so for psychological personality tests, or whether there is a difference between the two assessments. This study therefore had two purposes. First, we aimed to reanalyze and check the results of the two studies described above in order to validate the conclusions drawn by their authors. Second, we aimed to perform a preregistered, direct replication of the Wyman and Vyse (2008) study. The goal of this was to organize the existing evidence in order to come closer to a final conclusion on the difference between the efficacy of the astrological and the psychological way of describing a personality.

Issues with Earlier Research on this Topic

A problem with the two studies discussed above is that in both cases, the authors prematurely concluded that there either was or was not a difference between the efficacy of the psychological and the astrological method of describing someone's personality. They based their conclusions on the individual results of the two personality measures without actually testing for a difference between the two. It is not possible to draw conclusions about the difference between the efficacy of two methods without actually testing for that difference (Gelman & Stern, 2006); a separate test would have to be conducted before such a conclusion can be made.

Another issue lies in the statistical paradigm of frequentism, within which the authors of both papers operated. Suppose we wanted to know whether a certain binomial probability π is equal to chance level. For instance, when giving participants a number of personality descriptions, only one of which belongs to them, they can either make the right choice (usually coded as a '1' in the data) or the wrong choice (usually coded as a '0'). Given two options to choose from, chance level lies at 0.5. Therefore, 0.5 will serve as the test value π0. To analyze whether π is equal to π0, one may conduct a binomial test. The null hypothesis in such a binomial test states that π is equal to π0, whereas the alternative hypothesis states that π is not equal to π0. The frequentist way of answering this


research question is to compute the probability of the obtained data and all more extreme cases under the null hypothesis. This can be done by using the probability mass function of the binomial distribution, which is given by

\[ p(\text{data} \mid \pi_0) = \binom{n}{k}\, \pi_0^{k} (1 - \pi_0)^{n-k}, \qquad (1) \]

where k is the number of successes in n trials. The sum of the resulting probabilities is called the p-value. If the p-value is lower than a specified significance threshold, the frequentist calls the result 'significant'. It is common practice in the social sciences to place the significance threshold (also called the 'alpha level') at 0.05; p-values lower than 0.05 are therefore usually called 'significant'.¹ Benjamin et al. (2018) point out that this threshold is too high, as p-values that are not much lower than 0.05 provide weak evidence, if any. However, even when using lower significance thresholds, the frequentist paradigm has several limitations, which will be discussed in the following paragraph. Bayesian statistics is an alternative paradigm that can alleviate some of these issues. With respect to this study, there are two key reasons why Bayesian methods should generally be preferred over frequentist analyses.
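To make the procedure concrete, the two-sided binomial test sketched above can be written in a few lines of Python using only the standard library. The function names are my own, not taken from the studies discussed here:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Binomial probability mass function, as in Equation 1."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_test_p_value(k: int, n: int, p0: float = 0.5) -> float:
    """Two-sided p-value: sum the probabilities of all outcomes that are
    at most as likely as the observed outcome under the null hypothesis."""
    observed = binom_pmf(k, n, p0)
    return sum(binom_pmf(j, n, p0) for j in range(n + 1)
               if binom_pmf(j, n, p0) <= observed * (1 + 1e-9))

# 35 correct choices out of 100, with chance level pi0 = 0.5:
p = binom_test_p_value(35, 100)   # about 0.004, below the usual 0.05 threshold
```

Note that this delivers only a p-value; as discussed next, it says nothing about how likely either hypothesis is.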

First, it is impossible to quantify evidence in favor of the null hypothesis using frequentist analyses. In fact, the p-value does not tell us anything about how likely it is that a hypothesis is true: neither does a significant p-value confirm that the alternative hypothesis is true, nor does a non-significant p-value tell us that the null hypothesis is true (Greenland et al., 2016). In Bayesian statistics, on the other hand, it is possible to directly compare the evidence in favor of each of the two hypotheses. That comparison allows for a much richer interpretation of the data, as the strength of the evidence is quantified on a continuous scale instead of being categorized as 'significant' or 'not significant'. Recent studies have demonstrated how this feature of Bayesian statistics can provide a much clearer perspective on the evidence that the data contain, which may help researchers avoid false conclusions (see e.g., Wagenmakers, Wetzels, Borsboom, & Van Der Maas, 2011). Second, Bayesian methods incorporate prior knowledge. That means that instead of having to perform meta-analyses and trying to piece together an image of reality from separate significant or non-significant results, Bayesians can simply add their data to earlier work. Viewed in the light of prior knowledge, the same evidence can appear far more, or far less, convincing than it does on its own. In this study, we aimed to shed light on which of two hypotheses is more likely to be true by adding to the evidence provided by earlier research. That is why this study

has been conducted using Bayesian methods.

¹ Note that neither Carlson nor Wyman and Vyse conducted binomial tests. However, they each performed

In the following, I will first provide a deeper introduction to the Bayesian binomial test and other Bayesian concepts used in this study. Next, reanalyses of the results from Carlson (1985) and Wyman and Vyse (2008) will be presented, after which the replication study is described. Finally, I will discuss the findings, including their limitations and implications.

The Bayesian Binomial Test

The Bayesian framework enables researchers to perform two different kinds of analyses: parameter estimation and null hypothesis testing. In this section, I will outline all basic Bayesian concepts connected to these two fundamental research questions, using the binomial experiment as an example. Furthermore, I will expand on some more advanced Bayesian concepts that are relevant for this study. Readers already familiar with Bayesian statistics can skip ahead to the 'Reanalysis' section.

Bayesian estimation of a single parameter

Bayesian statistics is all about probabilities. One of my lecturers at the University of Amsterdam, Dr. Maarten Marsman, used to say to us: "When you think of me, you will think of coin flips. When you think of coin flips, you will think of me." The story behind this statement is that he used the coin flip as an example to explain all kinds of laws and principles of probability theory; it came up at least once per lecture.² The coin flip is a great example because it is a very straightforward binomial probability experiment. Assuming that a coin is thrown under the exact same circumstances every single time, the coin has a certain fixed probability of landing heads. For this subsection, I will use the probability of a coin landing heads as an example of a parameter π that we would like to estimate. Consider a basic binomial experiment in which we throw the coin n times. In order to estimate the coin's π, we can make use of Bayes' theorem, which lies at the core of all Bayesian analyses. In Bayesian statistics, we begin by expressing our prior beliefs about the parameter in the prior, which is then updated by the data in order to form the posterior. Specifically, Bayes' theorem states that the posterior is obtained by multiplying the prior with the quotient of the likelihood and the marginal likelihood of the data. Thus,

\[ \text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{marginal likelihood}}. \]


The prior distribution. Bayesian parameter estimation begins by expressing one's prior beliefs about π in the form of a prior distribution p(π). For a binomial probability such as π, this can be done by defining a beta distribution. The probability density function of the beta distribution is given by

\[ p(\pi) = \frac{\pi^{\alpha-1}(1-\pi)^{\beta-1}}{\int_0^1 \pi^{\alpha-1}(1-\pi)^{\beta-1}\,d\pi} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\;\pi^{\alpha-1}(1-\pi)^{\beta-1} = \frac{1}{B(\alpha,\beta)}\;\pi^{\alpha-1}(1-\pi)^{\beta-1}, \]

where Γ(x) is the gamma function, B(α, β) is the beta function, and α and β serve as parameters that shape the beta distribution. In the following, any beta distribution of π with shape parameters α and β will be denoted by

π ∼ Beta(α, β).

When expressing our prior beliefs, we can tweak α and β according to what we know about π before collecting the data. Here, α is the number of successes (heads) and β the number of failures (tails) prior to conducting the experiment. Consequently, the more certain we are about our prior beliefs, the higher α and β will be. A prior distribution defined as π ∼ Beta(47, 47), for instance, states that we are 95% confident that π lies between 0.4 and 0.6, with π = 0.5 regarded as the most probable value. If we have no prior certainty about π whatsoever, we may opt for a so-called uniform prior, which is given by

π ∼ Beta(1, 1). (2)

The uniform prior assigns a density of 1 to all values from 0 to 1, which is to say that all possible values of π are regarded as equally likely. A common misconception about the uniform prior is that it rids Bayes' theorem of the subjectivity that the prior would otherwise introduce. However, the uniform prior does incorporate prior information, namely that all values are equally likely to be π. In some situations, for example when dealing with a seemingly fair coin, this can even be quite a bold statement. On the other hand, in situations where we cannot confidently say anything about π before data is obtained, the uniform prior may be the only sensible option. For the sake of explanation in this subsection, we will stick to the uniform prior as given by Formula 2. See Figure 1 for a graphical illustration of a Bayesian parameter estimation using a uniform prior.

The likelihood. The Bayesian framework allows us to update our prior beliefs using the data that we obtained in our experiment. In order to do this, we need to compute the likelihood function of the data, which gives the probability that the obtained data would occur under each possible value of π. In our case of the coin flip, the likelihood function p(data|π) is defined by a binomial distribution as given by

\[ p(\text{data} \mid \pi) = \binom{n}{k}\, \pi^{k} (1 - \pi)^{n-k}, \]

which is almost the same as Equation 1, with the exception that the likelihood function considers all possible values of π instead of only one value of interest. We can use the likelihood function of the obtained data to update our prior beliefs. In order for the update to result in a true probability density function, however, we have to make sure that the result integrates to 1. That is why we divide the likelihood by the marginal likelihood of the data. The quotient of the likelihood and the marginal likelihood, by which we update the prior distribution, is often referred to as the predictive updating factor.

The marginal likelihood. The marginal likelihood p(data) is a single number that reflects the likelihood of the data with π being marginalized. That is, we calculate the probability of the data with π being averaged over all its possible values. In order to marginalize π, we need to integrate it out of the equation:

\[ p(\text{data}) = \int_0^1 p(\text{data} \mid \pi)\, p(\pi)\, d\pi. \]

Bayes' theorem and the posterior distribution. We can now use Bayes' theorem in order to update our prior distribution with the predictive updating factor, which yields the posterior distribution p(π|data). The posterior distribution reflects our beliefs about π after having observed the data. The posterior is a beta distribution given by

π ∼ Beta(α + k, β + (n − k)).

Suppose we flipped the coin 100 times, 35 of which landed heads. Taking the uniform prior into account, our posterior would then be given by π ∼ Beta(1 + 35, 1 + 65) (see Figure 1). Thus, mathematically, Bayes' theorem for the parameter estimation in a binomial test can be expressed as follows:

\[ \underbrace{p(\pi \mid \text{data})}_{\text{posterior}} = \frac{\overbrace{p(\text{data} \mid \pi)}^{\text{likelihood}} \times \overbrace{p(\pi)}^{\text{prior}}}{\underbrace{p(\text{data})}_{\text{marginal likelihood}}}. \]


Figure 1. Illustration of a Bayesian parameter estimation. A uniform prior distribution is updated by the predictive updating factor in order to obtain a posterior distribution. Given 35 heads in 100 coin flips, the posterior distribution conveys that the most likely value of π is 0.35. The 95% credible interval depicted above the posterior distribution informs us that 95% of the density in the posterior distribution is concentrated between 0.26 and 0.45 (rounded numbers). That means that we can be 95% confident that π lies within this range.

Bayesian parameter estimation for multiple binomials

In the above subsection I explained how to estimate π for a single parameter, using a coin flip as an example. However, in some situations we might be dealing with multiple binomials. Suppose we would like to estimate the π-parameters for i different coins. Instead of performing individual parameter estimations, we have the option to express them together as joint probabilities.

The joint prior density. First, we define a prior distribution for each of the coins as described in the previous subsection. Assuming that all coins are independent, we can multiply their prior distributions in order to form the joint prior density p(πjoint), which is then given by

p(πjoint) = p(π1) × ... × p(πi).

The joint likelihood. Similarly, we can multiply the independent likelihoods in order to obtain the joint likelihood p(datajoint|πjoint), which is then given by

p(datajoint|πjoint) = p(data1|π1) × ... × p(datai|πi).

The joint marginal likelihood. In the same way as when dealing with a single parameter, the joint marginal likelihood p(datajoint) is the integral of the product of the joint prior and the joint likelihood:

\[ p(\text{data}_{\text{joint}}) = \int_0^1 \!\cdots\! \int_0^1 p(\text{data}_{\text{joint}} \mid \pi_{\text{joint}})\; p(\pi_{\text{joint}})\; d\pi_1 \cdots d\pi_i. \]

Bayes’ rule and the joint posterior density. Finally, the joint posterior density is given by the product of the individual posterior distributions of all coins:

p(πjoint|datajoint) = p(π1|data1) × ... × p(πi|datai),

and can also be obtained by using Bayes' theorem for joint probabilities, which is given by:

\[ \underbrace{p(\pi_{\text{joint}} \mid \text{data}_{\text{joint}})}_{\text{joint posterior}} = \frac{\overbrace{p(\text{data}_{\text{joint}} \mid \pi_{\text{joint}})}^{\text{joint likelihood}} \times \overbrace{p(\pi_{\text{joint}})}^{\text{joint prior}}}{\underbrace{p(\text{data}_{\text{joint}})}_{\text{joint marginal likelihood}}}. \]


Bayesian hypothesis testing with a single binomial

Most of the classical analyses have their Bayesian counterpart, and so does the binomial test described in the introduction. Just like the frequentist version, the Bayesian binomial test is a standard null hypothesis test with a test value π0. In its most basic form, two different models are considered. The null hypothesis, H0, is that π is equal to π0. If we were to test for the fairness of a coin, for instance, we would set π0 to 0.5. The alternative hypothesis, He, is that π is not equal to π0. That is,

H0: π = π0

He: π ∼ Beta(α, β).

The 'e' in He stands for 'encompassing': He encompasses H0 in the sense that it surrounds π0 with infinitely many other possible π-values between 0 and 1. In other words, H0 is nested in He.

For null hypothesis tests we make use of an adapted form of Bayes’ theorem that compares the evidence in favor of one hypothesis to the evidence in favor of the other. Theoretically, the Bayesian hypothesis test is given by

posterior model odds = Bayes factor × prior model odds

Prior and posterior model odds. The first thing to do in any Bayesian hypothesis test is to define the prior model odds. The prior model odds are a ratio of the prior model probabilities, here p(H0) and p(He). By default we regard both hypotheses as equally likely, meaning that the prior model odds are equal to 1. However, in the words of Carl Sagan: "Extraordinary claims require extraordinary evidence." That is, if we are very skeptical about one of the hypotheses, we may express that in the prior model odds.

The idea of the Bayesian hypothesis test is to update the prior model odds using the data that is obtained. This forms the posterior model odds, which are the quotient of the posterior model probabilities, here p(H0|data) and p(He|data). The posterior model odds are therefore also a ratio of how likely the hypotheses are, but given the data. Note that this is precisely what researchers are usually interested in: which of two hypotheses is more likely to be true, given the data. The only ingredient missing in order to make posterior odds out of prior odds is the Bayes factor.


The Bayes factor. The Bayes factor is a ratio of the marginal likelihoods of the data under the two hypotheses. Given this Bayes factor, we can compute the posterior model odds. Consider again the example of the coin flip. Since all prior mass centers at the point of interest for H0, the marginal likelihood of the data under H0, p(data|H0), is reduced to the likelihood function evaluated at π0 as given by Formula 1. The marginal likelihood of the data under He is given by

\[ p(\text{data} \mid H_e) = \binom{n}{k} \frac{1}{B(\alpha, \beta)} \int_0^1 \pi^{\alpha+k-1} (1 - \pi)^{\beta+n-k-1}\, d\pi = \binom{n}{k} \frac{B(\alpha + k,\, \beta + n - k)}{B(\alpha, \beta)}, \]

where α and β are the parameters of the beta distribution that reflects the prior beliefs about π. The Bayes factor that quantifies evidence in favor of H0 is therefore given by

\[ \mathrm{BF}_{0e} = \frac{p(\text{data} \mid H_0)}{p(\text{data} \mid H_e)} \qquad (3) \]

\[ \phantom{\mathrm{BF}_{0e}} = \frac{\binom{n}{k}\, \pi_0^{k} (1 - \pi_0)^{n-k}}{\binom{n}{k} \frac{1}{B(\alpha, \beta)} B(\alpha + k,\, \beta + n - k)} = \frac{\pi_0^{k} (1 - \pi_0)^{n-k}}{B(\alpha + k,\, \beta + n - k) \,/\, B(\alpha, \beta)}. \]

In sum, the Bayesian hypothesis test can be expressed in mathematical terms as

\[ \underbrace{\frac{p(H_0 \mid \text{data})}{p(H_e \mid \text{data})}}_{\text{posterior odds}} = \underbrace{\frac{p(\text{data} \mid H_0)}{p(\text{data} \mid H_e)}}_{\text{Bayes factor}} \times \underbrace{\frac{p(H_0)}{p(H_e)}}_{\text{prior odds}}. \]
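With a uniform prior (α = β = 1), Formula 3 can be evaluated directly, working on a log scale to keep the beta functions from overflowing. A minimal sketch reusing the running example of 35 heads in 100 flips (the helper names are mine):

```python
from math import lgamma, log, exp

def log_beta(a: float, b: float) -> float:
    """log B(a, b) via log-gamma, to avoid overflow for large counts."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf_0e(k: int, n: int, pi0: float = 0.5, a: float = 1.0, b: float = 1.0) -> float:
    """Formula 3: BF_0e = pi0^k (1-pi0)^(n-k) / [B(a+k, b+n-k) / B(a, b)]."""
    log_num = k * log(pi0) + (n - k) * log(1 - pi0)
    log_den = log_beta(a + k, b + n - k) - log_beta(a, b)
    return exp(log_num - log_den)

bf = bf_0e(35, 100)   # about 0.087, i.e. evidence against H0
bf_e0 = 1 / bf        # about 11, in line with the value reported for Figure 2
```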

The Savage-Dickey density ratio. The Savage-Dickey density ratio, first described by Dickey, Lientz, et al. (1970), is an alternative way to calculate Bayes factors. According to the authors of that paper, it was Leonard "Jimmy" Savage who discovered that Formula 3 can be rewritten in a way that makes for a much easier calculation of the Bayes factor.

The issue with the analytical method of calculating Bayes factors is that it usually involves evaluating complicated and possibly high-dimensional integrals. What the Savage-Dickey density ratio proposes is that, instead of integrating out π, we can simply compare

Figure 2. Illustration of a Bayesian binomial test with a test value of 0.5. The gray dotted line shows a uniform prior that is updated by the data to form the posterior distribution depicted by the black solid line. The data are the same as in Figure 1. Using the Savage-Dickey density ratio, we can compare the heights of the prior and posterior distributions at π0. This height is 1 for the prior distribution, and 0.09 (rounded) for the posterior distribution. The resulting Bayes factor shows strong evidence in favor of He (BFe0 = 11.11) (see Appendix B for a guideline on how to interpret Bayes factors).

the heights of the prior and the posterior density functions at π0. A more detailed description and mathematical derivation of the Savage-Dickey density ratio is presented in Appendix A. Described in terms of the Savage-Dickey density ratio, the Bayes factor is given by

\[ \mathrm{BF}_{0e} = \frac{p(\pi = \pi_0 \mid \text{data}, H_e)}{p(\pi = \pi_0 \mid H_e)}. \]

Conceptually, the Bayes factor reflects the change in odds that is induced by the data (Lavine & Schervish, 1999). The Bayes factor BF0e quantifies evidence in favor of H0. Therefore, if the posterior assigns a higher density to π0 than the prior does, the Bayes factor will provide evidence in favor of H0. Conversely, if the posterior assigns a lower density to π0 than the prior, the Bayes factor will provide evidence in favor of He. See Figure 2 for an illustration of the Savage-Dickey density ratio.
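The Savage-Dickey route reduces the whole calculation to evaluating two densities at π0. A sketch for the Figure 2 example (uniform prior, 35 heads in 100 flips; the `beta_pdf` helper is my own):

```python
from math import lgamma, exp

def beta_pdf(x: float, a: float, b: float) -> float:
    """Density of the Beta(a, b) distribution."""
    return exp(lgamma(a + b) - lgamma(a) - lgamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

k, n, pi0 = 35, 100, 0.5
prior_height = beta_pdf(pi0, 1, 1)               # 1.0 under the uniform prior
post_height = beta_pdf(pi0, 1 + k, 1 + n - k)    # about 0.09, as in Figure 2

bf_0e = post_height / prior_height               # about 0.087
bf_e0 = 1 / bf_0e                                # about 11: strong evidence for He
```

This agrees with the analytical route via Formula 3, which is exactly the point of the Savage-Dickey density ratio.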

Bayesian hypothesis testing with multiple binomials

Joint prior and posterior model odds. In the same way that we can calculate joint prior distributions, we can compute joint prior and posterior model odds by multiplying the individual model odds. That means that if we assign the default ratio of 1 to all prior model odds, the joint prior model odds will also be equal to 1. To go from joint prior model odds to joint posterior model odds, we need a joint Bayes factor.

The joint Bayes factor. The joint Bayes factor is calculated by multiplying the individual Bayes factors. As mentioned before, the Bayes factor is the ratio of the marginal likelihoods of the data under the two hypotheses. Therefore, a way to compute the joint Bayes factor is to take the quotient of the joint marginal likelihoods. The joint marginal likelihood under H0 is given by

\[ p(\text{data}_{\text{joint}} \mid H_0) = \binom{n_1}{k_1} \times \ldots \times \binom{n_i}{k_i} \times \frac{B(k_+ + \alpha,\; n_+ - k_+ + \beta)}{B(\alpha, \beta)}, \]

where k+ is the sum of all successes and n+ is the sum of all trials across the individual datasets. The joint marginal likelihood under He is given by

\[ p(\text{data}_{\text{joint}} \mid H_e) = \binom{n_1}{k_1} \frac{B(k_1 + \alpha,\; n_1 - k_1 + \beta)}{B(\alpha, \beta)} \times \ldots \times \binom{n_i}{k_i} \frac{B(k_i + \alpha,\; n_i - k_i + \beta)}{B(\alpha, \beta)}. \]

Therefore the joint Bayes factor is given by

\[ \mathrm{BF}_{0e} = \frac{p(\text{data}_{\text{joint}} \mid H_0)}{p(\text{data}_{\text{joint}} \mid H_e)}. \]
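In the quotient of the two joint marginal likelihoods above, the binomial coefficients appear in both numerator and denominator and cancel, so the joint Bayes factor reduces to beta functions, which are safest to evaluate on a log scale. A sketch with hypothetical counts for two coins (the function names are mine):

```python
from math import lgamma, exp

def log_beta(a: float, b: float) -> float:
    """log B(a, b) via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def joint_bf_0e(ks, ns, a=1.0, b=1.0):
    """Quotient of the two joint marginal likelihoods given above;
    the binomial coefficients cancel between numerator and denominator."""
    k_plus, n_plus = sum(ks), sum(ns)
    log_h0 = log_beta(k_plus + a, n_plus - k_plus + b) - log_beta(a, b)
    log_he = sum(log_beta(k + a, n - k + b) - log_beta(a, b)
                 for k, n in zip(ks, ns))
    return exp(log_h0 - log_he)

# With a single dataset the two formulas coincide, so the Bayes factor is 1:
assert abs(joint_bf_0e([5], [10]) - 1.0) < 1e-12

# Two coins with identical hypothetical counts:
bf = joint_bf_0e([5, 5], [10, 10])   # about 2
```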

Constrained hypotheses. In science, we often deal with equality-constrained hypotheses. So-called point-null hypotheses, for instance, such as H0 described above, state equality constraints (here, that π = π0). Against such an equality-constrained hypothesis, researchers usually test an encompassing hypothesis such as He.

However, sometimes it does not make much sense for π to be lower or higher than a certain number. Consider, for instance, a binomial test on a simple multiple-choice test with two answer options, only one of which is correct. Absence of knowledge would lead to guessing, which would mean that π is 0.5 if everyone is operating at chance level. In this case, we may test an equality-constrained null hypothesis H0 as described above against an inequality-constrained, restricted hypothesis Hr, which could state that π > 0.5.

The encompassing prior approach. Calculating Bayes factors for model comparisons involving inequality-constrained models such as Hr often runs into computational problems, especially when the models in question are multidimensional. These issues largely result from the fact that inequality-constrained models only cover a certain portion of the probability space ranging from 0 to 1. Moreover, it is not a given that H0 is nested in Hr in such a situation, which means that we cannot rely on the Savage-Dickey density ratio here. However, we can make use of a method called the encompassing prior approach, as described in Klugkist, Kato, and Hoijtink (2005). The idea behind the encompassing prior approach is that when dealing with inequality constraints in model comparisons, we introduce a third, encompassing hypothesis He that augments both Hr and H0. He enables us to compute individual Bayes factors for the two hypotheses we are interested in, and then compare them to each other.

Figure 3 provides an illustration of the encompassing prior approach with uniform priors. Since H0 is a point-null hypothesis that is nested in He, BF0e can be calculated using the Savage-Dickey density ratio. To obtain BFer, we have to divide the marginal likelihood of the data under He by the marginal likelihood of the data under Hr. Because Hr is neither a point-null hypothesis, nor does it cover the complete probability space from 0 to 1, the marginal likelihood of the data under Hr is quite difficult to compute analytically. However, because Hr is nested in He, the marginal likelihoods necessary to compute BFer can be approximated by drawing samples from the prior and posterior distribution of He in a simulation (Klugkist et al., 2005). This works as follows. The Bayes factor that quantifies evidence in favor of He is given by the quotient of the marginal likelihoods of the compared models:

\[ \mathrm{BF}_{er} = \frac{p(\text{data} \mid H_e)}{p(\text{data} \mid H_r)}. \qquad (4) \]

Rewriting these marginal likelihoods using Bayes’ rule, we get

\[ p(\text{data} \mid H_e) = \frac{p(\text{data} \mid \pi) \times p(\pi \mid H_e)}{p(\pi \mid \text{data}, H_e)}, \qquad p(\text{data} \mid H_r) = \frac{p(\text{data} \mid \pi) \times p(\pi \mid H_r)}{p(\pi \mid \text{data}, H_r)}. \]


Plugging these into Formula 4 and simplifying, we get

\[ \mathrm{BF}_{er} = \frac{\dfrac{p(\text{data} \mid \pi) \times p(\pi \mid H_e)}{p(\pi \mid \text{data}, H_e)}}{\dfrac{p(\text{data} \mid \pi) \times p(\pi \mid H_r)}{p(\pi \mid \text{data}, H_r)}} = \frac{p(\pi \mid H_e) \times p(\pi \mid \text{data}, H_r)}{p(\pi \mid H_r) \times p(\pi \mid \text{data}, H_e)}. \qquad (5) \]

What remains after simplifying the equation are the priors and posteriors of He and Hr. Whereas He covers the whole probability space and can therefore be evaluated at every possible value of π with relative ease, it can become quite tricky to do the same for Hr. An alternative method is to approximate the Bayes factor by drawing samples in a simulation. Because Hr is nested in He, we can rewrite the prior and the posterior of Hr as

\[ p(\pi \mid \text{data}, H_r) = d_r \times p(\pi \mid \text{data}, H_e)\, I_{H_r}, \qquad (6) \]
\[ p(\pi \mid H_r) = c_r \times p(\pi \mid H_e)\, I_{H_r}, \qquad (7) \]

where 1/dr is the proportion of posterior samples that obey the constraint, 1/cr is the proportion of prior samples that obey the constraint, and IHr is an indicator function that returns 1 if π obeys the constraint and 0 if it does not. Plugging Formulas 6 and 7 into Formula 5, we get

\[ \mathrm{BF}_{er} = \frac{p(\pi \mid H_e) \times d_r\, p(\pi \mid \text{data}, H_e)\, I_{H_r}}{c_r\, p(\pi \mid H_e)\, I_{H_r} \times p(\pi \mid \text{data}, H_e)} = \frac{d_r}{c_r} = \frac{1/c_r}{1/d_r}. \]

That is, we collect data to perform a Bayesian parameter estimation of π, using He as the prior. We then draw a number of samples from this prior and the resulting posterior distribution in a simulation, for instance by performing Markov chain Monte Carlo sampling (van Ravenzwaaij, Cassey, & Brown, 2016). To obtain BFer, we simply divide the proportion of prior samples that obey the constraint by the proportion of posterior samples that obey the constraint. If the true π does indeed obey the constraint, the posterior distribution of π is likely to contain a greater proportion of samples that obey the constraint than its prior distribution, as the data should tend to obey the constraint given in Hr. Consequently, BFer would be low, whereas BFre would be high.
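The sampling recipe just described can be sketched with plain independent beta draws; for this one-dimensional model the posterior can be sampled directly, so MCMC is not needed. The counts are hypothetical:

```python
import random

random.seed(2024)
N = 200_000
k, n = 35, 100      # hypothetical data: 35 successes in 100 trials
a, b = 1, 1         # uniform encompassing prior for He

prior_draws = [random.betavariate(a, b) for _ in range(N)]
post_draws = [random.betavariate(a + k, b + n - k) for _ in range(N)]

# Proportions of samples that obey the constraint pi > 0.5 stated by Hr:
one_over_c = sum(x > 0.5 for x in prior_draws) / N   # about 0.5
one_over_d = sum(x > 0.5 for x in post_draws) / N    # small: the data point to pi < 0.5

bf_er = one_over_c / one_over_d   # large: the data favor He over Hr here
bf_re = 1 / bf_er
```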



Figure 3. An illustration of the encompassing prior approach with its three hypotheses, He, H0, and Hr. He has a uniform prior. H0 is a point-null hypothesis that states that π = π0 = 0.5. Hr, depicted by the dashed line, is a restricted hypothesis that states that π > π0. The prior of Hr is also uniform in the sense that it regards all values that obey its constraint as equally likely. Note that compared to He, the density of the values that obey the constraint is twice as high. The reason for this is that all values below 0.5, which is exactly half of the probability space, have been cut off, pushing the density of the remaining values up by a factor of 2, since a probability distribution has to integrate to 1. The higher densities for all values that obey the constraint in Hr are a good example of another great feature inherent to Bayesian statistics: making hypotheses more specific is automatically rewarded with higher density on the remaining area, effectively leading to more distinctive Bayes factors depending on whether the data is likely under Hr or not.

Having computed BF0e and BFer, the multiplication property of the Bayes factor allows us to multiply them in order to obtain the Bayes factor that compares evidence in favor of H0 with evidence in favor of Hr, as in

BF0r = BF0e× BFer.


With this knowledge at hand, we can move ahead to the reanalysis of the results of the studies discussed above.

Reanalysis of Previous Studies

Carlson (1985) tested whether people could identify personality descriptions of themselves generated by either the CPI or an ANC above chance level. He presented three different CPI personality descriptions to 56 participants and three different personality descriptions generated from an ANC to another 83 participants. 25 of the 56 participants correctly selected their own CPI result. 28 out of 83 participants correctly identified the profile that the ANC had generated for them.

Being offered three descriptions to choose from, one of which is correct and two of which are false, chance level lies at 1/3. π0 was therefore equal to 1/3 in this binomial test, with H0 stating that π = π0, and He stating that π ≠ π0. Table 1 shows that the data do not offer convincing evidence for any of the hypotheses (i.e., a Bayes factor of at least 10, see Appendix B for a rough guideline on the interpretation of Bayes factors). There is moderate evidence for the hypothesis that the selection of the correct astrological profile occurs at chance level, but none of the Bayes factors are high enough to be entirely compelling.3

Table 1
Reanalysis of Carlson's results

Type of assessment    N     Ncorrect    Prcorrect    BF0e     BFe0
CPI                   56    25          0.446        1.303    0.768
ANC                   83    28          0.337        7.729    0.129

Note. Ncorrect refers to the number of people who chose the correct personality description. Prcorrect refers to the proportion of correct decisions. Bayes factors computed in JASP (JASP Team, 2018).

Based on his results, Carlson (1985) concluded that people can select neither CPI profiles nor astrological profiles above chance level, and that therefore the probabilities of choosing one's correct personality description as generated by a psychological personality test, πpsy, or by an ANC, πastro, are the same and equal to chance level. However, the Bayesian reanalysis suggests that we cannot conclude anything about the first two hypotheses stated above, as none of the Bayes factors convey strong evidence. Moreover, as mentioned earlier, Carlson's latter conclusion was based on the false assumption that two null results would justify the conclusion that there is no difference between them. A separate analysis has to be conducted to test the hypothesis that there is a difference between the two personality assessments. Here, H0 states that πpsy = πastro = π0, whereas He

[Footnote 3: Note that all analyses that have been conducted in this study can be found on the OSF, see https://osf.io/zfhbc.]


states that πpsy ≠ πastro. Interestingly, the Bayes factor shows strong evidence in favor of H0 (BF0e = 10.08). This provides compelling evidence in favor of the hypothesis that both πpsy and πastro are equal to chance level, which indicates that people cannot reliably recognize themselves in personality descriptions generated by either ANCs or psychological personality tests.
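Because a uniform Beta(1, 1) prior has density 1 at every point, this joint Bayes factor reduces to the product of the two posterior densities at π0 = 1/3, which can be verified with a short Python sketch (our illustration, standard library only):

```python
# Savage-Dickey check of the Carlson reanalysis. With uniform Beta(1, 1)
# priors (density 1 everywhere), the joint BF0e is the product of the two
# posterior densities evaluated at pi0 = 1/3.
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

pi0 = 1 / 3
# CPI: 25 correct out of 56; ANC: 28 correct out of 83 (Carlson, 1985).
post_psy = beta_pdf(pi0, 1 + 25, 1 + 56 - 25)     # posterior density, CPI
post_astro = beta_pdf(pi0, 1 + 28, 1 + 83 - 28)   # posterior density, ANC

bf_0e = post_psy * post_astro  # prior densities are both 1
print(round(bf_0e, 2))  # close to the reported joint BF0e of 10.08
```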

Wyman and Vyse (2008) did a conceptual replication of Carlson’s study. Instead of 3 personality descriptions per assessment method, they offered participants only 2 descriptions to choose from. Accordingly, π0 changed to 0.5 for this reanalysis. Again, H0 stated that

π = π0. For the selection of ANC personality descriptions, He stated that πastro ≠ π0. For the selection of psychological personality descriptions, Wyman and Vyse used a restricted hypothesis Hr that stated that πpsy > π0.

The results are displayed in Table 2. Although Wyman and Vyse's study included fewer participants, the Bayes factors for the psychological personality test are much more compelling. BFr0 shows extremely strong evidence in favor of Hr (BFr0 = 2814). This means that the data was 2814 times more likely to occur under Hr than under H0. Conversely, the reanalysis of the results for the ANC did not yield any compelling evidence in favor of either H0 or He. There is a tendency towards H0, but more research would have to be conducted before drawing conclusions about whether or not the selection of the true ANC description occurs at chance level.

Table 2
Reanalysis of the Wyman and Vyse results

Assessment    N     Ncorrect    Prcorrect    Bayes factors
NEO-FFI       52    41          0.789        BF0r = 0.000, BFr0 = 2814
ANC           52    24          0.460        BF0e = 5.018, BFe0 = 0.199

Note. Ncorrect refers to the number of people who chose the correct personality description. Prcorrect refers to the proportion of correct decisions. Bayes factors computed in JASP (JASP Team, 2018).

Similar to Carlson, Wyman and Vyse prematurely concluded that people select their true NEO-FFI descriptions more reliably than their true ANC descriptions, without testing for that difference. This test involves an inequality-constrained hypothesis Hr that states that πastro < πpsy. Hr has been tested in the reanalysis using the encompassing prior approach, and the resulting Bayes factor shows that there is extremely strong evidence in favor of Hr (BFr0 = 560.5).
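This Bayes factor can be approximated with a stdlib-Python sketch that combines the Savage-Dickey ratio for BFe0 with the sampling-based encompassing prior approach for BFre, using the counts from Table 2 (41/52 for the NEO-FFI, 24/52 for the ANC; the R code appendix performs the same analysis in R):

```python
# Reanalysis of the Wyman and Vyse data: BFr0 = BFe0 * BFre, with
# Savage-Dickey for BFe0 and sampling for BFre (Hr: pi_psy > pi_astro).
import math
import random

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

random.seed(2008)
n_samples = 200_000

# Savage-Dickey at pi0 = 0.5 with uniform priors (prior density 1):
bf_0e = beta_pdf(0.5, 1 + 41, 1 + 11) * beta_pdf(0.5, 1 + 24, 1 + 28)
bf_e0 = 1 / bf_0e

# Encompassing prior approach: proportions of prior and posterior sample
# pairs that obey pi_psy > pi_astro.
prior_obeys = sum(random.betavariate(1, 1) > random.betavariate(1, 1)
                  for _ in range(n_samples)) / n_samples
post_obeys = sum(random.betavariate(42, 12) > random.betavariate(25, 29)
                 for _ in range(n_samples)) / n_samples
bf_re = post_obeys / prior_obeys

bf_r0 = bf_e0 * bf_re
print(round(bf_r0))  # in the vicinity of the reported BFr0 = 560.5
```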

The Bayesian reanalysis confirms the conclusions drawn by Carlson as well as Wyman and Vyse. However, the Bayes factors convey different strengths of evidence (BF0e = 10.08 in Carlson’s study, BFr0 = 560.5 in Wyman and Vyse’s study). When comparing these two Bayes factors, it becomes clear that although the authors of both papers drew correct


conclusions, the evidence in favor of Hr provided by Wyman and Vyse is much stronger than the evidence in favor of H0 provided by Carlson.

Replication Study

The reanalysis of Wyman and Vyse (2008) shows extremely strong evidence in favor of the hypothesis that people can select their own personality description more reliably when it is generated by a psychological personality test as opposed to an astrological natal chart. Using the reanalyzed results of Wyman and Vyse (2008) as prior knowledge, we intended to add to the existing evidence by performing a direct replication of Wyman and Vyse's study. This replication study contained two hypotheses of interest: a restricted hypothesis Hr, which states that πpsy > πastro, and a null hypothesis H0, which states that πpsy = πastro = 0.5.

Method

Measures

NEO Five-Factor Inventory (NEO-FFI). The NEO-FFI-3 is a 60-item personality inventory that measures personality in terms of 5 different personality domains: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Each domain is assessed by 12 self-descriptive items, such as 'I try to be courteous to everyone I meet' (agreeableness) or 'I like to be where the action is' (extraversion). Items are rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The NEO-FFI is a shorter version of the revised NEO Personality Inventory (Costa & McCrae, 1985, 1992). We used the Dutch version of the NEO-FFI.

Basic Information Questionnaire (BIQ). The BIQ was compiled by the research group

in order to obtain the information necessary to create an astrological natal chart (date and place of birth, and gender).

Materials

Psychological Personality Descriptions. The psychological personality descriptions

were compiled from three parts: a short description of each personality dimension, the participant's score on each dimension, and a short interpretation of the score. This interpretation was based on indications of what it means to score high or low on each of the 5 scales, as provided in the handbook of the NEO-FFI. See Appendix C for an example part of such a psychological personality description.


Astrological Personality Descriptions. We generated astrological natal charts,

including their interpretations, via the website Astrolabe (The Basics of Astrology: What is a Natal Chart?, 2013), using the information obtained with the BIQ. This website provides a slimmer, free version of the software used by Wyman and Vyse; to our knowledge, the paid version differs from the free version only in the extensiveness of the personality description. It should also be noted that we edited the descriptions slightly in order to obtain cohesive, non-prescriptive descriptions that are free of grammatical errors.5

Sample

31 participants were recruited through the 'lab' website of the University of Amsterdam and through personal communication. Exclusion criteria were being familiar with one's own astrological or psychological personality profile as measured by an astrological natal chart or the 'Big 5' personality dimensions, not being able to report one's date and place of birth, and insufficient Dutch proficiency. First-year psychology students of the University of Amsterdam obtained research participation credit for taking part in the study. 2 participants were excluded because they did not attend the second meeting.

Procedure

Participants attended two meetings. During the first meeting, after signing the informed consent and receiving some general information on the study, participants filled in a short questionnaire on the exclusion criteria. If not excluded, the participant was given the NEO-FFI and the BIQ to fill in. In the second meeting, each participant was given two astrological and two psychological personality descriptions. In each pair, one description was the one that belonged to the participant, whereas the other one belonged to a random other participant. Participants were instructed to select from each pair the description they thought fit their personality better.

Randomization and blinding

In order to randomize and blind the study, each researcher was assigned one of two roles: test material manager or investigator. The test material manager's task was to prepare an envelope for each participant containing the 2 pairs of personality descriptions in random order. The investigator was unaware of the order in which the personality descriptions were placed in the envelope (i.e., which descriptions were the ones that belonged to the participant).

[Footnote 5: See our preregistration document for the exact ways in which the descriptions were edited.]


Data analysis

Two Bayesian parameter estimations were performed on the data that our experiment yielded, with the results of Wyman and Vyse (2008) serving as prior knowledge. We then performed a Bayesian binomial test on the data, comparing two hypotheses: H0 and Hr. With π0 equal to the chance level of 0.5, H0 stated that πpsy = πastro = π0 and Hr stated that πpsy > πastro. In order to be able to compare the two hypotheses, we added a third, encompassing hypothesis He that stated that πpsy ≠ πastro. This allowed us to use the encompassing prior approach, which yielded the Bayes factor that compares the evidence in favor of Hr with the evidence in favor of H0, BFr0.
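The updating step for πpsy can be sketched in stdlib Python: the Wyman and Vyse counts (41 of 52 correct) give a Beta(42, 12) prior, and the replication data reported in the Results section below (25 of 29 correct) yield a Beta(67, 16) posterior, summarized by its mode and a Monte Carlo 95% credible interval:

```python
# Bayesian updating of pi_psy with an informed prior (stdlib Python).
# Prior: Beta(42, 12), i.e., Wyman and Vyse's 41/52 correct selections
# under a uniform Beta(1, 1) starting prior; new data: 25/29 correct.
import random

random.seed(42)

a, b = 42 + 25, 12 + (29 - 25)  # posterior Beta(67, 16)

# Mode of a Beta(a, b) distribution (for a, b > 1): (a - 1) / (a + b - 2).
mode = (a - 1) / (a + b - 2)

# Monte Carlo 95% central credible interval from sorted posterior samples.
samples = sorted(random.betavariate(a, b) for _ in range(200_000))
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]

print(round(mode, 2))  # 0.81, the most likely value reported in the Results
```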

[Figure 4 plot: π on the x-axis (0 to 1), density on the y-axis (0 to 12); prior (dotted) and posterior (solid) densities of πpsy]

Figure 4 . Bayesian parameter estimation of πpsy. The results of Wyman and Vyse (2008) are incorporated as prior knowledge and depicted by the gray dotted line. Adding the data of this study yields the posterior distribution depicted by the black solid line. The graph shows that this study largely confirms what Wyman and Vyse have found in terms of πpsy.

Results

25 out of the 29 participants who attended the second meeting correctly selected their own psychological personality description over the random one. Given this data and the prior knowledge provided by the Wyman and Vyse study, the most likely value for πpsy was


estimated to be 0.81 with a 95% credible interval of 0.72 to 0.88 (all rounded numbers; see Figure 4). The correct astrological personality description was selected by 18 out of the 29 participants. Taking the prior knowledge into account, the most likely value for πastro was estimated to be 0.52 with a 95% credible interval of 0.41 to 0.62 (all rounded numbers; see Figure 5). The Bayesian binomial test yielded extreme evidence in favor of Hr (BFr0 = 1884).

[Figure 5 plot: π on the x-axis (0 to 1), density on the y-axis (0 to 12); prior (dotted) and posterior (solid) densities of πastro]

Figure 5. Bayesian parameter estimation of πastro. Again, the results of Wyman and Vyse (2008) are incorporated as prior knowledge and depicted by the gray dotted line. Adding the data of this study yields the posterior distribution depicted by the black solid line. As the graph shows, the findings of Wyman and Vyse are largely confirmed by this study also in terms of πastro.
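As an independent check on these numbers, the computation can be sketched in stdlib Python (Savage-Dickey for BFe0 with the informed priors in both numerator and denominator, sampling for BFre; see the R code appendix for the analysis script itself):

```python
# Replication Bayes factor: informed priors Beta(42, 12) for pi_psy and
# Beta(25, 29) for pi_astro, updated with 25/29 and 18/29 correct choices
# to Beta(67, 16) and Beta(43, 40), respectively.
import math
import random

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

random.seed(2018)
n_samples = 200_000

# Savage-Dickey at pi0 = 0.5: posterior densities over prior densities.
bf_0e = (beta_pdf(0.5, 67, 16) * beta_pdf(0.5, 43, 40)) / \
        (beta_pdf(0.5, 42, 12) * beta_pdf(0.5, 25, 29))
bf_e0 = 1 / bf_0e

# Encompassing prior approach for Hr: pi_psy > pi_astro.
prior_obeys = sum(random.betavariate(42, 12) > random.betavariate(25, 29)
                  for _ in range(n_samples)) / n_samples
post_obeys = sum(random.betavariate(67, 16) > random.betavariate(43, 40)
                 for _ in range(n_samples)) / n_samples

bf_r0 = bf_e0 * (post_obeys / prior_obeys)
print(round(bf_r0))  # in the vicinity of the reported BFr0 = 1884
```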

Discussion

This article provided a Bayesian reanalysis of two earlier studies and described a preregistered replication study on the topic of the accuracy of astrological and psychological personality descriptions. Moreover, it showcased how Bayesian statistics can be a useful paradigm for replications as it allows for quantification of evidence in favor of all competing hypotheses as well as the incorporation of earlier results as prior knowledge. Whereas the reanalyses provided somewhat contrary evidence, the replication study confirmed the finding


of Wyman and Vyse (2008), which is that people can recognize themselves more reliably in a personality description if it is based on a psychological personality test as opposed to an ANC.

The number of participants in this study was relatively low. As we indicated in our preregistration, we originally aimed for 50 participants. It is often difficult to base conclusions on such a low number of participants. However, Bayesian statistics nicely cushions these types of issues by including uncertainty (i.e., fewer participants lead to less extreme Bayes factors and wider credible intervals). In our study, the few participants that we had were enough to indicate extremely strong evidence in favor of Hr. The prior that we used already showed a strong tendency in that direction, but our Bayes factor was more than three times the Bayes factor of the reanalysis, which indicates that our study nevertheless added valuable evidence to this research question.

Still, our study is limited in terms of its sample. The vast majority of participants were psychology students from The Netherlands. The homogeneity of our sample limits the generalizability of our results. Furthermore, psychology students learn about personality aspects and psychological science, which may lead them to a higher accuracy in the selection of psychological personality descriptions. Note that we excluded participants who stated that they had an idea of how they score on the 'Big 5' personality scales. Nevertheless, an implication for further research would be to test different or more diverse samples.

Another discussion point concerns the encompassing prior approach. As explained above, the encompassing prior approach is a sampling-based method that only approximates the Bayes factor instead of computing it analytically. In order to obtain reliable Bayes factors using the encompassing prior approach, one should always make sure that the competing hypotheses make sense before conducting the analysis. For instance, it might be that a certain restriction lies very far from the area where most prior or posterior density is concentrated. In that case, the number of samples that obey the constraint may be very low or even equal to 0. When the number of samples that obey the constraint is that low, the Bayes factor will not be reliable, as even small differences in the number of samples drawn will result in fluctuations when approximating the Bayes factor. For instance, consider drawing a million samples using a uniform prior with Hr stating that π > 0.5 and H0 stating that π = 0.5. Depending on whether 0, 10, or 20 of the posterior samples obey the constraint, vastly different Bayes factors will be obtained (here, BFer = ∞, BFer = 50000, BFer = 25000). Therefore, researchers should plot their prior and posterior distributions before commencing with this analysis. This way, one can check whether the encompassing prior approach will yield a reliable Bayes factor. Furthermore, research should be directed towards developing an analytical solution or more robust sampling methods for Bayes factors involving inequality constraints.

Open Access to Supplementary Material

You can find all supplementary material concerning this study, including the analyses in this study (i.e., in the form of JASP files and R code), the raw data, our preregistration document, and more on the OSF, see https://osf.io/zfhbc.

Acknowledgements

I would like to thank my supervisor Alexandra Sarafoglou for her comments that helped me write this paper, as well as her great support throughout this Bachelor project. I would also like to thank my group mates Joran Cornelisse and Anna van der Heijden for their fantastic team work.

This bachelor’s thesis is dedicated to my parents, Dr. Joachim Draws and Elvira Draws, my siblings Kai Draws and Laura Draws, as well as my girlfriend Bruna Mendes Correa. You are the biggest support in my life and I could not have achieved this without you. Thank you all.

References

The Basics of Astrology: What is a Natal Chart? (2013). Retrieved 2018-03-01, from

http://www.astrology.com.tr/articles.asp?artID=21

Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E.-J., Berk, R., . . . others (2018). Redefine statistical significance. Nature Human Behaviour , 2 , 6.

Carlson, S. (1985). A double-blind test of astrology. Nature, 318 , 419–425.

Costa, P. T., & McCrae, R. R. (1985). The NEO personality inventory manual. Odessa, FL: Psychological Assessment Resources.

Costa, P. T., & McCrae, R. R. (1992). Revised NEO personality inventory (NEO PI-R) and NEO five-factor inventory (NEO-FFI): Professional manual. Psychological Assessment Resources, Incorporated.

Dickey, J. M., Lientz, B., et al. (1970). The weighted likelihood ratio, sharp hypotheses about chances, the order of a Markov chain. The Annals of Mathematical Statistics, 41, 214–226.

Gelman, A., & Stern, H. (2006). The difference between 'significant' and 'not significant' is not itself statistically significant. The American Statistician, 60, 328–331.

Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Alt-man, D. G. (2016). Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. European journal of epidemiology, 31 , 337–350.

JASP Team. (2018). JASP (Version 0.8.6) [Computer software]. Retrieved from https://jasp-stats.org/

Jeffreys, H. (1961). Theory of probability (3rd ed.). Oxford, UK: Oxford University Press.


Klugkist, I., Kato, B., & Hoijtink, H. (2005). Bayesian model selection using encompassing priors. Statistica Neerlandica, 59 , 57–69.

Lavine, M., & Schervish, M. J. (1999). Bayes factors: what they are and what they are not. The American Statistician, 53 , 119–122.

Lee, M. D., & Wagenmakers, E.-J. (2014). Bayesian cognitive modeling: A practical course. Cambridge University Press.

McCrae, R. R., & Costa, P. T. (2010). NEO Inventories professional manual. Lutz, FL: Psychological Assessment Resources.

Newall, P. (2011). Astrology and its problems: Popper, Kuhn and Feyerabend [Web log post]. Retrieved 2018-04-05, from https://thekindlyones.org/2011/02/14/astrology-and-its-problems-popper-kuhn-and-feyerabend/

van Ravenzwaaij, D., Cassey, P., & Brown, S. D. (2016). A simple introduction to Markov chain Monte Carlo sampling. Psychonomic Bulletin & Review, 1–12.

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & Van Der Maas, H. L. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100, 426–432.

Wyman, A. J., & Vyse, S. (2008). Science versus the stars: A double-blind test of the validity of the NEO five-factor inventory and computer-generated astrological natal charts. The Journal of general psychology, 135 , 287–300.

Appendix A

Derivation of the Savage-Dickey Density Ratio

The Savage-Dickey density ratio is a way to compute Bayes factors (Dickey et al., 1970). As stated previously in this paper, the Bayes factor in a basic null hypothesis test is given by

BF0e = p(data|H0) / p(data|He).    (C1)

Consider a situation where H0 denotes the null hypothesis that a parameter π is equal to a test value π0. He augments H0 and states that π ≠ π0. Thus,

p(data|H0) = p(data|π = π0),    (C2)
p(data|He) = p(data|π ≠ π0).    (C3)

Equation (C2) can be rewritten using Bayes' rule. Since H0 is nested in He, we can condition every probability in the resulting equation on He:


p(data|π = π0) = [p(π = π0|data, He) × p(data|He)] / p(π = π0|He).    (C4)

Formula (C4) can now be plugged into the numerator of Formula (C1), and we can simplify:

BF0e = {[p(π = π0|data, He) × p(data|He)] / p(π = π0|He)} / p(data|He) = p(π = π0|data, He) / p(π = π0|He).    (C5)

After simplification, the Bayes factor is equal to the quotient of the posterior and prior density evaluated at π0. All that needs to be done now is to evaluate the density of the posterior (numerator) and the prior (denominator) at π = π0.

An example of this can be seen in Figure 2, where we have a Bayesian binomial test with a π0 of 0.5. To calculate the Bayes factor BF0e, we divide the density of the posterior distribution at π = π0 (depicted by the dot on the solid line) by the density of the prior distribution at the same point (depicted by the dot on the dotted line). The prior distribution assigns a density of 1 to π0, whereas the posterior distribution density at π0 is 0.1. The resulting Bayes factor BF0e tells us that the data is more likely under He than under H0:

BF0e = p(π = π0|data, He) / p(π = π0|He) = 0.1 / 1 = 0.1.

Note that a BF0e of 0.1 conveys the same information as the Bayes factor BFe0 of 10 as reported in the caption of Figure 2. The difference between them is that while BF0e quantifies evidence in favor of H0, BFe0 quantifies evidence in favor of He. However, they both state that the data is 10 times more likely to occur under He than under H0. See Appendix B for a guideline on the interpretation of Bayes factors.
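The same ratio can be computed directly; the following stdlib-Python sketch uses hypothetical data (9 successes in 10 trials, chosen so that the ratio lands near 0.1) with a uniform prior:

```python
# Savage-Dickey density ratio for a binomial test at pi0 = 0.5:
# posterior density at pi0 divided by prior density at pi0.
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

pi0 = 0.5
prior_height = beta_pdf(pi0, 1, 1)          # uniform prior: density 1
post_height = beta_pdf(pi0, 1 + 9, 1 + 1)   # posterior after 9/10 successes

bf_0e = post_height / prior_height
print(round(bf_0e, 3))  # about 0.107: the data favor He roughly tenfold
```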

Appendix B

Interpretation of Bayes Factors

Table B1 shows a rough guideline for the interpretation of Bayes factors, taken from Lee and Wagenmakers (2014), who adjusted it from Jeffreys (1961). Note that the thresholds depicted here are not strict, but rather serve as a rough orientation. That is, in contrast to p-values in frequentist statistics, there is no agreed-upon value that a Bayes factor has to surpass in order to call one's result significant. In this study, however, we decided to draw conclusions only from Bayes factors of at least 10. A Bayes factor of 10 or higher classifies as 'strong', 'very strong', or 'extreme' evidence in the guideline below, and it conveys that the observed data are at least 10 times more likely under one hypothesis than under the other.

Table B1
Bayes factors and the strength of evidence they convey

Bayes Factor BFe0    Strength of Evidence
> 100                Extreme evidence for He
30 - 100             Very strong evidence for He
10 - 30              Strong evidence for He
3 - 10               Moderate evidence for He
1 - 3                Anecdotal evidence for He
1                    No evidence
1/3 - 1              Anecdotal evidence for H0
1/10 - 1/3           Moderate evidence for H0
1/30 - 1/10          Strong evidence for H0
1/100 - 1/30         Very strong evidence for H0
< 1/100              Extreme evidence for H0

Appendix C

Example of a Psychological Personality Description

Depending on the score, the scale agreeableness could be described as follows:

The scale agreeableness measures the orientation of an individual towards the experiences, needs and goals of others. Your Stanine score on this scale was 8, which is considered 'high'. 89% of the reference population scores lower on this scale. That means that you are supportive, humble, kind and cooperative; you easily empathize with people and you are able to evaluate situations from the perspective of others. Furthermore, you assume that others will support you if you need help.

R Code

# Reanalyses of the inequality-constrained hypotheses

rm(list = ls())

a <- 1
b <- 1
prior.height <- dbeta(1/3, a, b)  # uniform prior density (1 everywhere)

## Carlson (1985)
## H0: pi_astro = pi_psy = 1/3
## He: pi_astro != pi_psy

N.psy <- 56
x.psy <- 25
post.height.psy <- dbeta(1/3, a + x.psy, b + (N.psy - x.psy))

N.astro <- 83
x.astro <- 28
post.height.astro <- dbeta(1/3, a + x.astro, b + (N.astro - x.astro))

# Savage-Dickey density ratio
BF0e <- (post.height.psy * post.height.astro) / prior.height^2
BFe0 <- 1 / BF0e
BF0e
BFe0

## Wyman & Vyse (2008)
## H0: pi_astro = pi_psy = 0.5
## Hr: pi_astro < pi_psy
## He: pi_astro != pi_psy

N <- 52
x.psy <- 41
post.height.psy <- dbeta(0.5, a + x.psy, b + (N - x.psy))

x.astro <- 24
post.height.astro <- dbeta(0.5, a + x.astro, b + (N - x.astro))

### Savage-Dickey density ratio
BF0e <- (post.height.psy * post.height.astro) / prior.height^2
BFe0 <- 1 / BF0e
BF0e
BFe0

### Encompassing prior approach

N.samples <- 1e7
count.prior <- 0
count.posterior <- 0

prior.psy <- rbeta(N.samples, 1, 1)
prior.astro <- rbeta(N.samples, 1, 1)
post.psy <- rbeta(N.samples, a + x.psy, b + (N - x.psy))
post.astro <- rbeta(N.samples, a + x.astro, b + (N - x.astro))

for (i in 1:N.samples) {
  if (prior.psy[i] > prior.astro[i]) {
    count.prior <- count.prior + 1
  }
  if (post.psy[i] > post.astro[i]) {
    count.posterior <- count.posterior + 1
  }
}

cr <- 1 / (count.prior / N.samples)
dr <- 1 / (count.posterior / N.samples)
BFre <- cr / dr
BFre
BFr0 <- BFe0 * BFre
BFr0

## Our study
## H0: pi_astro = pi_psy = 0.5
## Hr: pi_astro < pi_psy
## He: pi_astro != pi_psy

a.psy <- 1 + 41
b.psy <- 1 + 11
prior.height.psy <- dbeta(0.5, a.psy, b.psy)

a.astro <- 1 + 24
b.astro <- 1 + 28
prior.height.astro <- dbeta(0.5, a.astro, b.astro)

N <- 29
x.psy <- 25
post.height.psy <- dbeta(0.5, a.psy + x.psy, b.psy + (N - x.psy))

x.astro <- 18
post.height.astro <- dbeta(0.5, a.astro + x.astro, b.astro + (N - x.astro))

### Savage-Dickey density ratio
BF0e <- (post.height.psy * post.height.astro) / (prior.height.psy * prior.height.astro)
BFe0 <- 1 / BF0e
BF0e
BFe0

### Encompassing prior approach
count.prior <- 0
count.posterior <- 0

prior.psy <- rbeta(N.samples, a.psy, b.psy)
prior.astro <- rbeta(N.samples, a.astro, b.astro)
post.psy <- rbeta(N.samples, a.psy + x.psy, b.psy + (N - x.psy))
post.astro <- rbeta(N.samples, a.astro + x.astro, b.astro + (N - x.astro))

for (i in 1:N.samples) {
  if (prior.psy[i] > prior.astro[i]) {
    count.prior <- count.prior + 1
  }
  if (post.psy[i] > post.astro[i]) {
    count.posterior <- count.posterior + 1
  }
}

cr <- 1 / (count.prior / N.samples)
dr <- 1 / (count.posterior / N.samples)
BFre <- cr / dr
BFre
BFr0 <- BFe0 * BFre
BFr0
