Recursive partitioning of growth curve models using GLMM trees

Master's Thesis Psychology
Methodology and Statistics Unit, Institute of Psychology
Faculty of Social and Behavioral Sciences, Leiden University

Date: 23-08-2019
Student number: s1428047
Supervisor: Marjolein Fokkema


Abstract

The generalized linear mixed-effects model tree algorithm (GLMM tree; Fokkema, Smits, Zeileis, Hothorn, & Kelderman, 2018) allows for the detection of subgroups with different parameters of a (generalized) linear model, while accounting for the clustered structure of a dataset. Longitudinal data can also be considered clustered data, in which observations on the same variable(s) over time are clustered within subjects. In the current study, we investigate which settings are optimal for fitting GLMM trees to detect subgroups in growth curve analyses using the R package glmertree (version 0.1-2; Fokkema & Zeileis, 2018). Three model-fitting parameter settings were tested to assess which yields the highest predictive accuracy and lowest complexity: 1) initializing the model estimation with the random effects or with the tree structure, 2) using cluster-level or observation-level parameter stability tests for selecting partitioning variables, and 3) estimating a random intercept and/or a random slope in the random-effects specification. The simulation study was based on continuous response data only; therefore, the estimated models are referred to as LMM trees in the current study. The best performing growth curve model in terms of tree complexity was the random-intercept LMM tree with random-effects initialization and observation-level parameter stability tests. The best performing growth curve model in terms of predictive accuracy was the random-intercept-and-slope LMM tree with tree initialization and cluster-level parameter stability tests.


Table of contents

ABSTRACT
INTRODUCTION
  The GLMM tree algorithm
  Model fitting parameters of a GLMM tree
    Parameter stability tests
    Initialization
    Random-effects specification
METHODS
  Data generating design
  Assessment of performance
    Predictive accuracy
    Tree accuracy
    Repeated measures ANOVA
  Software
RESULTS
  Effects of data-generating parameters
  Main effects of the model-fitting parameters
  Interaction effects of the model-fitting parameters
DISCUSSION
  Summary
  Limitations and further research
  Conclusion


Introduction

Methods that capture growth or change in variables over time are becoming progressively more important for describing developmental patterns in many areas of psychology. The most commonly used methods for longitudinal analysis are hierarchical linear modeling and structural equation modeling (e.g., growth curve modeling; GCM). For example, a growth curve analysis was performed to assess the effect of physical activity and gender on depression in adolescence and emerging adulthood (McPhie & Rawana, 2015): boys had lower levels of depression in mid-adolescence and slower increases and declines in depression over time compared to girls. Thus, detection of subgroups with different patterns of growth over time could expand the knowledge of numerous constructs in psychology.

Decision-tree methods are particularly useful for subgroup detection. Decision-tree methods partition observations into successive subsets (i.e., subgroups), which can be visualized as a tree-like structure with inner and terminal nodes that describe subgroups of observations with similar values for the outcome (Thurston & Miyamoto, 2018). The partition (i.e., the splits) in a tree structure is created by separating the observations on the value of a single predictor variable at a time. Because each split in the successive partitioning of observations is conditional on the previous split, decision-tree methods are also referred to as recursive partitioning. Moreover, decision trees are highly popular due to their interpretability (Zeileis, Hothorn, & Hornik, 2008), can handle many potential predictor variables at once, and can automatically detect (higher-order) interactions between predictor variables (Strobl, Malley, & Tutz, 2009).

In many instances, researchers may want to detect subgroups in clustered or nested datasets, for example when observations on patients are nested within hospitals, or when observations over time are clustered within subjects (i.e., longitudinal data). To perform such analyses, the clustered or nested structure should be taken into account by estimating so-called random effects, yielding a mixed-effects model (Cooper & Patall, 2009; Higgins, Whitehead, Turner, Omar, & Thompson, 2001). If the clustered structure is not taken into account, tree-based methods may detect spurious subgroups and select predictor variables inaccurately (Martin, 2015, as cited in Fokkema et al., 2018). Fokkema et al. (2018) therefore proposed the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of subgroups with different parameters of a (generalized) linear model, while accounting for the clustered structure of a dataset.

As previously stated, longitudinal data can also be considered clustered data in which observations on the same variable(s) over time are clustered within subjects. This means that GLMM trees can be used to detect subgroups in growth curve analyses.

The GLMM tree algorithm

The GLMM tree algorithm takes an iterative approach to partitioning the fixed effects of the model while still adjusting for the random effects (Fokkema et al., 2018):

1) The random effects are initially set to 0, since they are initially unknown.

2) Given the current random-effects predictions, a GLM tree (subgroup structure) is estimated.

3) Given the current GLM tree (subgroup structure), a mixed-effects model is estimated with node-specific fixed-effects parameters and global random-effects parameters.

4) Steps 2 and 3 are repeated until convergence.

Convergence of the algorithm is reached when the tree does not change from one iteration to the next.
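To make the alternation concrete, below is a minimal R sketch of this loop. It is a simplification, not the glmertree source code: the data frame dat (with outcome y, time variable time, subject identifier id and partitioning variables x1-x3) is hypothetical, and tree size is used as a crude stand-in for the convergence check on the tree structure.

library(partykit)  # lmtree(), nodeids()
library(lme4)      # lmer()

fit_lmm_tree_sketch <- function(dat, max_iter = 100) {
  dat$ranef_pred <- 0  # step 1: random effects initially set to 0
  size_old <- -1
  for (i in seq_len(max_iter)) {
    ## step 2: GLM tree on the outcome, adjusted for the current random effects
    dat$y_adj <- dat$y - dat$ranef_pred
    tree <- lmtree(y_adj ~ time | x1 + x2 + x3, data = dat)
    dat$node <- factor(predict(tree, newdata = dat, type = "node"))
    ## step 3: mixed-effects model with node-specific fixed effects and a
    ## global random intercept per subject
    lmm <- lmer(y ~ 0 + node + node:time + (1 | id), data = dat)
    dat$ranef_pred <- predict(lmm) - predict(lmm, re.form = NA)
    ## step 4: repeat until the tree no longer changes (checked here via size)
    size_new <- length(nodeids(tree))
    if (size_new == size_old) break
    size_old <- size_new
  }
  list(tree = tree, lmm = lmm)
}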

In step (2) of the GLMM tree algorithm, the GLM tree is constructed using model-based recursive partitioning (MOB; Zeileis et al., 2008; Fokkema et al., 2018), which cycles iteratively through the following steps:

1) Fit the parametric model (i.e., a GLM) once to all observations in the current node.

2) Assess with parameter stability tests whether splitting the sample with respect to one of the partitioning variables might capture instabilities in the model parameters and thus improve the fit.

3) If there is some overall parameter instability, split the sample with respect to the partitioning variable associated with the highest instability.

4) Repeat the procedure in each subset, until the null hypothesis of parameter stability can no longer be rejected (or the subsets become too small).
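For a fitted model-based tree, the parameter stability tests of step (2) can be inspected directly; a brief, hypothetical illustration (the sctest method for trees is provided via partykit, with the generic from strucchange):

library(partykit)     # lmtree()
library(strucchange)  # sctest() generic

tree <- lmtree(y_adj ~ time | x1 + x2 + x3, data = dat)  # hypothetical data frame
sctest(tree, node = 1)  # test statistic and p-value per partitioning variable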

Model fitting parameters of a GLMM tree

Parameter stability tests. GLMM tree provides the option to employ cluster-level or observation-level parameter stability tests in the estimation of the tree structure in step (2) of the MOB algorithm. For partitioning variables measured at the cluster level, observation-level parameter stability tests will likely yield a higher type-I error rate than for partitioning variables measured at the observation level, because these tests do not take the clustering into account. Cluster-level stability tests, in contrast, account for the fact that observations within clusters are correlated. The clusters in the current study, and in longitudinal studies in general, are subjects: individual observations are nested within subjects, which should favor cluster-level stability tests. Moreover, the current study assumes that the potential partitioning variables for GCMs are time-invariant and thus measured at the cluster level. Following this line of reasoning, cluster-level stability tests will probably yield more accurate results than observation-level parameter stability tests when partitioning GCMs.

Initialization. Instead of initializing estimation with the tree structure, the GLMM tree algorithm can initialize estimation with the random effects (step (3) of the GLMM tree algorithm). If the tree structure is estimated first, the tree may capture some of the variation that is due to the random effects through splits involving cluster-level variables. Because time-invariant partitioning variables may thus absorb variation that is actually random variation between clusters, it could be better to initialize the estimation of the model with the random effects instead of the tree structure.

Random-effects specification. As previously stated, the GLMM tree model has a random-effects part, which may include random intercepts and/or random slopes. In growth curve analyses, random intercepts model inter-individual variation that is constant over time, whereas random slopes model inter-individual variation in growth over time. The addition of random slopes could yield a simpler tree structure, because inter-individual variation in growth over time can then be captured by the random effects in the model instead of by the tree; it could also yield more accurate results.
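Assuming the cluster and ranefstart arguments behave as documented for the glmertree package, the three model-fitting parameters map onto lmertree() calls roughly as follows (the data frame and variable names are placeholders):

library(glmertree)

## (1) Parameter stability tests: passing the cluster identifier requests
## cluster-level tests; omitting it gives observation-level tests (default).
m1 <- lmertree(y ~ time | (1 | id) | x1 + x2 + x3,
               data = train, cluster = id)

## (2) Initialization: ranefstart = TRUE initializes estimation with the
## random effects instead of the tree structure (the default).
m2 <- lmertree(y ~ time | (1 | id) | x1 + x2 + x3,
               data = train, ranefstart = TRUE)

## (3) Random-effects specification: random intercept only, (1 | id),
## versus random intercept and slope of time, (1 + time | id).
m3 <- lmertree(y ~ time | (1 + time | id) | x1 + x2 + x3,
               data = train)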

In Fokkema et al. (2018), GLMM trees were found to perform well in detecting subgroups in clustered cross-sectional datasets. However, it is unclear which model-fitting parameter settings of the GLMM tree algorithm are optimal for partitioning GCMs. The current study aims to address the following research questions in a simulation study:

1) Do cluster-level parameter stability tests yield more accurate results (or a simpler model) than observation-level stability tests?

2) Does initializing with estimating the random effects yield more accurate results (or a simpler model) than estimating the tree structure?


3) Does the addition of random slopes to random intercepts in the random-effects specification yield more accurate results (or a simpler model)?

The remainder of this thesis is structured as follows. The second section describes the methodological framework of the simulation study, the third section presents the results of the simulation study, and the last section summarizes the results and discusses limitations together with suggestions for future research.


Methods

Data generating design

In the simulation study, the following data-generating parameters were varied:

- Sample size (i.e., the number of clusters): N = 80 and N = 200.
- Number of potential partitioning variables: P = 8 and P = 28.
- Intercorrelation between the potential partitioning variables: ρ = 0 and ρ = 0.3.
- Population standard deviation of the normal distribution from which the cluster-specific intercepts were drawn: σi = 0, σi = 1 and σi = 2.
- Population standard deviation of the normal distribution from which the random slopes were drawn: σs = 0, σs = 0.1 and σs = 0.4.

The parameters were varied using a fully crossed factorial design with (2 × 2 × 2 × 3 × 3 =) 72 cells. The effects of the data-generating parameters were examined using a repeated measures ANOVA to assess the main effects. For each cell of the design, 50 datasets were generated. Three partitioning variables (x1, x2 and x3) were true partitioning variables (Figure 1); thus, the true tree size is seven nodes. The remaining (P − 3) potential partitioning variables were noise variables. The partitioning variables were drawn from a normal distribution with mean µ = 0 and variance σ² = 5. There were five equidistant measurement occasions for each subject (time points 0-4). Random intercepts and slopes were generated from normal distributions with mean µ = 0 and variances σi² and σs², respectively (the values of which were varied according to the last two facets of the data-generating design above). The value of the outcome variable was calculated as the sum of the population-level effects (based on the terminal nodes of the tree), the random intercept and slope of time, and a random error term with variance σε² = 5. Errors were not correlated within subjects.
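For concreteness, a sketch of the data-generating scheme for a single dataset is given below (here N = 80, P = 8, ρ = 0, σi = 1 and σs = 0.1); the node-specific intercepts and slopes are illustrative values, not the exact population values used in the thesis:

set.seed(1)
N <- 80; P <- 8; sigma_i <- 1; sigma_s <- 0.1

## time-invariant partitioning variables, drawn once per subject
X <- as.data.frame(matrix(rnorm(N * P, mean = 0, sd = sqrt(5)), ncol = P))
names(X) <- paste0("x", seq_len(P))

dat      <- X[rep(seq_len(N), each = 5), ]  # five measurement occasions
dat$id   <- rep(seq_len(N), each = 5)
dat$time <- rep(0:4, times = N)

## true subgroup structure on x1-x3: three splits, four terminal nodes
node       <- with(dat, ifelse(x1 <= 0, ifelse(x2 <= 0, 1, 2),
                                        ifelse(x3 <= 0, 3, 4)))
intercepts <- c(0, 2, 4, 6)[node]         # illustrative node-specific effects
slopes     <- c(0.5, 1.0, 1.5, 2.0)[node]

b0 <- rnorm(N, 0, sigma_i)[dat$id]        # random intercepts per subject
b1 <- rnorm(N, 0, sigma_s)[dat$id]        # random slopes per subject
dat$y <- intercepts + slopes * dat$time + b0 + b1 * dat$time +
  rnorm(nrow(dat), mean = 0, sd = sqrt(5))  # error variance 5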

As previously stated, the cluster-specific intercepts were drawn with standard deviation σi = 0, σi = 1 or σi = 2, and the error variance was σε² = 5 for every dataset. The intraclass correlation coefficient (ICC) can be calculated by dividing the random-intercept variance by the total variance. This results in ICCs of {0, 0.167, 0.444} for the LMM trees, meaning that 0%, 16.7% and 44.4% of the observed variation in the outcome is explained by the random intercepts. For models with both random intercepts and random slopes, however, the ICC differs at each value of the predictors, because the ICC is a function of the variable(s) for which random slopes are specified. Hence, the ICC for these models cannot be understood simply as a proportion of variance (Goldstein, Browne, & Rasbash, 2010). The ICCs were calculated for the random-intercept LMM trees, but not analyzed in the current study.
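These ICCs can be verified directly from the design values:

sigma2_int <- c(0, 1, 4)  # random-intercept variances for sigma_i = 0, 1, 2
sigma2_eps <- 5           # error variance
round(sigma2_int / (sigma2_int + sigma2_eps), 3)
## 0.000 0.167 0.444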

Figure 1. Plot of a correctly recovered LMM tree. The true covariates (x1, x2 and x3) are selected as partitioning variables, resulting in a tree with seven nodes: three inner nodes and four terminal nodes (i.e., subgroups). The x-axes represent time; the y-axes represent the outcome variable. The red line in each terminal node represents the estimated regression line over time. The sample size (n) represents the total number of observations (five measurement occasions per subject).

Assessment of performance

Predictive accuracy. Test observations were generated to determine the predictive accuracy of the fitted models. The test datasets were generated from the same population as the training datasets, with a fixed sample size of N = 200. The predictive accuracy was assessed by calculating the mean squared error (MSE): the mean squared difference between observed and predicted values for every test dataset. Lower MSE values indicate better predictive accuracy.

Tree accuracy. The tree accuracy was assessed based on the size of the tree. There were three true splits and three true partitioning variables, which results in a true tree size of seven nodes (Figure 1). Tree size was also used as a measure of complexity, with larger numbers of nodes indicating higher complexity.
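In R, both measures can be computed along the following lines for a fitted lmertree object (a sketch; it assumes that predict() with re.form = NA ignores the random effects, as in lme4, since the random effects of new test subjects are unknown):

## Predictive accuracy: MSE on a test set of new subjects
pred <- predict(fitted_tree, newdata = test, re.form = NA)
mse  <- mean((test$y - pred)^2)  # lower values indicate better accuracy

## Tree accuracy / complexity: total number of nodes (inner + terminal)
library(partykit)
tree_size <- length(nodeids(fitted_tree$tree))  # 'true' size is 7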

Repeated measures ANOVA. A repeated measures analysis of variance (rANOVA) was conducted to analyze the predictive accuracy and tree size of the LMM trees through the main effects of the data-generating parameters and the main and interaction effects of the model-fitting parameters. The main and interaction effects were assessed with a significance level of p < 0.05 and the effect size η² (Cohen, 1988)1. In case of significant main and/or interaction effects, post hoc comparisons were conducted using Tukey HSD tests to identify which means of the factor in question differed from the rest.
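A simplified sketch of this analysis (with assumed variable names in a results data frame, and omitting the repeated-measures error structure for brevity):

## ANOVA on the simulation results with the three model-fitting factors
fit <- aov(tree_size ~ initialization * stability * ranef_spec, data = results)
summary(fit)

## eta squared per effect: SS_effect / SS_total
ss <- summary(fit)[[1]][["Sum Sq"]]
round(ss / sum(ss), 4)

## post hoc Tukey HSD for the three-level random-effects specification
TukeyHSD(fit, which = "ranef_spec")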

Software

All analyses were performed in R (R Core Team, 2013). The glmertree package (version 0.1-2; Fokkema & Zeileis, 2018) was used for fitting the GLMM trees. The outcome variable in the simulation study is continuous; therefore, the estimated models are referred to as LMM trees in the current study. Note that the default setting of an LMM tree consists of a random-intercept model with tree initialization and observation-level parameter stability tests. The LMM trees were estimated using the lmertree function of the glmertree package. Linear model (LM) trees were estimated using the lmtree function from the partykit package (version 1.2-3; Hothorn, Seibold & Zeileis, 2019). These LM trees were estimated to investigate whether estimating mixed-effects tree models in general yields more accurate results (or a simpler model) than fixed-effects tree models.
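As a minimal illustration of this setup, the compared models could be fitted along the following lines; the training data frame train, with outcome y, time variable time, subject identifier id and partitioning variables x1-x8, is hypothetical:

library(glmertree)  # provides lmertree(); loads lme4
library(partykit)   # provides lmtree()

## Default LMM tree: random intercept, tree initialization and
## observation-level parameter stability tests
lmm_tree <- lmertree(y ~ time | (1 | id) | x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8,
                     data = train)

## LM tree: same fixed-effects and partitioning structure, no random effects
lm_tree <- lmtree(y ~ time | x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8, data = train)

plot(lmm_tree)  # node-specific growth curves, as in Figure 1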

1 The following rules of thumb for the effect size η² were used in this simulation study: small (f = .10, η² = .0099), medium (f = .25, η² = .0588) and large (f = .40, η² = .1379).


Results

In order to compare the performance of the model-fitting parameters, the results are discussed as follows: first the effects of the data-generating parameters, then the main and interaction effects of the model-fitting parameters.

Effects of data-generating parameters

Tree size. The most important data-generating parameters for tree size were sample size (F(1, 703) = 120.94, p < 0.001, η² = .08) and the population variance of the random intercept (F(2, 703) = 44.01, p < 0.001, η² = .06). With a larger sample size, the default LM trees and the LMM trees with random intercepts and/or slopes tended to estimate larger trees, especially for larger σi values. If the sample size was low, the random-intercept-and-slope LMM trees with random-effects initialization had difficulty detecting partitioning variables to split the observations, which resulted in an average tree size of 1.44 nodes (SD = 0.97).

Predictive accuracy. The most important data-generating parameters for predictive accuracy were the sample size (F(1, 703) = 143.52, p < 0.001, η² = .03), the population variance of the random intercept (F(2, 703) = 900.13, p < 0.001, η² = .48) and the population variance of the random slope (F(2, 703) = 435.35, p < 0.001, η² = .23). As the variance of the random intercept and random slope increased, the MSE value tended to increase, especially from σi = 1 (mean MSE = 8.23) to σi = 2 (mean MSE = 12.08).

Main effects of the model-fitting parameters

Tree size. The tree sizes for the main effects of the model-fitting parameters are depicted in Figure 2. The main effect for initialization yielded an F ratio of F(1, 703) = 100.08, p < 0.001, η² = .07, indicating a significantly larger tree size for initializing estimation with the tree structure (tree; M = 9.25 nodes, SD = 5.31) than for initializing estimation with the random effects (ranef; M = 6.65 nodes, SD = 3.19). The main effect for parameter stability tests yielded an F ratio of F(1, 703) = 80.82, p < 0.001, η² = .08, such that the tree size was significantly larger for observation-level parameter stability tests (obs; M = 9.35 nodes, SD = 6.10) than for cluster-level parameter stability tests (clust; M = 7.06 nodes, SD = 2.31). The main effect for the random-effects specification (Figure 2) yielded an F ratio of F(2, 703) = 3.77, p < 0.05, η² = .005, indicating a significant difference between the LM trees without random effects (none; M = 9.11 nodes, SD = 5.52), the random-intercept LMM trees (int; M = 8.36 nodes, SD = 4.14) and the random-intercept-and-random-slope LMM trees (slope; M = 7.60 nodes, SD = 4.84). A post hoc Tukey test showed that the random-intercept and the random-intercept-and-random-slope LMM trees differed significantly at p < 0.05. However, the LM trees did not differ significantly from the LMM trees.

Figure 2. Tree size by type of initialization, parameter stability test and random-effects specification. The y-axes represent the number of nodes in trees; the x-axes represent the different model-fitting parameters; the horizontal lines represent the ‘true’ tree size (7).

Predictive accuracy. The MSE values for the main effects of the model-fitting parameters are depicted in Figure 3. The main effect for initialization yielded an F ratio of F(1, 703) = 9.66, p < 0.01, η² = .002, such that the MSE values were significantly higher for initializing estimation with the random effects (mean MSE = 9.19, SD = 3.41) than for initializing estimation with the tree structure (mean MSE = 8.85, SD = 3.22). The main effect for parameter stability tests yielded an F ratio of F(1, 703) = 88.97, p < 0.001, η² = .02, indicating significantly higher MSE values for observation-level parameter stability tests (mean MSE = 9.49, SD = 3.75) than for cluster-level parameter stability tests (mean MSE = 8.48, SD = 2.68). The main effect for the random-effects specification yielded an F ratio of F(2, 703) = 14.76, p < 0.001, η² = .007, indicating a significant difference between the LM trees (mean MSE = 8.83, SD = 3.21), the random-intercept LMM trees (mean MSE = 8.70, SD = 3.04) and the random-intercept-and-random-slope LMM trees (mean MSE = 9.35, SD = 3.56). A post hoc Tukey test showed that the random-intercept and random-intercept-and-random-slope LMM trees differed significantly at p < 0.001. Moreover, the LM trees and random-intercept-and-random-slope LMM trees differed significantly at p < 0.05. However, the LM trees did not differ significantly from the random-intercept LMM trees.

Figure 3. MSE values by type of initialization, parameter stability test and random-effects specification. The y-axes represent the MSE values; the x-axes represent the different model-fitting parameters.

Interaction effects of the model-fitting parameters

Tree size. The interaction effect of initialization and parameter stability test on tree size was significant with F(1, 703) = 256.82, p < 0.001, η² = .18, indicating that the effect of initialization was greater for cluster-level parameter stability tests than for observation-level parameter stability tests. The interaction effect of initialization and random-effects specification on tree size was also significant with F(1, 703) = 5.95, p < 0.05, η² = .004, indicating that the effect of initialization was greater for random-intercept LMM trees than for random-intercept-and-random-slope LMM trees. Moreover, the interaction effect of parameter stability test and random-effects specification on tree size was significant with F(2, 703) = 5.99, p < 0.001, η² = .008, indicating that the effect of random-effects specification was greater for cluster-level parameter stability tests than for observation-level parameter stability tests.

The three-way interaction between the model-fitting parameters (Figure 4) yielded an F ratio of F(1, 703) = 12.00, p < 0.001, η² = .008. The best performing model in terms of tree size was the random-intercept LMM tree with random-effects initialization (M = 7.00 nodes, SD = 1.42); the worst performing model in terms of tree size was the random-intercept LMM tree (M = 12.08 nodes, SD = 6.14).

Figure 4. Tree size for interactions between initialization, parameter stability test and random-effects specification. The y-axis represents the number of nodes in the trees; the x-axis represents the different model-fitting parameters; the horizontal lines represent the ‘true’ tree size (7).

Predictive accuracy. The interaction effect of initialization and parameter stability test on predictive accuracy was significant with F(1, 703) = 32.04, p < 0.001, η² = .009, indicating that the effect of initialization was greater for cluster-level parameter stability tests than for observation-level parameter stability tests. The interaction effect of initialization and random-effects specification on predictive accuracy was significant with F(2, 703) = 18.15, p < 0.001, η² = .01, indicating that the effect of initialization was greater for random-intercept LMM trees than for random-intercept-and-random-slope LMM trees.

The three-way interaction between the model-fitting parameters (Figure 5) yielded an F ratio of F(1, 703) = 34.20, p < 0.001, η² = .009. The best performing model in terms of predictive accuracy was the random-intercept-and-random-slope LMM tree with cluster-level parameter stability tests (mean MSE = 8.41, SD = 2.69); the worst performing model was the random-intercept-and-slope LMM tree with random-effects initialization (mean MSE = 11.20, SD = 4.26).

Figure 5. MSE values for interactions between initialization, parameter stability test and random-effects specification. The y-axis represents the MSE value; the x-axis represents the different model-fitting parameters.


Discussion

Summary

The results of the simulation study showed that cluster-level parameter stability tests yield significantly higher predictive accuracy and smaller tree sizes than observation-level parameter stability tests. Thus, for time-invariant partitioning variables (i.e., variables measured at the cluster level), observation-level parameter stability tests will likely yield a higher type-I error rate.

Furthermore, random-effects initialization yields significantly smaller trees than tree initialization, though at the cost of somewhat lower predictive accuracy. Moreover, the random-intercept-and-random-slope LMM trees with random-effects initialization had difficulty detecting any splits, especially with lower sample sizes. With random-effects initialization, subgroup differences in growth over time may be captured by the random effects instead of the tree. For trees with a more complex random-effects specification, this will yield a lower type-I error rate but a higher type-II error rate.

Limitations and further research

The conclusions about tree complexity in this study are based on tree size alone. The fitted models had to detect the true covariates (x1, x2 and x3) as partitioning variables, resulting in a tree of seven nodes with three inner nodes and four terminal nodes (i.e., subgroups). We assumed that a fitted tree of the ‘true’ size had recovered the correct partitioning variables; however, the structure of the fitted trees was never analyzed in this study. Future research could improve on the current study by investigating the stability of the selected splitting variables and splitting values.

A simulation study gives insight into the dynamics between different data-generating parameters. The values of the data-generating parameters in the simulation study were chosen carefully to reflect reality, but the effects of the model-fitting parameters should also be assessed on real data. Future research should confirm whether the results from the simulation study hold in practice.

Conclusion

LMM trees with either cluster-level parameter stability tests or random-effects initialization improve performance in terms of tree size and predictive accuracy for detecting subgroups with different growth trajectories in longitudinal data. However, if the random-effects specification is more complex (i.e., for random-intercept-and-random-slope LMM trees), initializing with the random effects becomes less advisable; in this case, cluster-level parameter stability tests should be preferred to improve performance. All in all, the best performing model in terms of tree size was the random-intercept LMM tree with random-effects initialization. The best performing model in terms of predictive accuracy was the random-intercept-and-random-slope LMM tree with cluster-level parameter stability tests.


References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cooper, H., & Patall, E. A. (2009). The relative benefits of meta-analysis conducted with individual participant data versus aggregated data. Psychological Methods, 14(2), 165-176.

Fokkema, M., Smits, N., Zeileis, A., Hothorn, T., & Kelderman, H. (2018). Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees. Behavior Research Methods, 50(5), 1-19.

Fokkema, M., & Zeileis, A. (2018). glmertree: Generalized linear mixed model trees. R package version 0.1-2.

Goldstein, H., Browne, W., & Rasbash, J. (2010). Partitioning variation in multilevel models. Understanding Statistics, 1(4), 223-231.

Higgins, J., Whitehead, A., Turner, R. M., Omar, R. Z., & Thompson, S. G. (2001). Meta-analysis of continuous outcome data from individual patients. Statistics in Medicine, 20(15), 2219-2241.

Hothorn, T., Seibold, H., & Zeileis, A. (2019). partykit: A toolkit for recursive partitioning. R package version 1.2-3.

McPhie, M. L., & Rawana, J. S. (2015). The effect of physical activity on depression in adolescence and emerging adulthood: A growth-curve analysis. Journal of Adolescence, 40, 83-92.

R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/.

Strobl, C., Malley, J., & Tutz, G. (2009). An introduction to recursive partitioning: Rationale, application, and characteristics of classification and regression trees, bagging, and random forests. Psychological Methods, 14(4), 323-348.

Thurston, H., & Miyamoto, S. (2018). The use of model based recursive partitioning as an analytic tool in child welfare. Child Abuse & Neglect, 79, 293-301.

Zeileis, A., Hothorn, T., & Hornik, K. (2008). Model-based recursive partitioning. Journal of Computational and Graphical Statistics, 17(2), 492-514.
