Does the “Sleeper Effect” exist? : a multilevel meta-analysis on stability versus change from immediate to follow-up effects of parenting interventions on reduced disruptive child behavior

Academic year: 2021


Faculty of Social and Behavioral Sciences

Graduate School of Child Development and Education

Does the “Sleeper Effect” exist? A multilevel meta-analysis on

stability versus change from immediate to follow-up effects of

parenting interventions on reduced disruptive child behavior.

Research Master Child Development and Education Thesis 2

Jolien van Aar

Supervised by: Dr. P. H. O. Leijten
2nd Reviewer: Prof. Dr. G. J. Overbeek
3rd Reviewer: Dr. J. J. Asscher


Abstract

Background: Parenting interventions aim to start a chain of change in which positive parent and child behavior reinforce each other. To what extent do the effects of parenting interventions on child behavior increase after intervention termination (i.e., sleeper effects), as opposed to remaining stable or even fading out?

Objectives: To test whether there are sleeper or fade-out effects by analyzing the difference between immediate and later follow-up effects of parenting interventions on reduced disruptive child behavior, using a three-level meta-analytic approach.

Data sources: PsycINFO, Medline, and ERIC were searched with all combinations of the terms ‘parenting’, ‘child’, ‘disruptive behavior’ and ‘follow-up’. The last search was run on 6 April 2015. In addition, reference lists of relevant systematic reviews and meta-analyses were checked, and experts in the field were consulted.

Review methods: Forty randomized controlled trials with a controlled follow-up of at least one month were included (n = 6232), generating 42 effect sizes. The standardized controlled mean difference between intervention and control conditions from posttest to follow-up was computed using means and standard deviations.

Results: A three-level random-effects model estimated an overall effect size of .036 (95% CI = -.040 to .111, p = .348), indicating that there was no further reduction of disruptive child behavior after immediate posttest. This means that there are no sleeper or fade-out effects of parenting interventions.

Implications: Parenting interventions in general lead to effects that are sustained until later follow-up. Referring to a sleeper effect when no follow-up measurement is performed is not justified.


Does the “Sleeper Effect” exist? A multilevel meta-analysis on stability versus change from immediate to follow-up effects of parenting interventions on reduced disruptive child behavior

Parenting interventions are widely used to support parents in optimizing family functioning, and are considered one of the most effective strategies to reduce child disruptive behavior (Kazdin & Weisz, 1998; McCart, Priester, Davies & Azen, 2006). The aim of these parenting programs is to prevent or treat disruptive behavior in children through improvement in parenting cognitions and/or practices (Kazdin & Weisz, 1998). However, despite their goal to affect child behavior, some interventions show positive effects on parental cognitions and/or practices, but not on child behavior (e.g., Deković et al., 2010). When minimal or no immediate effects are found, researchers sometimes refer to possible “sleeper effects” (e.g., Barlow et al., 2007; Whittingham, Sofronoff, Sheffield & Sanders, 2009). Sleeper effects then mean that some changes in child behavior become more pronounced at a later follow-up than immediately posttest. The present meta-analysis investigates whether effects of parenting interventions on child disruptive behavior indeed increase after termination of the intervention. In other words, it tests whether the sleeper effect exists.

There are various definitions of the term sleeper effect. The term appears to have first been used in a book by Kagan and Moss (1962), who defined it as a “stronger relation between a variable measured early and measured late in development than between similar variables measured contemporaneously or more contiguously in time” (p. 278). However, some researchers have criticized the lack of evidence for the sleeper effect (Clarke & Clarke, 1981) and the vagueness of the term, because it does not distinguish between a delayed intervention effect and intervention effects that remain stable until later follow-up (Seitz, 1981). It has been suggested that positive effects are present immediately after intervention but are simply not yet detectable: evidence of learning becomes apparent at a later point in development or when a different situation emerges (Gray & McCormick, 2005; Kendall, 2006). More recently, parenting program evaluation studies have used the term sleeper effect in many ways, without providing or referring to a definition (e.g., Barlow et al., 2007; Sofronoff et al., 2011; Deković et al., 2010). The vagueness of this phenomenon of sleeper effects of parenting interventions calls for clarity and for evidence of its (non)existence.

In the present study, we define sleeper effects as further growth in desired outcomes after intervention termination. This is in line with the definition of Kagan and Moss (1962), as the relation between intervention and outcome measure increases over time. We thus take a somewhat broader approach than those who define a sleeper effect as a delayed intervention effect, which suggests that no effect may be evident at post intervention. As a result, a non-existing post-intervention effect is not a criterion in our definition of a sleeper effect. It could be that a large effect is found at posttest, but that this growth continues, resulting in an even larger effect at follow-up. Sleeper effects thus indicate an increase in desired effects on child behavior while parents receive no further support from the program.

Reasons to Expect a Sleeper Effect

First, the environment may be relatively slow in responding to, reinforcing, and enhancing small initial changes in the behavior of the parent (Weiss, Catron, & Harris, 2000). Because parents may at first be unsure about using new parenting skills, and therefore start with small changes in their behavior, children may initially respond little. When parents are reinforced by small responses in their child’s behavior, they continue to show the newly learned skill and use it more often. If this behavior is then reinforced again by the child’s behavior, the positive effect may grow. Patterson’s theory of Coercive Cycles (1982) describes how disruptive child behavior and negative parenting behavior reinforce each other. The parent-child coercive cycle can start with either the child or the adult making a demand on the other, which leads to arguing, until one of the two becomes intimidating enough that the other gives in. In the end both parties are reinforced, either negatively (by the other party withdrawing) or positively (by getting what they demanded), or both. Parenting interventions aim to break these coercive cycles by teaching parents to model positive behavior, reinforce positive child behavior, and extinguish negative behavior. The mediating role of improved parenting behavior in the effect of parenting interventions on reduced disruptive child behavior is well established (Dishion et al., 2008; Gardner, Hutchings, Bywater, & Whitaker, 2010). After participation in a parenting intervention, a reversed cycle may start, in which positive parenting behavior and positive child behavior reinforce each other. If positive parent and child behavior continue to reinforce each other after intervention termination, this may lead to delayed intervention effects (i.e., a sleeper effect).

Second, and relatedly, parents who received the parenting intervention may feel unsure at first about their new skills and about whether they will be able to maintain the gains they have made. By a later follow-up, they have had time to gain confidence in their new skills and may recognize their benefits (e.g., Sofronoff et al., 2011; Whittingham et al., 2009). As child disruptive behavior is most often reported by the parent, the parent may report reduced disruptive behavior with more confidence at follow-up than at immediate posttest, which causes a delay in the report of children’s behavior change.

Reasons to Expect a Fade-Out Effect

A fade-out effect assumes that short-term improvements are apparent immediately after the intervention, but that this effect is not sustained into later life without further support, and even fades out (Gray & McCormick, 2005). The use of booster sessions (e.g., Tuning into Kids; Havighurst, Wilson, Harley, Prior, & Kehoe, 2010) and the use of sessions (e.g., Triple P group level 4; Sanders, 1999) reflect the assumption that families are not always capable of maintaining intervention effects. Likewise, programs that are spread over several years (e.g., Family Check-Up; Dishion et al., 2008) assume that families are not always capable of maintaining intervention effects without further support. Parents who are fully willing to change might slowly lose their good intentions and fall back into previous behavior patterns (e.g., Hanish, Hautmann, Eichelberger, Plück, & Döpfner, 2010). This may lead to fade-out effects.

Reasons to Expect No Sleeper or Fade-Out Effect

First, severe disruptive behavior problems in early childhood are known to be persistent through life (e.g. Pingault, Rijsdijk, Zheng, Plomin, & Viding, 2015). Factors on multiple fronts (e.g., temperament, attachment, peer relationships and parenting behavior) influence the development and maintenance of disruptive child behavior (Burke, Loeber, & Birmaher, 2002). Given the persistence and range of factors that influence disruptive child behavior, interventions targeting these multiple fronts are needed to fight it. In most parenting interventions, parents are taught specific skills to create a better environment for the child to interact with (e.g. Incredible Years; Webster-Stratton & Reid, 2004). Though parenting stands out as a key point of influence on children’s behavior (Burke et al., 2002), this is only one front that is targeted. To expect a further reduction in disruptive child behavior after a parenting intervention, without further support, seems to underestimate the persistence and scope of disruptive child behavior.

Second, some previously reported sleeper effects may be the result of methodological artifacts. For example, a sleeper effect may simply result from random fluctuation, a risk that is greater in studies with small sample sizes (Clarke & Clarke, 1981). In a ten-year follow-up of families receiving a preventive parenting intervention, for example, there appeared to be random fluctuations in significant results, such as an increase in affective problems for the control group (n = 41) after the age of seven while other aspects of behavior problems decreased (van Aar, Asscher, Zijlstra, Deković, & Hoffenaar, 2015). Furthermore, minimal treatment or treatment as usual offered to parents in the control condition may lead to temporary benefits, while a manualized parenting intervention leads to more sustained benefits. As a result, both conditions may show effects immediately post intervention, but only the intervention condition shows effects at follow-up. The effect size, as the difference between the two conditions, would then be zero at posttest, while the effect size at follow-up would be larger than zero. For example, parents in the Project Support intervention condition as well as parents in the minimal contact condition reported reduced disruptive child behavior at immediate posttest, but only the parents in the intervention condition reported reduced disruptive child behavior at follow-up (Jouriles et al., 2009). This resulted in a small effect immediately post intervention and a large effect at follow-up. Finally, there might be a selection effect: families who participate in later follow-up measurements may be more willing and motivated to invest in interventions than families who drop out between immediate posttest and later follow-up. The participating families may thus report more positive results than drop-out families. This selective drop-out may result in a slightly higher effect size at follow-up compared to posttest and therefore suggest a sleeper effect.

Moderating Effects on Sleeper or Fade-Out Effects

A sleeper or fade-out effect is a purely descriptive phenomenon, as it could occur for a variety of reasons. Besides possible differences in the causes of a sleeper or fade-out effect, there may also be differences in the extent to which intervention effects increase or decrease over time. The variability in this increase or decrease over time is therefore likely to depend on characteristics of the samples and studies involved.

First, the level of initial problem behavior of the child may moderate a possible sleeper or fade-out effect. Both preventive and treatment programs are included in this meta-analysis. However, prevention and treatment interventions target families with different levels of initial problems and have been found to differ in effect (Menting, Orobio de Castro, & Matthys, 2013). Families whose children show clinical levels of problem behavior tend to benefit more from parenting interventions than those with lower levels of problem behavior prior to the intervention (e.g., Lundahl, Risser, & Lovejoy, 2006), also at later follow-up (Leijten, Raaijmakers, Orobio de Castro, & Matthys, 2013). This is not very surprising considering the room for improvement of families who experience clinical levels of problem behavior compared with families whose children show fewer behavior problems.

Interventions for children with more severe initial problems may show a stronger sleeper effect, because the reversed cycle patterns may be more present in these families. As families with clinical problems show larger improvements in positive parenting during the intervention period (Lundahl et al., 2006), the child may react more strongly to this positive change, which in turn reinforces the mother’s positive parenting. The chain of change may thus be more evident for families with clinical levels of initial problem behavior, resulting in a stronger sleeper effect. The initial level of children’s behavior problems is also expected to predict a stronger fade-out effect, because families who had more room to improve may also have more room to fall back into old behavior patterns. Furthermore, parents whose children show clinical levels of problem behavior more often have depressive symptoms (Gardner et al., 2010), which may cause difficulties in maintaining newly learned skills until later follow-up.

Second, length of follow-up may moderate a possible sleeper or fade-out effect. For both a sleeper and a fade-out effect, we would expect that after, for example, three months the effect would not yet be very large: for a sleeper effect, the reversed cycles would not yet have become very apparent, and similarly, for a fade-out effect, parents’ fallback into old behavior patterns may not yet have become apparent. A time span of three months is often used to implement booster sessions (e.g., Kim et al., 2014). With increasing length of follow-up, either the parents are gradually reinforced further, resulting in a sleeper effect, or the parents gradually lose their strength to maintain behavioral changes, resulting in a fade-out effect. Therefore, a longer follow-up (i.e., more months) may reveal a larger sleeper or fade-out effect.

How We Build on Previous Meta-Analyses

To our knowledge, no previous meta-analysis has tested whether effects of parenting interventions on reduced disruptive child behavior increase over time, and thus whether sleeper effects exist. Lundahl et al. (2006) found that the mean immediate effect of parenting interventions on reduced child disruptive behavior was d = 0.42 (68 studies) and the mean follow-up effect was d = 0.21 (21 studies). Although no significance test was conducted, these results indicate that the effect at follow-up is smaller than immediately post intervention, suggesting a fade-out rather than a sleeper effect. However, their follow-up effect size was based on a subset of the studies analyzed for immediate effects, and included non-randomized controlled trials. Another meta-analysis examined both immediate and follow-up effects of preventive parenting programs, but could include only three randomized controlled trials that provided controlled follow-up data, because its focus was on parents of children under the age of three (Barlow, Smailagic, Ferriter, Bennett, & Jones, 2010). Other meta-analyses have also included non-controlled follow-ups (e.g., Leijten et al., 2013). Our study provides an overview of all published randomized controlled trials in which the controlled design was still intact at follow-up.


The current study aims to shed light on the extent to which sleeper or fade-out effects exist in the effects of parenting interventions on reduced disruptive child behavior. We therefore compared the effects of parenting interventions immediately after intervention termination with the effects at later follow-up. To our knowledge, we are the first to make this direct comparison of immediate and follow-up results of parenting interventions on child disruptive behavior in order to examine whether sleeper effects exist. First, we pit two main hypotheses against each other: on the one hand, that the effects of parenting interventions on child disruptive behavior at follow-up are larger than the effect immediately post intervention (i.e., sleeper effects exist); on the other hand, that the effects at follow-up are smaller than the effect size post intervention (i.e., fade-out effects exist). If both hypotheses are rejected, intervention effects remain stable until follow-up. Second, the initial level of problem behavior of the child and the number of months between immediate post intervention and later follow-up are tested as putative moderators of the stability or change in effects. We hypothesize that clinical levels of initial problem behavior lead to a larger sleeper or fade-out effect than non-clinical levels, and that a longer follow-up results in a larger sleeper or fade-out effect than a shorter follow-up.

Method

The study protocol of this multilevel meta-analysis was registered in PROSPERO (CRD42015020193).

Information Sources

We searched for randomized controlled trials of parenting interventions that targeted child disruptive behavior using the electronic databases PsycINFO, Medline, and ERIC. The last search was run on 6 April 2015. No limit was set on the date of publication. In addition, reference lists of relevant systematic reviews and meta-analyses were checked, and experts in the field were consulted. The search terms included a combination of all variants of the words parenting (1), child (2), disruptive behavior (3) and follow-up (4). An example of the search terms for the PsycINFO database is provided in Appendix 1.

Eligibility Criteria

Nine criteria were used for inclusion. Studies were included if they 1) reported on a parenting intervention aimed at reducing child disruptive behavior, 2) performed a pretest, a posttest, and a follow-up of at least one month, 3) had a randomized intervention and control condition at pretest, posttest and follow-up, each composed of at least five participants, 4) were aimed at parents of children aged 1-15 with a maximum mean age of 12, 5) were not specifically aimed at parents of children who were developmentally delayed, 6) involved parenting interventions aimed primarily at parents and consisting of at least one face-to-face meeting, 7) used the same parental report of child disruptive behavior across measurement occasions, 8) reported means and standard deviations of the intervention and control conditions at pretest, posttest and follow-up, and 9) were written in English.

Study Selection

First, the titles of the retrieved studies were reviewed by the first author to determine potential eligibility. Second, abstracts and, if necessary, full texts were reviewed by the first author to determine whether they met the inclusion criteria. Full texts of studies that appeared to meet the criteria were then critically appraised against the inclusion criteria. When no full text was available, the authors were asked to provide it.

Data Extraction

Information was extracted from each included trial on: (1) study characteristics (including authors, year and journal of publication, and country); (2) characteristics of trial participants (including number of participants, age, gender, ethnicity, and clinical/non-clinical levels of initial behavior problems); (3) type of intervention (including program, dose, target (parent, or parent and child), and control condition (no treatment / care as usual / minimal contact)); (4) outcome measures (including the instrument used, mean pre-intervention, post-intervention and follow-up scores and their standard deviations, and length of follow-up).

If multiple measures of disruptive behavior were reported, we selected the Eyberg Child Behavior Inventory (ECBI; Eyberg & Pincus, 1999) Intensity Scale, followed by the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1997) Conduct Problems score, and then the Child Behavior Checklist (CBCL) Externalizing scale. Priority was given to the ECBI because it was the most frequently used instrument in our sample (22 out of 42 trials). For trials that did not include the ECBI, alternative measures of disruptive child behavior were used. If multiple follow-up measurements were reported, we selected the longest follow-up period, to make use of the maximum length of follow-up data.
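As an illustration, the selection rules described above can be sketched in code (a minimal sketch; the function names and instrument labels are ours, not the study's actual coding scheme):

```python
# Priority order for disruptive-behavior measures, as described above.
MEASURE_PRIORITY = ["ECBI Intensity", "SDQ Conduct Problems", "CBCL Externalizing"]

def select_measure(available):
    """Return the highest-priority disruptive-behavior instrument a trial reported."""
    for measure in MEASURE_PRIORITY:
        if measure in available:
            return measure
    # Fall back to whatever alternative measure the trial reported, if any.
    return available[0] if available else None

def select_follow_up(follow_ups_in_months):
    """Select the longest follow-up period (in months) reported by a trial."""
    return max(follow_ups_in_months)
```

For a trial reporting both the CBCL and the ECBI, `select_measure` would pick the ECBI Intensity Scale, matching the priority rule above.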

Studies were coded by the first author using a data extraction form for study and sample characteristics, intervention procedures, and outcome measures. We followed the Lipsey and Wilson (2000) guideline for testing the reliability of a single coder, by having the coder code the studies again at a later time. Data were therefore re-extracted for all studies after XX months, and reliability was XX. To prevent bias in this estimate, the period between the two coding sessions had to be long enough for the coder not to remember the parameters from the first session; this reliability has therefore not been estimated yet.

Where means and/or standard deviations of the pretest, posttest or follow-up were not available in published reports, study authors were contacted to supply the missing information. After two weeks, a reminder stating a deadline was sent. Duplicate publications of the same data were avoided by juxtaposing author names, intervention comparisons, sample sizes, and pre-intervention outcomes.


The primary outcome measure was the standardized difference in a continuous score of child disruptive behavior. The current research focused on the posttest–follow-up controlled design, which could be transformed from a pretest-posttest-control design (Morris, 2008): for our analysis, pretest and posttest were substituted by posttest and follow-up. The estimated effect size was the difference in change between repeated measures of the intervention and control conditions, divided by the pooled standard deviation at posttest (equation 8, Morris, 2008):

d_ppc = c_P × [(M_fu,I − M_post,I) − (M_fu,C − M_post,C)] / SD_post,pooled

where c_P was a bias adjustment based on sample size (equation 10, Morris, 2008), M_post and M_fu were the mean outcome scores at posttest and follow-up for the intervention (I) and control (C) conditions, and the pooled standard deviation of the intervention and control conditions at posttest (SD_post,pooled) was used to limit the bias in the variance (equation 11, Morris, 2008).
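As a sketch, the effect-size computation can be written out as follows. The formulas follow equations 8, 10, and 11 of Morris (2008) as described above; the variable names are ours, and the sign is oriented so that a stronger reduction of problem behavior in the intervention condition yields a positive value, which is an assumption about how the scores were coded:

```python
import math

def bias_adjustment(n_treat, n_ctrl):
    """Small-sample bias adjustment c_P (Morris, 2008, eq. 10)."""
    df = n_treat + n_ctrl - 2
    return 1.0 - 3.0 / (4.0 * df - 1.0)

def pooled_sd(sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Pooled posttest standard deviation (Morris, 2008, eq. 11)."""
    num = (n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2
    return math.sqrt(num / (n_treat + n_ctrl - 2))

def d_ppc(m_post_t, m_fu_t, m_post_c, m_fu_c, sd_post_t, sd_post_c, n_t, n_c):
    """Posttest-to-follow-up controlled effect size (Morris, 2008, eq. 8).

    Change is computed as posttest minus follow-up, so that a further
    reduction of disruptive behavior in the intervention condition
    (relative to control) yields a positive effect size.
    """
    c_p = bias_adjustment(n_t, n_c)
    sd = pooled_sd(sd_post_t, sd_post_c, n_t, n_c)
    change_t = m_post_t - m_fu_t  # reduction in intervention condition
    change_c = m_post_c - m_fu_c  # reduction in control condition
    return c_p * (change_t - change_c) / sd
```

For example, with 50 families per condition, a posttest pooled SD of 10, and a further five-point reduction in the intervention condition only, the effect size is roughly 0.5 after the small-sample adjustment.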

We first estimated pretest–posttest and pretest–follow-up effect sizes as descriptive effects of parenting interventions on immediate and follow-up reductions in disruptive child behavior, using the same pretest-posttest-controlled formula. Next, we tested the sleeper and fade-out effect hypotheses by estimating the difference in change between repeated measures of the intervention and control conditions, divided by the pooled standard deviation at posttest. A positive effect size reflected a stronger reduction of disruptive child behavior in the intervention condition than in the control condition.

The sampling variance was estimated using the correlation between repeated measures, the standard deviation, and the mean effect size (equation 25, Morris, 2008). Because more than half of the studies (52%) used the ECBI Intensity Scale to measure child disruptive behavior, we used the test-retest reliability of this scale (r = .86; Robinson, Eyberg & Ross, 1980) as the correlation between posttest and follow-up to estimate the sampling variance.

Risk of Bias in Individual Studies

To ascertain the validity of eligible randomized trials, the first author determined the adequacy of randomization and concealment of allocation; only studies with an adequate randomization process were included. Blinding of participants and health care providers was not feasible: all of them knew whether parents received the intervention or not. Inequality of pretest scores was accounted for in the effect size. Drop-out rates (i.e., the proportion of participants for whom the investigators were not able to ascertain outcomes) were coded to provide insight into the reliability of each study.

We hypothesized that study effect sizes would differ according to the methodological quality of the studies, resulting in heterogeneity. We therefore used a random-effects model to analyze the data.

Planned Methods of Analysis

Because two studies reported treatment effects of two parenting interventions, generating multiple effect sizes, the assumption of independence between effect sizes was violated. To account for this dependency, a multilevel approach was used with a restricted maximum likelihood (REML) estimation method, using the metafor package in R (Viechtbauer, 2010; Wibbelink & Assink, 2015). A three-level model was estimated, which included the sampling variance for each effect size (level 1), variance between effect sizes within a study (level 2), and variance in effect sizes across studies (level 3). The model generates an overall effect size that indicates whether there is a difference between the intervention and control conditions in the change in parent-reported disruptive behavior from immediate posttest to later follow-up. If the overall effect size from post intervention to follow-up was significantly larger than zero, this would support the sleeper effect; if it was significantly smaller than zero, this would support the fade-out effect.

Next, heterogeneity at the first level, i.e. the sampling variance, was estimated using the Q-test. This is, however, not very informative, since we are more interested in the variance between effect sizes within and across studies. Therefore, heterogeneity at the second and third levels was estimated by comparing the fit of the original (full) model, with the variances freely estimated, to the fit of a model with the variances restricted to zero. If the full and restricted models fit equally well, the variances do not differ from zero. Although this is the preferred way to estimate the variances between effect sizes within and across studies, the method is sensitive to low power (Wibbelink & Assink, 2015). As we included only 42 effect sizes, we also estimated heterogeneity within and between studies using the 75% rule (Hunter & Schmidt, 1990): there is substantial heterogeneity within and between studies if less than 75% of the variance is attributable to sampling variance. The distribution of variance across the three levels was estimated using the Higgins and Thompson (2002) method as proposed by Cheung (2014), to give insight into how much of the variance results from sampling variability (level 1), within-study variance (level 2), and between-study variance (level 3).
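The variance decomposition across the three levels can be illustrated with a small sketch following the logic of Cheung (2014): each level's share is its variance component divided by the total. The input values below are hypothetical, not the estimates reported in this meta-analysis:

```python
def variance_distribution(sampling_var, within_var, between_var):
    """Percentage of total variance at each level of a three-level model.

    sampling_var: typical sampling variance of the effect sizes (level 1)
    within_var:   variance between effect sizes within studies (level 2)
    between_var:  variance between studies (level 3)
    """
    total = sampling_var + within_var + between_var
    return tuple(round(100.0 * v / total, 2)
                 for v in (sampling_var, within_var, between_var))

# Hypothetical variance components, for illustration only:
level1, level2, level3 = variance_distribution(0.01, 0.02, 0.07)
print(level1, level2, level3)
```

Under the 75% rule described above, a level-1 share below 75% would indicate substantial heterogeneity within and between studies.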

We subsequently fitted a three-level mixed-effects model to identify whether the moderators length of follow-up and initial problem behavior significantly explained variation in effect sizes within and between studies. The moderators were included in the model separately to test their effects. The final model was estimated using significant moderators only.

Risk of Bias Across Studies

Follow-up studies that produced significant growth of effects may have been more likely to be published than studies without such effects (Dwan et al., 2008). Therefore, we assessed the possibility of publication bias by evaluating a funnel plot of the posttest–follow-up effect sizes and applying Egger’s test for asymmetry. The trim-and-fill method was then used to examine whether a sleeper effect emerged once publication bias, as conventionally defined in meta-analysis, was removed from the dataset by imputing effect sizes to make the funnel plot more symmetrical.

Results

Our search yielded 40 studies (42 trials). The searches of the PsycINFO, Medline, and ERIC databases provided a total of 6546 citations. After adjusting for duplicates, 6251 remained. Of these, 4932 studies were discarded because, based on their titles, these papers clearly did not meet the inclusion criteria. Another 1236 studies were discarded after reviewing the abstracts and, if necessary, the full text; most of these had no controlled follow-up. The full texts of the remaining 96 citations were examined in more detail. Thirty-four of these studies did not meet the inclusion criteria in ways not detectable from the abstracts: no RCT (k = 2), no controlled follow-up (k = 7), no pretest performed (k = 4), no posttest performed (k = 5), age out of range (k = 4), no face-to-face meeting (k = 3), intervention not primarily targeting parents (k = 2), no maternal report of child behavior at all measurement points (k = 6), and not reported in English (k = 1). In addition, some publications reported on an already included trial (k = 9), or the full text of the study was not available (k = 3). We contacted two authors for full texts (the third author could not be traced); one provided the full text. Nineteen authors were contacted with a request to provide means and/or standard deviations. Sixteen authors (73%) responded and of these, nine authors (41%) were able to provide the means and/or standard deviations. Studies for which the required information was not provided (k = 10) were excluded (Figure 1).

Study Characteristics

Most studies (k = 40) were conducted in Western countries. Of the studies reporting ethnicity (62.5%), 17 included mainly white families and 9 included mainly ethnic minority families. Most studies (91.2%) included more boys than girls; three studies included only boys. Ten studies included only children who showed clinical levels of initial problem behavior, and three studies excluded these children. The follow-up periods of the included trials extended up to three years. Table 1 provides the other study characteristics and the drop-out rates between pretest and follow-up.

Results of Individual Studies

Means and standard deviations of child disruptive behavior were available from 40 trials, which randomized 6232 families to either the intervention or the control condition. When both mothers and fathers reported on child behavior, the mean of both means and their pooled standard deviation were used. Means, standard deviations, and effect sizes per trial are provided in Table 2, and the posttest-to-follow-up effect sizes are shown in a forest plot (Figure 2).

Synthesis of Results

The overall effect of parenting interventions, relative to the control condition, on reduced disruptive child behavior between pretest and immediate posttest was significant and small to moderate (ES = .357, 95% CI [.243, .471], p < .001). The overall effect from pretest to follow-up seemed slightly larger but was also small to moderate (ES = .391, 95% CI [.270, .512], p < .001). More importantly for the purpose of this study, the overall effect from posttest to follow-up was very small and non-significant (ES = .036, 95% CI [-.040, .111], p = .348). This indicates stability in the effects of parenting interventions on reduced disruptive child behavior between immediate posttest and later follow-up: there is no sleeper effect and no fade-out effect.
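For intuition, the sketch below shows how a pooled effect size and its 95% confidence interval arise from per-trial effect sizes and sampling variances. It is a fixed-effect, inverse-variance simplification with made-up numbers; the thesis itself fits a three-level random-effects model, so this is didactic only.

```python
import math

def pool_effect_sizes(effects, variances):
    """Inverse-variance (fixed-effect) pooling: weight each trial's
    effect size by the reciprocal of its sampling variance, then
    form a 95% CI from the standard error of the weighted mean."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical posttest-to-follow-up effect sizes and variances.
es, ci = pool_effect_sizes([0.05, -0.04, 0.10], [0.02, 0.03, 0.025])
# es ≈ 0.042 with a CI spanning zero: no significant change, mirroring
# the near-zero posttest-to-follow-up effect reported above.
```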

There was significant heterogeneity between effect sizes across levels (Q(41) = 182.499, p < .001, I² = 77.53%), which supports our hypothesis that there is more variance between effect sizes than can be expected from sampling error alone, and that sample or design characteristics may moderate the main effect. Next, the distribution of variance across levels was estimated. Sampling variability accounted for 15.42% of the total variance (level 1), differences between effect sizes within studies for 21.41% (level 2), and differences between studies for 63.17% (level 3). Thus, the variance to be explained lies mainly between studies (level 3), and study characteristics may moderate the main effect.
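Both statistics follow directly from their definitions. The sketch below reproduces the reported I² from Q and its degrees of freedom; the variance-decomposition inputs are illustrative values of our own, since the estimated variance components themselves are not reported here.

```python
def i_squared(q, df):
    """I^2 (Higgins & Thompson, 2002): the share of total variation
    in effect sizes beyond what sampling error alone would produce."""
    return max(0.0, (q - df) / q) * 100

def variance_distribution(sampling_var, tau2_within, tau2_between):
    """Percentage of total variance at each level of a three-level
    model (cf. Cheung, 2014): level 1 = sampling variance, level 2 =
    within-study variance, level 3 = between-study variance."""
    total = sampling_var + tau2_within + tau2_between
    return tuple(100 * v / total
                 for v in (sampling_var, tau2_within, tau2_between))

print(round(i_squared(182.499, 41), 2))  # 77.53, matching the reported I^2
```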

Severity of children’s initial problem behavior was included as a categorical moderator (clinical level of problem behavior versus non-clinical versus mixed). The level of children’s initial problem behavior was not a significant moderator (QM(df = 2) = 0.770, p = 0.474), and significant residual heterogeneity remained unaccounted for by this model (QE(df = 24) = 128.727, p < .001). This indicates that the stability in effects of parenting interventions on reduced disruptive child behavior between immediate posttest and later follow-up was similar for trials that included children with clinical, non-clinical, or mixed levels of disruptive behavior.

Second, we tested whether the length of follow-up explained some of the heterogeneity in effect sizes. The QM statistic, which indicates whether a moderator is significant, showed that length of follow-up was not a significant moderator (QM(df = 1) = 0.859, p = 0.360). Again, residual heterogeneity remained that was not accounted for by this model (QE(df = 40) = 180.091, p < .001), indicating that length of follow-up does not significantly explain variance in effect sizes from posttest to follow-up.

Risk of Bias Across Studies

Figure 3 shows the funnel plot with the posttest to follow-up effect sizes on the X-axis and the standard errors on the Y-axis. An effect size of zero means that there is no change in children’s disruptive behavior from posttest to follow-up, indicating stability of effect. Effect sizes to the right of zero indicate sleeper effects; effect sizes to the left of zero indicate fade-out effects. The plot appears symmetrical, and Egger’s regression test showed no significant asymmetry (t = 0.37, df = 40, p = .71). Likewise, the Trim & Fill procedure, which imputes missing studies when there is asymmetry, led to no differences in model fit or effect size; no missing studies were imputed. The symmetry of the plot suggests that studies reporting positive change between immediate posttest and follow-up effects of parenting interventions and studies reporting negative change were equally represented; therefore, no form of positive-results bias was detected. However, few studies reported effect sizes of zero, suggesting that trials that did not find any change between immediate posttest and follow-up may have been reported less often.
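Egger's regression test can be sketched in its classic form: regress each trial's standardized effect (ES/SE) on its precision (1/SE) and test whether the intercept departs from zero, which would indicate asymmetry. This plain-Python version is illustrative only; the actual analysis was presumably run with the metafor package in R.

```python
import math

def egger_test(effects, std_errors):
    """Classic Egger test for funnel-plot asymmetry: simple linear
    regression of the standardized effect (ES/SE) on precision (1/SE);
    returns the intercept's t statistic and its residual df."""
    y = [e / s for e, s in zip(effects, std_errors)]   # standardized effects
    x = [1.0 / s for s in std_errors]                  # precision
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    sigma2 = sum(r * r for r in resid) / (n - 2)       # residual variance
    se_intercept = math.sqrt(sigma2 * (1.0 / n + mx ** 2 / sxx))
    return intercept / se_intercept, n - 2

# A perfectly symmetric toy funnel yields an intercept t of (essentially) zero.
t, df = egger_test([0.1, -0.1, 0.2, -0.2], [0.1, 0.1, 0.2, 0.2])
```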

The funnel plot shows one outlier at the bottom of the plot with a very high standard error of .27. This study (Gross et al., 2003) had a high standard error due to its small sample size (N = 17). We did not exclude this study based on its sample size, but we did perform a sensitivity analysis to check whether excluding it would affect the results; results were the same with and without this study.

Discussion

The present meta-analysis investigated whether parenting interventions show sleeper effects (i.e., further reductions in disruptive child behavior after intervention termination) or fade-out effects (i.e., a fall-back to previous behavior). We estimated the change (versus stability) in child disruptive behavior from immediate posttest to later follow-up between families in parenting interventions and families in control conditions. The 40 included randomized controlled trials show neither a sleeper effect nor a fade-out effect: the average effect size of change between immediate posttest and later follow-up is close to zero and non-significant. Both hypotheses, of sleeper effects and of fade-out effects, are therefore rejected. This indicates that levels of children’s disruptive behavior remain stable after intervention termination, at least for the follow-up periods included in this study.

Our finding that intervention effects remain stable does not replicate the finding by Lundahl and colleagues (2006), who found that effect sizes immediately post intervention (ES = .42) seemed larger than effect sizes at later follow-up (ES = .21). This difference may be explained by the fact that we included only randomized controlled trials with controlled follow-up, whereas Lundahl et al. (2006) also included quasi-experimental designs with uncontrolled follow-up. In addition, we calculated effect sizes based on the difference in change from pretest to follow-up between the intervention and control conditions, to account for baseline differences between conditions (cf. Morris, 2008), whereas Lundahl et al. (2006) calculated effect sizes based on differences in follow-up means. These methodological differences may have resulted in different findings regarding follow-up effect sizes.
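The effect-size construction described here, a difference in change standardized by the pretest SD (cf. Morris, 2008), can be sketched as follows. This is the general form of such an estimator with a small-sample correction, not necessarily the thesis' exact computation; the sign convention assumes lower scores mean less disruptive behavior.

```python
import math

def d_change(m_pre_t, m_post_t, sd_pre_t, n_t,
             m_pre_c, m_post_c, sd_pre_c, n_c):
    """Pretest-posttest-control effect size in the spirit of Morris
    (2008): difference in pre-to-post change between intervention (t)
    and control (c), divided by the pooled pretest SD, multiplied by
    a small-sample bias correction."""
    sd_pool = math.sqrt(((n_t - 1) * sd_pre_t ** 2 +
                         (n_c - 1) * sd_pre_c ** 2) / (n_t + n_c - 2))
    correction = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return correction * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pool

# Hypothetical trial: the intervention group drops 6 points, the control
# group drops 2, so d ≈ -0.79 (a larger reduction in the intervention group).
d = d_change(20, 14, 5, 50, 20, 18, 5, 50)
```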

The finding that intervention effects remain stable after intervention termination supports neither the idea that families continue to improve after intervention termination nor that they tend to fall back. A stable and sustained pattern of reduced disruptive child behavior following intervention was expected given the persistence of disruptive behavior problems (Pingault et al., 2015); it seemed unlikely that, without further support, these persistent behavior problems would reduce further. Our results suggest that the benefits families gain from parenting interventions, at least in terms of reduced disruptive child behavior, are visible immediately after intervention termination. Based on our findings, no further improvements should be expected in the months or years after intervention termination. If further reduction of disruptive behavior is necessary, additional support may be needed, for example booster sessions to further improve parenting behavior, or additional interventions targeting other fronts of the child’s life, such as school or the child itself (Burke et al., 2002). The Incredible Years program, for example, comprises a parent intervention, a teacher training, and a child social skills and problem-solving training (Webster-Stratton & Reid, 2004). These training programs targeting different fronts of the child’s life may each contribute and complement one another in reducing disruptive child behavior (Webster-Stratton, Reid, & Hammond, 2004).

Since there was substantial variance between the effect sizes for reduced disruptive behavior from immediate posttest to later follow-up, we examined whether initial disruptive child behavior and length of follow-up moderated the effect and thus explained the heterogeneity. A higher level of children’s initial disruptive behavior did not predict a larger sleeper or fade-out effect. This means that families whose children show clinical levels of initial problem behavior and those whose children show non-clinical levels follow an equally stable pattern of behavior development after the intervention period. So, although children with more severe initial levels of problem behavior seem to benefit more during the intervention period and up to one-year follow-up (Leijten et al., 2013), this difference seems to be due to the gains made during the intervention period and not to a different course after the intervention period. Also, length of follow-up did not predict a stronger sleeper or fade-out effect: we saw similar levels of stability in intervention effects in trials with follow-up periods ranging from 1 month to 3 years, indicating that, within this range, the length of the follow-up period investigated does not matter.

However, power may have been an issue in detecting moderating effects. With a sample of 42 effect sizes, power to find small moderating effects is limited. Although heterogeneity is needed for moderator analyses, a large amount of heterogeneity, compared to a small amount, may result in even lower power to detect moderating effects (Hedges & Pigott, 2004). The absence of significant moderating effects should therefore be interpreted with caution. In this study we chose to include only a single effect size per trial to keep the model as simple as possible. To increase power, future meta-analyses could include multiple effect sizes per trial, that is, multiple follow-up measurement occasions per trial or multiple instruments measuring child disruptive behavior.

The presence of substantial heterogeneity indicates that the outcomes of the individual trials are not consistent: there is more variance between these trials than could be expected from chance. Based on the individual trial effect sizes and their confidence intervals (provided in Table 2 and Figure 2), about 70% of the trials indicate no significant change from immediate posttest to follow-up. Of the trials that do indicate an effect, approximately 60% indicate a positive change (i.e., a sleeper effect) and 40% a negative change (i.e., a fade-out effect). It might be that, in our meta-analysis, the sleeper effects and fade-out effects of individual trials canceled each other out, resulting in an average effect of zero. We were unable to test which factors caused these differences, and no striking differences between trials indicating a sleeper effect and trials indicating a fade-out effect were apparent on inspection. However, trials showing a sleeper effect tended to have somewhat smaller effect sizes at immediate posttest (e.g., smaller than .50), whereas trials showing a fade-out effect tended to have somewhat larger effect sizes (e.g., larger than .50). It might be that the chances of additional intervention benefits are larger when initial intervention benefits are smaller. This might be related to families’ scope for improvement: families that have already improved considerably have less room left to improve, while families that have not yet improved have more. Although we suggested that the initial level of behavior problems would predict a sleeper effect, it might instead be the initial intervention benefits that predict further improvement after the intervention period. Future meta-analyses investigating the effects of parenting interventions on child disruptive behavior are encouraged to investigate a possible moderating effect of initial intervention benefits on the further development of child behavior.


A limitation of the present meta-analysis is missing data in the original trials. Drop-out is always a threat to follow-up studies; the included trials reported drop-out rates up to 42%, which could have led to more positive results. Another threat to meta-analyses in general is publication bias (Dwan et al., 2008). Although the funnel plot and statistical tests revealed no such bias in this study, findings of “no effect”, that is, an absence of change in children’s disruptive behavior from immediate posttest to later follow-up indicating stability of effects, may still be underrepresented. To reduce publication bias, trial registration stating which follow-up occasions are planned and when is highly recommended. This way, follow-up data that were measured but not published could be detected, and authors could be asked to provide the unpublished data for meta-analysis. Finally and importantly, our findings cover a three-year follow-up period, with most data available for one-year follow-up; these results therefore concern short-term follow-up. More randomized controlled trials covering long-term effects are needed to conduct meta-analyses on possible long-term sleeper or fade-out effects.

A strength of this study is that it is the first to test change versus stability in parenting intervention effects on child disruptive behavior from immediate intervention termination to later follow-up. While other meta-analyses focused on follow-up effects that include the change during the intervention period (e.g., Lundahl et al., 2006), we specifically focused on the process of change after the intervention period. These results therefore provide more detailed insight into the development of disruptive child behavior after the intervention has ended. Furthermore, this study is the first to include only randomized controlled trials with follow-up data for which the controlled design was still intact, in contrast to, for example, Leijten et al. (2013), who also included follow-up data for which a controlled counterpart was not available due to waitlist assignment. As change in child behavior may occur simply because the child grows up (Pingault et al., 2015), the effect of an intervention can only be determined by comparing the development of children in families who received the intervention with the development of children in similar families who did not. By including only randomized controlled trials for which the controlled design at follow-up was still intact, our study analyzed the most reliable data in the current field of research on parenting interventions.

The present meta-analysis found no evidence for a sleeper effect or a fade-out effect. Based on our findings, it thus seems ungrounded to expect any further reduction of disruptive child behavior in the months or years after intervention termination. Instead, this meta-analysis shows that the effects of parenting interventions are stable between immediate post-intervention and later follow-up. Although this is an average effect that should not be generalized to individual studies haphazardly, it is a hopeful conclusion for trials that are unable to incorporate further follow-up measures in their design (e.g., due to the use of a waitlist control condition). Researchers who did not conduct follow-up measurements may now have reason to refer to possible sustained effects, rather than possible sleeper effects.

References

Aar, J. V. van, Asscher, J. J., Zijlstra, B. J. H., Deković, M., & Hoffenaar, P. J. (2015). Changes in parenting and child behavior after the home-start family support program: A 10 year follow-up. Children and Youth Services Review, 53, 166-175. doi:10.1016/j.childyouth.2015.03.029

Barlow, J., Davis, H., McIntosh, E., Jarrett, P., Mockford, C., & Stewart-Brown, S. (2007). Role of home visiting in improving parenting and health in families at risk of abuse and neglect: Results of a multicenter randomized controlled trial and economic evaluation. Archives of Disease in Childhood, 92, 229-233. doi:10.1136/adc.2006.095117

Barlow, J., Smailagic, N., Ferriter, M., Bennet, C., & Jones, H. (2010). Group-based parent-training programmes for improving emotional and behavioral adjustment in children from birth to three years old. Cochrane Database of Systematic Reviews, 3, CD003680. doi:10.1002/14651858.CD003680.pub2

*Bodenmann, G., Cina, A., Ledermann, T., & Sanders, M. R. (2008). The efficacy of the Triple P-Positive Parenting Program in improving parenting and child behavior: A comparison with two other treatment conditions. Behaviour Research & Therapy, 46(4), 411-427. doi:10.1016/j.brat.2008.01.001

*Breitenstein, S. M., Gross, D., Fogg, L., Ridge, A., Garvey, C., Julion, W., & Tucker, S. (2012). The Chicago Parent Program: Comparing 1-year outcomes for African American and Latino parents of young children. Research in Nursing & Health, 35(5), 475-489. doi:10.1002/nur.21489

Burke, J. D., Loeber, R., & Birmaher, B. (2002). Oppositional defiant disorder and conduct disorder: A review of the past 10 years, part II. Journal of the American Academy of Child & Adolescent Psychiatry, 41(11), 1275-1293. doi:10.1097/01.CHI.0000024839.60748.E8

Cheung, M. W. L. (2014). Modeling dependent effect sizes with three-level meta-analyses: A structural equation modeling approach. Psychological Methods, 19, 211-229. doi:10.1037/a0032968

Clarke, A. D. B., & Clarke, A. M. (1981). “Sleeper effects” in development: Fact or artifact? Developmental Review, 1, 344-360. doi:10.1016/0273-2297(81)90030-7

*Cowan, P. A., Cowan, C. P., Pruett, M. K., Pruett, K., & Wong, J. J. (2009). Promoting fathers' engagement with children: Preventive interventions for low-income families. Journal of Marriage and Family, 71(3), 663-679. doi:10.1111/j.1741-3737.2009.00625.x

*Dawe, S., & Harnett, P. (2007). Reducing potential for child abuse among methadone-maintained parents: Results from a randomized controlled trial. Journal of Substance Abuse Treatment, 32(4), 381-390. doi:10.1016/j.jsat.2006.10.003

Deković, M., Asscher, J. J., Hermanns, J., Reitz, E., Prinzie, P., & van den Akker, A. L. (2010). Tracing changes in families who participated in the Home-Start parenting program: Parental sense of competence as mechanism of change. Prevention Science, 11, 263-274. doi:10.1007/s11121-009-0166-5

*Dishion, T. J., Connell, A., Weaver, C., Shaw, D., Gardner, F., & Wilson, M. (2008). The family check-up with high-risk indigent families: Preventing problem behavior by increasing parents’ positive behavior support in early childhood. Child Development, 79(5), 1395-1414. doi:10.1111/j.1467-8624.2008.01195.x

Dwan, K., Altman, D. G., Arnaiz, J. A., Bloom, J., Chan, A., Cronin, E., … Williamson, P. R. (2008). Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE, 3(8), e3081. doi:10.1371/journal.pone.0003081

Eyberg, S., & Pincus, D. (1999). Eyberg Child Behavior Inventory and Sutter-Eyberg Student Behavior Inventory - Revised: Professional manual. Odessa, FL: Psychological Assessment Resources.

*Fabiano, G. A., Pelham, W. E., Cunningham, C., Yu, J., Gangloff, B., Buck, M., … Gera, S. (2012). A waitlist-controlled trial of behavioral parent training for fathers of children with ADHD. Journal of Clinical Child and Adolescent Psychology, 41(3), 337-345. doi:10.1080/15374416.2012.654464

*Forgatch, M. S., & DeGarmo, D. S. (1999). Parenting through change: An effective prevention program for single mothers. Journal of Consulting & Clinical Psychology, 67(5), 711-724. doi:10.1037/0022-006X.67.5.711

Gardner, F., Hutchings, J., Bywater, T., & Whitaker, C. (2010). Who benefits and how does it work? Moderators and mediators of outcome in an effectiveness trial of a parenting intervention. Journal of Clinical Child and Adolescent Psychology, 39(4), 568-580. doi:10.1080/15374416.2010.486315

Goodman, R. (1997). The Strengths and Difficulties Questionnaire: A research note. Journal of Child Psychology and Psychiatry, 38, 581-586. doi:10.1111/j.1469-7610.1997.tb01545.x

Gray, R., & McCormick, M. C. (2005). Early childhood intervention programs in the US: Recent advances and future recommendations. The Journal of Primary Prevention, 26(3), 259-275. doi:10.1007/s10935-005-3600-x

*Gross, D., Fogg, L., & Tucker, S. (1995). The efficacy of parent training for promoting positive parent-toddler relationships. Research in Nursing & Health, 18(6), 489-499. doi:10.1002/nur.4770180605

*Gross, D., Fogg, L., Webster-Stratton, C., Garvey, C., Julion, W., & Grady, J. (2003). Parent training of toddlers in day care in low-income urban communities. Journal of

*Hanisch, C., Hautmann, C., Eichelberger, I., Pluck, J., & Dopfner, M. (2010). Long-term clinical significance of the prevention programme for externalizing problem behaviour (PEP). Verhaltenstherapie, 20(4), 265-273. doi:10.1159/000322044

*Havighurst, S. S., Wilson, K. R., Harley, A. E., Prior, M. R., & Kehoe, C. (2010). Tuning in to Kids: Improving emotion socialization practices in parents of preschool children - findings from a community trial. Journal of Child Psychology and Psychiatry, 51(12), 1342-1350. doi:10.1111/j.1469-7610.2010.02303.x

Hedges, L. V., & Pigott, T. D. (2004). The power of statistical tests for moderators in meta-analysis. Psychological Methods, 9(4), 426-445. doi:10.1037/1082-989X.9.4.426

Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539-1558. doi:10.1002/sim.1186

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.

*Jouriles, E. N., McDonald, R., Rosenfield, D., Stephens, N., Corbitt-Shindler, D., & Miller, P. C. (2009). Reducing conduct problems among children exposed to intimate partner violence: A randomized clinical trial examining effects of Project Support. Journal of Consulting and Clinical Psychology, 77(4), 705-717. doi:10.1037/a0015994

*Jouriles, E. N., McDonald, R., Spiller, L., Norwood, W. D., Swank, P. R., Stephens, N., . . . Buzy, W. M. (2001). Reducing conduct problems among children of battered women. Journal of Consulting and Clinical Psychology, 69(5), 774-785. doi:10.1037/0022-006X.69.5.774

Kagan, J., & Moss, H. A. (1962). Birth to maturity: A study in psychological development. New York: John Wiley & Sons.

Kazdin, A. E., & Weisz, J. R. (1998). Identifying and developing empirically supported child and adolescent treatments. Journal of Consulting and Clinical Psychology, 66(1), 19-36. doi:10.1037/0022-006X.66.1.19

Kendall, P. C. (2006). Child and adolescent therapy: Cognitive-behavioral procedures (3rd ed.). New York: The Guilford Press.

*Kim, E., Cain, K. C., & Webster-Stratton, C. (2008). The preliminary effect of a parenting program for Korean American mothers: A randomized controlled experimental study. International Journal of Nursing Studies, 45(9), 1261-1273. doi:10.1016/j.ijnurstu.2007.10.002

*Kim, E., Cain, K., Boutain, B., Chun, J., Kim, S., & Im, H. (2014). Pilot study of the Korean parent training using a partial group-randomized experimental study. Journal of Child and Adolescent Psychiatric Nursing, 27, 121-131. doi:10.1111/jcap.12071

*Kjøbli, J., & Bjørnebekk, G. (2013). A randomized effectiveness trial of brief parent training: Six-month follow-up. Research on Social Work Practice, 23(6), 603-612. doi:10.1177/1049731513492860

*Kjøbli, J., Hukkelberg, S., & Ogden, T. (2013). A randomized trial of group parent training: Reducing child conduct problems in real-world settings. Behaviour Research & Therapy, 51(3), 113-121. doi:10.1016/j.brat.2012.11.006

*Kleefman, M., Jansen, D. E., & Reijneveld, S. A. (2011). The effectiveness of Stepping Stones Triple P: The design of a randomised controlled trial on a parenting programme regarding children with mild intellectual disability and psychosocial problems versus care as usual. BioMed Central Public Health, 11, 676. doi:10.1186/1471-2458-11-676

*Lavigne, J. V., Lebailly, S. A., Gouze, K. R., Cicchetti, C., Pochyly, J., Arend, R., … Binns, H. J. (2008). Treating oppositional defiant disorder in primary care: A comparison of three models. Journal of Pediatric Psychology, 33(5), 449-461. doi:10.1093/jpepsy/jsm074

Leijten, P., Raaijmakers, M. A. J., Orobio de Castro, B., & Matthys, W. (2013). Does socioeconomic status matter? A meta-analysis on parent training effectiveness for disruptive child behavior. Journal of Clinical Child & Adolescent Psychology, 42(3), 384-392. doi:10.1080/15374416.2013.769169

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: SAGE.

*Lowell, D. I., Carter, A. S., Godoy, L., Paulicin, B., & Briggs-Gowan, M. J. (2011). A randomized controlled trial of Child FIRST: A comprehensive home-based intervention translating research into early childhood practice. Child Development, 82(1), 193-208. doi:10.1111/j.1467-8624.2010.01550.x

Lundahl, B., Risser, H. J., & Lovejoy, M. C. (2006). A meta-analysis of parent training: Moderators and follow-up effects. Clinical Psychology Review, 26, 86-104. doi:10.1016/j.cpr.2005.07.004

*Magen, R. H., & Rose, S. D. (1994). Parents in groups: Problem solving versus behavioral skills training. Research on Social Work Practice, 4(2), 172-191. doi:10.1177/104973159400400204

*Maguin, E. (1994). The path to alcohol problems through conduct problems: A family-based approach to very early intervention with risk. Journal of Research on Adolescence, 4(2), 249-269. doi:10.1207/s15327795jra0402_5

*Malti, T., Ribeaud, D., & Eisner, M. P. (2011). The effectiveness of two universal preventive interventions in reducing children's externalizing behavior: A cluster randomized controlled trial. Journal of Clinical Child and Adolescent Psychology, 40(5), 677-692. doi:10.1080/15374416.2011.597084

*McCabe, K., Yeh, M., Lau, A., & Argote, C. B. (2012). Parent-child interaction therapy for Mexican Americans: Results of a pilot randomized clinical trial at

McCart, M. R., Priester, P. E., Davies, W. H., & Azen, R. (2006). Differential effectiveness of behavioral parent-training and cognitive-behavioral therapy for antisocial youth: A meta-analysis. Journal of Abnormal Child Psychology, 34(4), 527-543. doi:10.1007/s10802-006-9031-1

*Mejia, A., Calam, R., & Sanders, M. R. (2015). A pilot randomized controlled trial of a brief parenting intervention in low-resource settings in Panama. Prevention Science, 16(5), 707-717. doi:10.1007/s11121-015-0551-1

Menting, A. T. A., Orobio de Castro, B., & Matthys, W. (2013). Effectiveness of the Incredible Years parent training to modify disruptive and prosocial child behavior: A meta-analytic review. Clinical Psychology Review, 33, 901-913. doi:10.1016/j.cpr.2013.07.006

Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11, 364-386. doi:10.1177/1094428106291059

*Niccols, A. (2009). Immediate and short-term outcomes of the 'COPEing with Toddler Behaviour' parent group. Journal of Child Psychology and Psychiatry, 50(5), 617-626. doi:10.1111/j.1469-7610.2008.02007.x

Patterson, G. R. (1982). A social learning approach to family intervention, Vol. 3: Coercive family process. Eugene, OR: Castalia.

*Perrin, E. C., Sheldrick, R. C., McMenamy, J. M., Henson, B. S., & Carter, A. S. (2014). Improving parenting skills for families of young children in pediatric settings: A randomized clinical trial. JAMA Pediatrics, 168(1), 16-24. doi:10.1001/jamapediatrics.2013.2919

Pingault, J. B., Rijsdijk, F., Zheng, Y., Plomin, R., & Viding, E. (2015). Developmentally dynamic genome: Evidence of genetic influences on increases and decreases in conduct problems from early childhood to adolescence. Scientific Reports, 5, 10053. doi:10.1038/srep10053

*Reedtz, C., Handegard, B. H., & Morch, W. T. (2011). Promoting positive parenting practices in primary care: Outcomes and mechanisms of change in a randomized controlled risk reduction trial. Scandinavian Journal of Psychology, 52(2), 131-137. doi:10.1111/j.1467-9450.2010.00854.x

Robinson, E. A., Eyberg, S. M., & Ross, A. W. (1980). The standardization of an inventory of child conduct problem behaviors. Journal of Clinical Child Psychology, 9(1), 22-28. doi:10.1080/15374418009532938

*Rushton, A., Monck, E., Leese, M., McCrone, P., & Sharac, J. (2010). Enhancing adoptive parenting: A randomized controlled trial. Clinical Child Psychology & Psychiatry, 15(4), 529-542. doi:10.1177/1359104510365041

Sanders, M. R. (1999). Triple P-Positive Parenting Program: Towards an empirically validated multilevel parenting and family support strategy for the prevention of behavior and emotional problems in children. Clinical Child and Family Psychology Review, 2(2), 71-90. doi:10.1023/A:1021843613840

*Scott, S., Sylva, K., Kallitsoglou, A., & Ford, T. (2014). Which type of parenting programme best improves child behaviour and reading? Follow-up of the Helping Children Achieve trial. London: Nuffield Foundation.

Seitz, V. (1981). Intervention and sleeper effects: A reply to Clarke and Clarke. Developmental Review, 1, 361-373. doi:10.1016/0273-2297(81)90031-9

*Shaw, D. S., Dishion, T. J., Supplee, L., Gardner, F., & Arnds, K. (2006). Randomized trial of a family-centered approach to the prevention of early conduct problems: 2-year effects of the family check-up in early childhood. Journal of Consulting & Clinical Psychology, 74(1), 1-9. doi:10.1037/0022-006X.74.1.1

*Sheeber, L. B., & Johnson, J. H. (1994). Evaluation of a temperament-focused, parent-training program. Journal of Clinical Child Psychology, 23(3), 249-259. doi:10.1207/s15374424jccp2303_3

*Sigmarsdóttir, M., Degarmo, D. S., Forgatch, M. S., & Gumundsdóttir, E. V. (2013). Treatment effectiveness of PMTO for children's behavior problems in Iceland: Assessing parenting practices in a randomized controlled trial. Scandinavian Journal of Psychology, 54(6), 468-476. doi:10.1111/sjop.12078

Sofronoff, K., Jahnel, D., & Sanders, M. (2011). Stepping Stones Triple P seminars for parents of a child with a disability: A randomized controlled trial. Research in Developmental Disabilities, 32, 2253-2262. doi:10.1016/j.ridd.2011.07.046

*Somech, L. Y., & Elizur, Y. (2012). Promoting self-regulation and cooperation in pre-kindergarten children with conduct problems: A randomized controlled trial. Journal of the American Academy of Child & Adolescent Psychiatry, 51(4), 412-422. doi:10.1016/j.jaac.2012.01.019

*Sonuga-Barke, E. J., Daley, D., Thompson, M., Laver-Bradbury, C., & Weeks, A. (2001). Parent-based therapies for preschool attention-deficit/hyperactivity disorder: A randomized, controlled trial with a community sample. Journal of the American Academy of Child and Adolescent Psychiatry, 40(4), 402-408. doi:10.1097/00004583-200104000-00008

(34)

34 Impact of a general practice based group parenting programme: Quantitative and qualitative results from a controlled trial at 12 months. Archives of Disease in

Childhood, 89(6), 519-525. doi:10.1136/adc.2003.028365

*Thompson, M. J., Laver-Bradbury, C., Ayres, M., Le Poidevin, E., Mead, S., Dodds, C., . . . Sonuga-Barke, E. J. (2009). A small-scale randomized controlled trial of the revised New Forest Parenting Programme for preschoolers with attention deficit hyperactivity disorder. European Child and Adolescent Psychiatry, 18(10), 605-616. doi:10.1007/s00787-009-0020-0

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36, 1-48.

Webster-Stratton, C., & Reid, M. J. (2004). Strengthening social and emotional competence in young children: The foundation for early school readiness and success. Infants and Young Children, 17(2), 96-113.

Webster-Stratton, C., Reid, M. J., & Hammond, M. (2004). Treating children with early-onset conduct problems: Intervention outcomes for parent, child, and teacher training. Journal of Clinical Child & Adolescent Psychology, 33(1), 105-124. doi:10.1207/S15374424JCCP3301_11

Weiss, B., Catron, T., & Harris, V. (2000). A 2-year follow-up of the effectiveness of traditional child psychotherapy. Journal of Consulting and Clinical Psychology, 68(6), 1094-1101. doi:10.1037/0022-006X.68.6.1094

Whittingham, K., Sofronoff, K., Sheffield, J., & Sanders, M. R. (2009). Stepping Stones Triple P: An RCT of a parenting program with parents of a child diagnosed with an autism spectrum disorder. Journal of Abnormal Child Psychology, 37, 469-480. doi:10.1007/s10802-008-9285-x

Wibbelink, C. J. M., & Assink, M. (2015). Manual for conducting a three-level meta-analysis in R. Manuscript in preparation.

*Williamson, A. A., Knox, L., Guerra, N. G., & Williams, K. R. (2014). A pilot randomized trial of community-based parent training for immigrant Latina mothers. American Journal of Community Psychology, 53(1-2), 47-59. doi:10.1007/s10464-013-9612-4

Note. References marked with an asterisk indicate studies included in the analyses.

Appendix 1.

Search terms for PsycINFO

1): parenting/ OR parents/ OR parental attitudes/ OR parent child communication/ OR parental expectations/ OR parental involvement/ OR parenting skills/ OR exp parenting style/ OR parent child relations/ OR caregivers/ OR childrearing practices/ OR childrearing attitudes/ OR father child communication/ OR father child relations/ OR mother child communication/ OR mother child relations/ OR (parenting OR parental OR parent behavio* OR parent child communication* OR parent child relation* OR parent child interaction* OR caregiv* OR care giv* OR caretak* OR care tak* OR child raising OR childraising OR child rearing OR childrearing OR father child relation* OR father child communication* OR father child interaction* OR fathering OR mother child relation* OR mother child communication* OR mother child interaction* OR mothering OR upbringing).ti,ab,id.

2): behavior problems/ OR acting out/ OR aggressive behavior/ OR antisocial behavior/ OR attention deficit disorder with hyperactivity/ OR conduct disorder/ OR explosive disorder/ OR externalization/ OR oppositional defiant disorder/ OR rebelliousness/ OR tantrums/ OR (((defiant OR disruptive OR dysfunctional* OR explosiv* OR maladaptiv* OR problem*) ADJ3 (behavio* OR disorder*)) OR acting out OR aggress* OR antisocial OR ADHD* OR behavi* difficult* OR conduct disorder* OR externali* OR hyperactiv* OR misbehavio* OR misconduct OR tantrum*).ti,ab,id.

3): (infancy 2 23 mo OR preschool age 2 5 yrs OR school age 6 12 yrs).ag. OR (infan* OR baby* OR babies OR toddler* OR preschool* OR child* OR kid OR kids OR prepubescen* OR prepuber* OR teen* OR young* OR youth* OR girl* OR boy*).ti,ab,id.

4): followup studies/ OR intervention/ OR early intervention/ OR family intervention/ OR group intervention/ OR parent training/ OR program evaluation/ OR followup study.md. OR (follow-up* OR followup OR sleeper effect* OR triple P OR incredible years* OR parent child interaction therapy OR PCIT OR parent* management training OR parent* effect* training).ti,ab,id,tm.

Table 1.

Characteristics of Included Studies.

| Source | Program | Sample size (IC/CC) | Mean age (age range) | Boys (%) | Initial problem behavior | Posttest to follow-up (months) | Type of control | Drop-out (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bodenmann (2008) | Triple P, level 4 | 50/50 | 6.55 (2.0-12.0) | 52.0 | Non-clinical | 12.0 | WL | 10.7 |
| Breitenstein (2012) | Chicago Parent Program | 267/283 | 2.81 (2.0-4.0) | 54.4 | - | 12.0 | WL | 13.3 |
| Cowan (2009) | Couples program | 139/130 | 2.25 (0.0-7.0) | - | - | 9.0 | MC | 28.3 |
| Dawe (2007) | Parents Under Pressure | 22/23 | 3.80 (2.0-8.0) | 60.9 | - | 6.0 | MC | 11.1 |
| Dishion (2008) | Family Check-Up | 367/364 | 2.35 (2.0-2.1) | 50.5 | Non-clinical | 12.0 | WL | 15.3 |
| Forgatch (1991) | Parenting Through Change | 153/85 | 7.80 (6.1-10.4) | 100 | - | 6.0 | WL | 34.0 |
| Gross (1995) | Parent training program | 11/6 | 2.00 (2.0-3.0) | 83.0 | Clinical | 3.0 | WL | 32.0 |
| Gross (2003) | Incredible Years | 75/59 | - (2.0-3.0) | - | Mixed | 12.0 | WL | 21.2 |
| Hahlweg (2010) | Triple P, level 3 | 158/61 | 4.50 (2.6-6.0) | 51.0 | Mixed | 24.0 | WL | 1.4 |
| Hanish (2010) | PEP | 60/65 | 4.16 (3.0-7.0) | - | Clinical | 30.0 | WL | 42.4 |
| Havighurst (2010) | Tuning in to Kids | 106/110 | 4.69 (3.8-5.7) | 52.3 | - | 6.0 | WL | 12.0 |
| Jouriles (2001) | Multicomponent family intervention | 18/18 | 5.67 (4.0-9.0) | 72.2 | Clinical | 8.0 | MC | - |
| Jouriles (2009) | Project Support | 32/34 | - (4.0-9.0) | - | - | - | - | - |
| Kim (2008) | Incredible Years | 12/21 | 6.00 (3.0-8.0) | 62.7 | - | 12.0 | WL | 27.3 |
| Kim (2014) | Korean parent training program | 31/27 | 5.51 (3.0-8.0) | - | - | 3.0 | WL | 37.9 |
| Kirby (2014) | Grandparent Triple P | 28/26 | 4.88 (2.0-9.0) | 60.7 | Mixed | 6.0 | CAU | 9.3 |
| Kjøbli, Bjørnebekk (2013) | Brief Parent Training | 108/108 | 7.28 (3.0-12.0) | 68.1 | Mixed | 6.0 | CAU | 19.9 |
| Kjøbli, Hukkelberg (2013) | PMTO | 72/65 | 8.56 (2.0-12.0) | 63.5 | Mixed | 6.0 | WL | 10.2 |
| Kleefman (2011) | Triple P, Stepping Stones | 111/98 | 9.79 (5.0-12.0) | 57.9 | Clinical | 6.0 | CAU | 30.1 |
| Lavigne (2008) | Incredible Years | 86/31 | 4.60 (3.0-6.1) | 53.0 | Clinical | 12.0 | MC | 17.1 |
| Lowell (2011) | Child First | 78/79 | 1.54 (0.5-3.0) | 44.0 | Mixed | 6.0 | CAU | 25.5 |
| - | Behavioral Skills training | 19/19 | 7.02 (5.0-11.0) | 70.0 | - | 3.0 | WL | - |
| Maguin (1994) | Program based on OSLC protocol | 29/23 | 4.40 (3.0-6.0) | 100 | Mixed | 6.0 | WL | 57.0 |
| Malti (2011) | Triple P, level 4 | 339/356 | 7.45 (-) | 52.0 | - | 36.0 | WL | 18.0 |
| McCabe (2012) | PCIT, GANA | 21/18 | 4.40 (3.0-7.0) | 71.0 | - | 15.9 | CAU | 15.4 |
| Mejia (2015) | Triple P, level 3 | 54/54 | 8.49 (3.0-12.0) | 70.0 | Mixed | 6.0 | WL | 29.6 |
| Niccols (2009) | CWTB | 49/30 | 2.00 (1.0-3.0) | 59.5 | Mixed | 1.0 | WL | 10.1 |
| Perrin (2014) | Incredible Years, adapted | 89/61 | 2.80 (1.8-3.5) | 62.0 | Clinical | 12.0 | WL | 18.7 |
| Reedtz (2011) | Incredible Years, adapted | 89/97 | 3.89 (2.0-8.0) | - | - | - | - | - |
| Rushton (2010) | Cognitive behavioral and educational approach | 19/18 | 6.58 (1.5-10.0) | 46.0 | Clinical | 6.0 | CAU | 2.7 |
| Scott (2014) | Incredible Years | 55/54 | 6.10 (5.0-7.0) | 56.1 | Mixed | 12.0 | CAU | 30.0 |
| Shaw (2006) | Family Check-Up | 60/60 | 2.01 (1.4-2.3) | 100 | Mixed | 12.0 | WL | 23.3 |
| Sheeber (1994) | Temperament-based parent training | 20/20 | 4.00 (3.0-5.0) | 60.0 | - | 2.0 | WL | 12.2 |
| Sigmarsdóttir (2013) | PMTO | 51/51 | 8.02 (5.0-12.0) | 72.6 | Clinical | 9.0 | CAU | 11.8 |
| Somech (2012) | Hitkashrut | 140/69 | 4.05 (2.7-5.3) | 78.0 | Clinical | 12.0 | MC | 40.2 |
| Sonuga-Barke (2001) | Parent Training | 30/20 | - (3.0-4.0) | 61.5 | Mixed | 3.0 | WL | - |
| Sonuga-Barke (2001) | Parent Counseling and Support | 28/20 | - (3.0-4.0) | 61.5 | Mixed | 3.0 | WL | 9.0 |
