I came across this paper when looking for examples of interventions that adopted a community-engagement model. The study has many positive features, but I also found some issues that would benefit from clarification. The paper reports findings from a project that aimed to evaluate a video-coaching intervention (FIND) for families involved in Early Head Start in the Denver region. The project was preregistered on the Open Science Framework (https://osf.io/ecswy).
It is stated: “Given that FIND specifically focuses on increasing the quantity and quality of back and forth interactions, we hypothesized that children in the FIND intervention group would show significant gains in expressive and receptive language compared to their peers in the control group; these hypotheses were pre-registered prior to data analysis via the Open Science Framework”.
This is a very ambitious project with a difficult-to-engage population of chronically stressed families, and it is not surprising that not everything went to plan. Despite an admirable attempt to adopt a community-engagement model, and a 7-year time span, there were problems with recruitment that left target numbers unmet, and 34% of the participants who enrolled in the study failed to complete it. The authors also acknowledge that the ‘per protocol’ (PP) analysis, as opposed to an intention-to-treat analysis, was not ideal: “Although PP analysis provides an accurate estimate of the true intervention efficacy, it was likely to induce an exaggerated treatment effect as it did not take real-life situations, such as participant drop-out and deviation from the protocol, into consideration.” Put simply, if drop-outs are nonrandom, we can no longer assume that the groups with outcome data are comparable.
In addition, there are differences between the preregistration and the reported analyses. The preregistration describes a power analysis based on a comparison between three groups of 60 (180 families), yet it also states that families were assigned to one of two groups, which is what the current paper describes. The power to detect a medium effect (Cohen’s d of .5) is not reported for the actual sample of two groups, of 54 and 37 respectively; a rough benchmark is sketched below.
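For illustration only, a minimal calculation under simplifying assumptions (a two-sided two-sample t-test at α = .05, which is not the MSEM approach the paper actually uses) suggests the achieved sample sizes fall well short of the conventional 80% power for a medium effect:

```python
# Rough post hoc power benchmark for the achieved sample sizes (54 and 37),
# assuming a two-sided two-sample t-test at alpha = .05. The paper's actual
# models are Bayesian MSEMs, so this is only an approximate point of reference.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(
    effect_size=0.5,   # medium effect (Cohen's d), per the preregistration
    nobs1=54,          # larger group
    ratio=37 / 54,     # gives nobs2 = 37 for the smaller group
    alpha=0.05,
)
print(f"Power to detect d = 0.5: {power:.2f}")  # ~0.64, below the usual 0.80 target
```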
It is important to know why some preregistered outcome variables were not reported, as this affects the interpretation of the variables that were reported. The focus of this paper is clearly on Child Outcomes (Hypothesis 4), but even if we restrict consideration to Hypothesis 4a, only one of the three instruments, the PLS-5, is reported; the Early Social-Communication Scales (ESCS) and the Conversational Turns Scale are not mentioned. We need to know the number of outcome measures that were analysed, and how correction for multiple comparisons was handled (an illustration follows).
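For readers less familiar with such corrections, here is a minimal sketch of one standard option (Holm’s method) across the three preregistered Hypothesis 4a instruments; the p-values are hypothetical placeholders, not values from the paper:

```python
# Illustrative only: hypothetical p-values standing in for the three
# preregistered Hypothesis 4a instruments. Holm's step-down method controls
# the familywise error rate at alpha = .05 across the three tests.
from statsmodels.stats.multitest import multipletests

p_values = [0.03, 0.20, 0.45]  # hypothetical: PLS-5, ESCS, Conversational Turns
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(p_adjusted)  # Holm-adjusted p-values: [0.09, 0.40, 0.45]
print(reject)      # [False, False, False]: none significant after correction
```

The point is simply that a raw p of .03 on one of three outcomes would not survive correction, which is why the full set of analysed measures matters.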
The preregistration also describes a number of measures of parental communication that are posited to act as mediators of intervention effects. The Discussion flags issues around the intervention’s mechanism of action that could be explored with these measures, but does not state whether such analyses are planned with the existing dataset.
The preregistration also states: “Initial analyses will include repeated measures ANOVAs, with the time factor reflecting change from pre-to-post assessment, and Group x Time interactions reflecting differences in change over time between groups. Simple effects will be examined for significant Group x Time interactions to examine the direction of change within each group.” These analyses are not presented in the current paper, which instead reports multilevel structural equation models (MSEM) estimated with a Bayesian approach using Markov Chain Monte Carlo methods. The authors note that this approach is robust to distribution biases and “granted higher statistical power to detect intervention effects”, but no power analysis is presented. A sketch of the preregistered analysis is given below for comparison.
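For concreteness, a minimal sketch of the preregistered Group x Time analysis, framed here as a linear mixed model with a random intercept per family (equivalent in spirit to a repeated measures ANOVA with two time points); all file and column names are hypothetical, and this is not the authors’ code:

```python
# Sketch of the preregistered Group x Time analysis as a linear mixed model.
# Assumes a hypothetical long-format file with one row per family per
# assessment: family_id, group ("FIND"/"control"), time ("pre"/"post"), pls5_score.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes_long.csv")  # hypothetical filename

model = smf.mixedlm(
    "pls5_score ~ C(group) * C(time)",  # Group x Time interaction is the key term
    data=df,
    groups="family_id",                 # random intercept for each family
)
result = model.fit()
print(result.summary())  # a significant interaction would warrant simple-effects tests
```

Reporting this preregistered analysis alongside the MSEM would let readers judge how far the conclusions depend on the change in analytic strategy.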
The preregistration also stated: “all relevant pre-processing scripting and procedures used will be documented and shared via OSF”. It is not clear whether this has been done. Sharing scripts and data is particularly valuable when complex analyses are used that may not be familiar to many readers.
While it is clear that a huge amount of work has gone into this study, which had the laudable intention of evaluating an intervention to improve children’s lives, the authors’ summary conclusions do seem overstated. It is claimed that “Overall, the current study provides the first evidence that the FIND intervention has a significant positive impact on child outcomes”, and they go on to say: “This evidence that FIND promotes positive language development among children from lower-SES backgrounds has significant implications for the field”. Further, they state that “The current study’s findings provide important evidence that reciprocity-based parenting interventions can buffer the negative effects of low-SES environments on child language outcomes.” Before accepting those conclusions, it would be important to see the data for all the outcome measures originally proposed for testing Hypothesis 4a, as well as analyses testing whether intervention effects are mediated by parental behaviors.