Fast Track intervention program

Academic Progress: At the end of grade 3, there were no significant differences between the groups on reading or on grades for language arts or mathematics.

Child Social Competence: At the end of grade 3, there were no significant effects of intervention on the sociometric measures of peer social preference and prosocial behavior.

Protective Factors: Presence and involvement of caring, supportive adults. Good relationship with peers. Opportunities for prosocial school involvement. Effective parenting. Social competencies and problem-solving skills. Perception of social support from adults and peers. Student bonding (attachment to teachers, belief, commitment). Opportunities for prosocial family involvement.

Risk Factors: Lack of guilt and empathy.

For training of interventionists who would interact with parents, contact Dr. McMahon above. For training of interventionists who would interact with teachers, contact Dr. Greenberg above. For training of interventionists who would interact with children, contact Dr. Bierman above.

Conduct Problems Prevention Research Group. Initial impact of the Fast Track prevention trial for conduct problems: I. The high-risk sample. Journal of Consulting and Clinical Psychology, 67(5).

Type of Study: Randomized controlled trial (randomized at the school-site level)
Number of Participants:
Summary (to include basic study design, measures, results, and notable limitations): This study explored the effectiveness of the Fast Track intervention in altering key child and family risk factors for antisocial development.

Assignment was made at the school level, with children assigned to either the intervention group or the control group. Results indicated that the Fast Track children, relative to the children in the control group, progressed significantly in their acquisition of almost all of the skills deemed to be critical protective factors by the model.

Parents in the intervention group, relative to parents in the control group, demonstrated more warmth and positive involvement and less harsh discipline. Limitations include possible selection bias, reliance on self-reported measures, and limited generalizability due to the ethnic composition of the sample.

Bierman, K. Evaluation of the first 3 years of the Fast Track prevention trial with children at high risk for adolescent conduct problems. Journal of Abnormal Child Psychology, 30(1).

Summary (to include basic study design, measures, results, and notable limitations): This study utilizes information from the Conduct Problems Prevention Research Group. The purpose of the current study was to describe the effects of the Fast Track program at the end of third grade for children who participated in the intervention, and to examine whether continued intervention would contribute to an impact on antisocial behavior at home and at school that persists into the later elementary years.

Results indicated that children assigned to receive the intervention were significantly less likely to be exhibiting evidence of serious conduct problems than were children in the control group. Additionally, by the end of third grade, teachers were reporting greater rates of improvement across the year for intervention children as compared to control children.

Parents in the intervention group also reported fewer problem behaviors than parents in the control group. Limitations include possible selection bias, reliance on self-reported measures, and limited generalizability due to the ethnic composition of the sample.

Using the Fast Track randomized prevention trial to test the early-starter model of the development of serious conduct problems. Development and Psychopathology, 14(4). The Fast Track prevention trial was used to test hypotheses from the early-starter model of the development of chronic conduct problems.

Results indicate that, as compared to the control group, the Fast Track intervention had positive effects on outcomes at home, at school, and in the peer group that were evident after 4 years. Limitations include reliance on self-reported measures and missing data. Length of post-intervention follow-up: None; the intervention continued through the end of grade 4 and beyond.

Pinderhughes, E. The effects of the Fast Track program on serious problem outcomes at the end of elementary school. Journal of Clinical Child and Adolescent Psychology, 33(4).

The best foundation for a positive return on investment involved health services, and as we discuss above, these savings offset only a small portion of the intervention's costs. However, one option would be to calculate net benefits using all outcomes, even those for which the intervention effect was not statistically significant.

While some would argue that statistical significance is irrelevant for economic analysis [43], insignificant effects typically reflect both small effects and large confidence intervals.

In the case of the former, the impact on the economic bottom line is typically small unless the behavior involved is very costly to society.

Interpreting such effects is a risky business. As often as not, insignificant effects may be of unanticipated direction; it is not a given, therefore, that including such effects will improve the returns to the intervention.

Whether a tally of the many null effects would improve even a rough estimate of net benefits is unclear. Insignificant outcome-specific effects often reflect large confidence intervals, and the imprecision involved has important implications for the economic analysis. An essential element of a net benefits calculation is a confidence interval or a cost-effectiveness acceptability curve.

One can see, therefore, that adding insignificant findings to the calculation of net benefits will inflate the resulting confidence interval. Could future analyses reveal that the intervention is cost-effective? Such a development could come from effects appearing where they do not exist now.
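To make the imprecision point concrete, a standard textbook formulation (a sketch, not reproduced from the article; the symbols are ours) writes incremental net monetary benefit as a monetized sum of outcome-specific effects, so every imprecise component added to the sum widens the resulting interval:

```latex
% Sketch of a net-benefit calculation; all symbols are illustrative, not the article's.
% \Delta E_k = intervention effect on outcome k, \lambda_k = its monetary value,
% \Delta C = incremental cost of the intervention.
\[
  \mathrm{NMB} \;=\; \sum_{k=1}^{K} \lambda_k\, \Delta E_k \;-\; \Delta C
\]
% Treating the outcome-specific estimates as approximately independent,
\[
  \operatorname{Var}\!\bigl(\widehat{\mathrm{NMB}}\bigr)
  \;\approx\; \sum_{k=1}^{K} \lambda_k^{2}\,
  \operatorname{Var}\!\bigl(\widehat{\Delta E}_k\bigr)
  \;+\; \operatorname{Var}\!\bigl(\widehat{\Delta C}\bigr),
  \qquad
  \text{95\% CI} \;=\; \widehat{\mathrm{NMB}} \pm 1.96
  \sqrt{\operatorname{Var}\!\bigl(\widehat{\mathrm{NMB}}\bigr)}.
\]
% An insignificant outcome adds little to the point estimate but contributes its
% full variance term, so the interval around net benefits widens.
```

A cost-effectiveness acceptability curve conveys the same information as the probability that net monetary benefit exceeds zero across a range of values placed on a unit of effect.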

The marginal effect on the likelihood of jail would provide more promise for cost-effectiveness, given the potential for future economic benefits related to such an outcome. However, given what the intervention costs, an effect on time in jail of the size we find here offers only limited promise. Any such savings would also have to be discounted back to the first year of the intervention (Aos et al.). Obviously, such an effect would have to persist for decades to justify such a costly intervention.
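To give a rough sense of that arithmetic, the sketch below (with an entirely hypothetical dollar amount, horizon, and 3% rate, none of them taken from the study) discounts a stream of future jail-related savings back to the first year of the intervention:

```python
# Sketch: present value of a hypothetical stream of annual savings from reduced
# time in jail, discounted back to the first year of the intervention.
# The $2,000 figure, the 3% rate, and the years are illustrative assumptions only.

def present_value(annual_saving: float, start_year: int, end_year: int, rate: float = 0.03) -> float:
    """Discount a constant annual saving received in years start_year..end_year to year 0."""
    return sum(annual_saving / (1.0 + rate) ** t for t in range(start_year, end_year + 1))

if __name__ == "__main__":
    pv = present_value(2_000, start_year=15, end_year=30)
    print(f"Discounted value of the savings stream: ${pv:,.0f}")
    # Even savings that persist for decades shrink substantially once discounted,
    # which is why an effect of this size struggles to offset a costly intervention.
```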

Limitations

First, the outcomes examined here reflect effects observed during measurement windows that are not complete for every outcome. Data are lacking on some potential outcomes, such as the use of mental health services before year 7, when the SACA was added to the study. In that light, the estimates provided here are conservative. It is worth noting that the coverage of key outcomes is best during adolescence, when many of the outcomes are most likely and most costly.

Other limitations involve broader issues reflecting the overall study design and its execution. Even though the sample size is large, randomization involved nine pairs of schools. As a result, the findings are potentially subject to unobserved differences between groups. With only nine units, the power of randomization to balance unobserved factors is low.
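To illustrate why so few randomized units provide limited balance, the simulation below (entirely hypothetical values, not study data) repeatedly randomizes nine matched pairs of schools and tracks how far an unobserved school-level factor can drift between arms:

```python
# Sketch: with only nine pairs of schools, chance imbalance on an unobserved
# school-level factor can be substantial. All values are simulated, not study data.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 9

# Hypothetical unobserved school-level factor (e.g., neighborhood disadvantage):
# one value per school, loosely matched within pairs.
pair_means = rng.normal(0.0, 1.0, size=n_pairs)
schools = np.stack([pair_means + rng.normal(0.0, 0.5, size=n_pairs),
                    pair_means + rng.normal(0.0, 0.5, size=n_pairs)], axis=1)

diffs = []
for _ in range(5_000):
    flips = rng.integers(0, 2, size=n_pairs)          # coin flip within each pair
    treated = schools[np.arange(n_pairs), flips]
    control = schools[np.arange(n_pairs), 1 - flips]
    diffs.append(treated.mean() - control.mean())

diffs = np.array(diffs)
print(f"SD of the between-arm difference in the unobserved factor: {diffs.std():.2f}")
print(f"Randomizations with an absolute difference above 0.25: {(np.abs(diffs) > 0.25).mean():.0%}")
```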

However, the ability of such unobservables to bias findings seems rather limited. After all, these factors would have to involve school-level differences, the effect of which persisted for years after the children left those schools. Such lasting differences seem somewhat unlikely, given that original schools were drawn from the same communities.

A third limitation also involves study design: while our analyses incorporate site-level differences, the study includes only four sites. Four sites are enough to identify between-community differences but insufficient to really unpack those differences into contextual factors, such as juvenile justice policies or district-level factors.

Future research might consider more sites. While daunting in terms of scope and expense, such research has been conducted in other areas of human services, such as job training or welfare policy.

That research indicates substantial variability in program impact across communities differing in local labor market conditions, economic factors, and so on. However, such differences also make it difficult to do anything other than speculate how a modified intervention might perform in a new community where these conditions differed.

Broader Implications for Prevention Research

When any intervention fails to produce anticipated effects, one must return to the original program model. In particular, developmentalists and prevention scientists need to consider whether and how the underlying theories involving social cognition and other psychological processes really influence individual behavior.

It remains an open question whether these relationships are truly causal or whether social cognition and aggression both reflect other, unmeasured influences, influences apparently not affected by an intervention like Fast Track. Researchers need to look seriously at the underlying scientific foundations of a generation of interventions. Fast Track relied on the best developmental theory had to offer, and it was largely ineffectual.

The lack of findings raises other questions about assumptions commonly shared in prevention research. As noted, perhaps better targeting might improve the cost-effectiveness of the intervention.

One barrier to the cost-effectiveness of intervention in any area of health or mental health is the fact that prevention often involves devoting resources to subjects who may not develop the disorder of interest. Early intervention may improve effectiveness, but earlier is not necessarily more cost-effective. Relatedly, schools may not be the best setting for identifying children needing intensive intervention.

One might look to other social systems, such as child welfare, to identify those children most needing intensive and costly intervention. The evaluation itself raises other important questions. As became apparent through reanalysis of who is in the study, the recruited sample was more diverse than originally reported in multiple publications.

The argument that study children were too severely challenged does not explain the small and inconsistent effects of the intervention.

A related issue involves the families who would not participate — who are these children and families who were more difficult to reach? Perhaps the intervention would have proven cost-effective for these families. Tracking and recruiting these families at additional expense might have proven cost-effective, even if it meant reducing the resources devoted to the intervention itself.

At this point, we do not know. Very little information is available on those children and families. In many ways, the intervention reflects the best of prevention research but also its weaknesses. For example, many developmental studies are based on samples that are not representative. That the evaluation was wrong about who was in the study hardly distinguishes it within developmental psychology. Another weakness of developmental psychology that shaped the Fast Track project is the lack of data sharing.

For a decade, the investigators did not share data on the intervention. Program developers naturally face a conflict of interest in assessing their own interventions.

And with data so complex, additional analyses are likely to shed new light on whether and how the intervention worked. A handful of researchers can analyze multiple waves of data from multiple sites with multiple informants covering multiple domains only so fast.

Data sharing would have allowed for a fuller assessment of the intervention in a timely manner. In addition, the history of the project reflects the fixation of developmentalists on their theories, but at times theory is the proverbial tail that wags the dog.

Publications from the project on issues of theory outnumber those on the intervention by a ratio of at least five to one. In an intervention project, the bottom line on effectiveness has to be the real bottom line in terms of focus of the research enterprise.

That does not appear to have been the case with Fast Track. This imbalance of resources explains much of the delay in reporting evaluation outcomes. Other weaknesses in developmental psychology and prevention research also are apparent in the study. Many analyses in the field follow a meandering analysis plan—researchers run analyses, change the model, and then run more analyses, increasing the likelihood of chance findings. For example, many papers in psychology rely on modification indices in structural equations software in spite of numerous cautions about their poor statistical properties.

In contrast, the standards of clinical trials specify a predetermined analysis plan, and those standards should guide the evaluation of preventive interventions. In many instances, researchers undertake these nuanced analyses to gain insights into developmental theory. But the fact of the matter is that developmental theory is so non-specific that it can be used to explain any finding post-hoc. The reality is that analyses can proceed until some chance finding proves sufficiently interesting, and theory is then applied.
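The inflation of chance findings is easy to quantify. The simulation below (purely illustrative and unrelated to the Fast Track data) applies a "meandering" plan, testing many outcomes generated under the null and keeping whichever result looks best:

```python
# Sketch: selecting the best-looking result from many tests run on pure noise
# yields "significant" findings far more often than the nominal 5% rate.
# Purely simulated data; nothing here comes from the Fast Track study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_outcomes, n_trials = 200, 12, 2_000
hits = 0

for _ in range(n_trials):
    treat = rng.integers(0, 2, size=n)            # randomized "intervention" flag
    outcomes = rng.normal(size=(n, n_outcomes))   # outcomes unrelated to treatment
    # Meandering plan: test every outcome and keep whichever p-value is smallest.
    pvals = [stats.ttest_ind(y[treat == 1], y[treat == 0]).pvalue for y in outcomes.T]
    if min(pvals) < 0.05:
        hits += 1

print(f"Chance of at least one 'significant' effect under the null: {hits / n_trials:.0%}")
# Roughly 1 - 0.95**12, i.e., close to one chance in two at a nominal 5% level.
```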

Furthermore, many of the theory-informed analyses involve complicated statistical issues, and developmental psychology has often gotten them wrong. For example, many developmentalists are interested in whether some psychological mechanism mediates the effect of their intervention. The bottom line is that even in a randomized trial, including mediators or moderators that are not randomized raises difficult statistical issues, whether the addition of those variables is informed by developmental theory or not.
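A small simulation (with made-up parameters, not an analysis of the study) shows the core difficulty: when a mediator is not randomized and shares an unmeasured cause with the outcome, adding it to the regression biases the estimated direct effect even though treatment itself was randomized:

```python
# Sketch: bias from conditioning on a non-randomized mediator M that shares an
# unmeasured cause U with the outcome Y. Treatment T is randomized; the true
# direct effect of T on Y is 0.5. All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
T = rng.integers(0, 2, size=n).astype(float)    # randomized treatment
U = rng.normal(size=n)                           # unmeasured confounder of M and Y
M = 0.6 * T + 0.8 * U + rng.normal(size=n)       # mediator, not randomized
Y = 0.5 * T + 0.7 * M + 0.9 * U + rng.normal(size=n)

# "Direct effect" from the naive regression Y ~ T + M (OLS via least squares):
X = np.column_stack([np.ones(n), T, M])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
print(f"Estimated direct effect of T controlling for M: {beta[1]:.2f} (truth: 0.50)")
# The estimate is pulled away from 0.50 because conditioning on M partially
# conditions on U, opening a non-causal path between T and Y.
```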

Analyses like the ones presented here are broad in nature, but they could have been planned prior to the study. Such analyses should be reported annually and would provide a background for assessing the more nuanced and in-depth assessments that emerge more slowly.

Analyses like these do not need to be the last word in an evaluation, but they do need to be the first and timely word.

Acknowledgments

The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. Appreciation is expressed to the school districts, families, and youth who participated in this research, and to the many staff members who contributed to the intervention design and implementation and to data collection and scoring.

The authors would like to thank Yu Bai for his diligent research assistance in preparing these analyses. Damon Jones played a critical role in all stages of the analyses.

Appendix A: Agency Record Review

For the economic analyses, an annual review of medical and other records provided detailed information on the services received across various service sectors. These reviews were conducted in each of 6 years; these years cover project years 9 through 14 for cohort 1, years 8 through 13 for cohort 2, and years 7 through 12 for cohort 3. The process of obtaining agency records can be summarized in four steps.

Step 1: Respondents Identify Agencies Providing Services

When parents or youth indicated receipt of services, they were asked for the name and address of the provider.

Starting in the summer interview (year 9 for cohort 1 and year 7 for cohort 3), the respondent was asked to sign an authorization form to allow program staff to obtain agency records from any agencies they identified. In addition to family authorization, successful agency identification relied on accurate information (agency name and location) provided by the families.

In instances where the agency could not be located because parents or youth had misidentified the provider, project staff would contact families asking for clarification.

Step 2: Obtaining Cooperation of the Agencies Involved

If authorized, staff located the agency and invited it to participate in the study.

Agencies were informed that the project would attempt to collect billing, medical, or other records that might provide details on service type, service costs (amounts and payment sources), and number of days treated. As much as possible, the project addressed agency concerns that might lead to refusal. For example, some agencies wanted the project to use agency-created authorization forms, and in those cases, we sought new signatures from the families.

In some instances, agencies were still unable to participate because the records involved had been archived or were otherwise inaccessible.

Step 3: Recording Services Information in a Database

Fast Track staff were trained to record agency information onto record review forms developed for the project.

As one would expect, agency records varied widely in the amount of detail available: some records included very specific information broken down by individual service, while others did not. Record reviewers were trained to record agency data as it corresponded to the 30 different service types listed in Table A1. Another issue was timing. Agency data needed to be processed to correspond to the Fast Track project year, since most other outcomes represent youth and family characteristics on an annual basis. A cut-off date was selected to divide agency data to reflect this timing.

Services that were ongoing, especially those that involved inpatient status, were divided into separate years based on that cut-off. Particulars of service delivery also varied by agency. While we recorded service type information as it corresponded to the services listed in Table A1, other characteristics of the service delivery (such as costs) could not always be distinguished at the service level, especially if agency records presented them for the overall admission rather than broken down by individual service.
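As a concrete illustration of that timing rule, the sketch below splits an ongoing stay at an assumed June 30 cut-off; the project's actual cut-off date is not stated here, so both the date and the example stay are hypothetical.

```python
# Sketch: divide an ongoing service episode (e.g., an inpatient stay) into
# separate project years at an assumed June 30 cut-off. The cut-off date and
# the example stay are hypothetical, not the project's actual values.
from datetime import date, timedelta

def split_by_project_year(start: date, end: date, cutoff_month: int = 6, cutoff_day: int = 30):
    """Return {project_year_label: days_of_service} for a stay spanning cut-offs."""
    days_by_year = {}
    day = start
    while day <= end:
        # A project year runs from the day after one cut-off through the next cut-off.
        year_label = day.year if day <= date(day.year, cutoff_month, cutoff_day) else day.year + 1
        days_by_year[year_label] = days_by_year.get(year_label, 0) + 1
        day += timedelta(days=1)
    return days_by_year

# Example: a stay from May 15 to August 10 straddles the cut-off and is split in two.
print(split_by_project_year(date(2003, 5, 15), date(2003, 8, 10)))
# -> {2003: 47, 2004: 41}
```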

Step 4: Data Processing for Economic Evaluation

For the economic evaluation, we wanted to estimate (i) the total dollars spent on services across years and (ii) the number of services of certain types received. This involved generating totals on these variables across all services, agencies, and years for each subject. In order to derive the counts of certain service types, we created categorical indicators for each service type coded from the agency records (indicated in the footnote to Table A1) based on the nature of the service: mental health, general health, medication, drug-alcohol treatment, juvenile justice service, and pregnancy-related. In some instances, service type was dictated by the agency category.
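A minimal sketch of that aggregation step, using a long-format table with hypothetical column names rather than the project's actual variables:

```python
# Sketch: per-subject totals across all agencies, services, and years.
# Column names and the service-type grouping are illustrative assumptions;
# the project's actual coding followed the 30 service types in Table A1.
import pandas as pd

records = pd.DataFrame({
    "subject_id":   [101, 101, 101, 102],
    "project_year": [9, 9, 10, 9],
    "service_type": ["outpatient_mh", "medication", "juvenile_justice", "general_health"],
    "cost":         [250.0, 40.0, 1200.0, 90.0],
})

# (i) total dollars spent on services across years, per subject
total_cost = records.groupby("subject_id")["cost"].sum()

# (ii) counts of services within broad categories, per subject
category = {"outpatient_mh": "mental_health", "medication": "medication",
            "juvenile_justice": "juvenile_justice", "general_health": "general_health"}
records["category"] = records["service_type"].map(category)
type_counts = records.groupby(["subject_id", "category"]).size().unstack(fill_value=0)

print(total_cost)
print(type_counts)
```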

Missing Data

Before creating summary variables from the full data, we needed to address the missing data characteristics of the medical records. As noted, agency data may be missing for any of several reasons. First, the family could refuse to authorize record review.

Second, an agency might be unwilling or unable to provide the needed information. Even if an agency was accessed and cooperative, subject records may still contain missing fields because of incomplete records. Given the unique nature of the agency data, we carried out data imputation for medical records separately from the other outcomes.

Moreover, it was important to impute separate data sets by service sector in order to most effectively model the missing data process using the most relevant information for the services outcomes within similar agencies and for similar needs. The information from the families was used to help set up imputation models for the missing agency medical records since family information on an agency visit would be non-missing in cases where we were not able to complete the agency record review.

When family information was provided but no successful record review occurred, imputation models relied on the data provided by the families (the left-hand columns of the example table, which is not reproduced here). Services were categorized into the general service types listed below. Agency medical records were imputed using IVEware (Raghunathan). As indicated, we divided the medical records database into separate data sets based on the type of service sector so that the nature of service delivery was similar across cases.

This distinction was carried out using the service sector as indicated by the respondent in the SACA, since those data had complete information on the service sector of the agency visit (i.e., the respondent's characterization of the agency). Separate data sets were imputed based on the following partitions:

- Outpatient mental health (including day treatment center, substance abuse clinic, in-home provider, mental health center)
- Inpatient mental health facility (psychiatric hospital, group home, residential treatment center)

Data were also separated by intervention status before imputation. Imputation bounds (variable ranges) were set based on frequency distributions for non-missing cases. We imputed five data sets for each service sector data file.

The following variables were included in the imputation process; the services variables were based on the full year of service for each agency reported:

- Service type counts for the agency-year from the record review
- Service delivery characteristics from the record review (bill amount per day, number of days served, project year of service)
- Other study background characteristics (study site, risk status, cohort, whether African-American, gender)
- Information provided on the service by the family (number of days admitted, agency category)

Skewed variables from the agency data collection, namely billing amount and number of days served, were square-root transformed before imputation. Separate service sector data sets were combined after imputation, and agency cost totals were calculated post-imputation by multiplying the bill-per-day rate by the number of days served at that facility.
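The study imputed with IVEware; as a rough analogue only (not the project's actual code, variable names, or settings), the sketch below runs the same sequence with scikit-learn: square-root transform of the skewed fields, bounded multiple imputation within one service-sector data set, back-transformation, and post-imputation cost totals (bill-per-day times days served).

```python
# Sketch: an analogue of the imputation-and-costing sequence using scikit-learn's
# IterativeImputer rather than IVEware. Variable names, bounds, and values are
# illustrative assumptions, not the project's actual data or settings.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# One service-sector data set (e.g., outpatient mental health), already split
# by intervention status; NaN marks fields missing from the record review.
df = pd.DataFrame({
    "bill_per_day": [120.0, np.nan, 95.0, np.nan],
    "days_served":  [3.0, 10.0, np.nan, 2.0],
    "family_reported_days": [3.0, 12.0, 5.0, 2.0],   # family report, non-missing here
    "site": [1, 2, 1, 3],
})

# Square-root transform the skewed fields before imputation.
work = df.copy()
for col in ("bill_per_day", "days_served"):
    work[col] = np.sqrt(work[col])

imputed_costs = []
for m in range(5):  # five imputed data sets, as described in the appendix
    imp = IterativeImputer(sample_posterior=True, random_state=m,
                           min_value=0.0)            # bound: no negative values
    filled = pd.DataFrame(imp.fit_transform(work), columns=work.columns)
    # Back-transform, then compute the post-imputation agency cost per record.
    cost = (filled["bill_per_day"] ** 2) * (filled["days_served"] ** 2)
    imputed_costs.append(cost)

# Averaging across imputations here is only for display; a full analysis would
# estimate within each imputed data set and pool the results (Rubin's rules).
print(pd.concat(imputed_costs, axis=1).mean(axis=1))
```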

Agency amounts were then summed across years for each subject and merged with the full Fast Track sample; youth whose families reported no services were assigned totals of zero. This assumes that non-reporting of services by families is valid information and that, indeed, the youth received no services and spent zero dollars on agency costs across the 6 years.

Families who dropped out of the Fast Track study before the SACA was administered were not included in statistical analyses of the medical records, given the difficulty of imputing which agency a person would have been admitted to, which is necessary information for imputing record review outcomes. The final services data included total services information across years per subject, combining both agency data and respondent report of service delivery. Project year number will be the same for all youth within their cohort, although grade number will differ for those who have repeated grades or dropped out of school.

Author manuscript by Michael Foster, Journal of Mental Health Policy and Economics (available in PMC).

Abstract

Background: Antisocial behavior is enormously costly to the youth involved, their families, victims, taxpayers, and other members of society.

Aim: The Fast Track intervention is a multi-year, multi-component prevention program targeting antisocial behavior.

Methodology: The intervention is being evaluated through a multi-cohort, multi-site, multi-year randomized controlled trial of program participants and comparable children and youth in similar schools, and that study provides the data for these analyses.

Results and Discussion: The intervention lacked both the breadth and depth of effects on costly outcomes needed to demonstrate cost-effectiveness or even effectiveness.

Limitations: The outcomes examined here reflect effects observed during measurement windows that are not complete for every outcome.

Conclusion and Implications: The most intensive psychosocial intervention ever fielded did not produce meaningful and consistent effects on costly outcomes.

Future Research: Future research should consider alternative approaches to preventing youth violence.

Introduction

The costs of a life of crime include government expenditures for criminal justice (investigation, arrest, adjudication, and incarceration); costs to victims, such as medical costs, time missed from work, the value of stolen property, and loss of life; and costs that accrue to the criminal and his or her family, such as lost wages.

Method

Participants: The intervention is being evaluated through a multi-cohort, multi-site, multi-year randomized controlled trial of program participants and comparable children and youth in similar schools, and that study provides the data for these analyses.

The Fast Track Intervention: The intervention was delivered in project years 2 through 11, beginning in grade 1.

Key Outcome Domains: This article examines four outcome domains: (I) health and mental health services, (II) delinquency and involvement in the juvenile justice system, (III) school failure and special education services, and (IV) substance use.

Domain I (Health and Mental Health Services): Starting in year 7 of the project (grade 6 for most study children) and continuing annually through year 13, parents were interviewed using a modified version of the Service Assessment for Children and Adolescents (SACA).

Analytical Model

Pooling across Sub-groups: Potential heterogeneity of the treated and moderation of effects shaped our analytic approach.

Modeling Outcomes: Regression models were selected based on the distribution of the outcomes.

Results

Table 2 provides intervention effects across the outcomes described above, including pooled estimates.

Court Records: Analyses of these data focused on three outcomes.

Domain IV (Substance Abuse): Table 2 shows the results of the pooled estimates for the substance abuse outcomes.

Variation Across Subgroups: The right column in Table 2 presents significant study-level effects based on empirical Bayes estimates.

Discussion

This article provides the first and only comprehensive assessment of the Fast Track intervention on the outcomes most relevant to an assessment of program costs and benefits. In sum, these analyses suggest that the intervention lacked both the breadth and depth of effects on costly outcomes needed to demonstrate cost-effectiveness or even effectiveness. Future analyses may reveal nuanced effects for sub-groups or for fine-grained outcome measures.

Assessing Cost-Effectiveness: This article does not include a full calculation of net health benefits or another global assessment of the return on investment.



