Maximizing both external and internal validity in longitudinal true experiments with voluntary treatments: The “combined modified” design

https://doi.org/10.1016/S0149-7189(96)00029-8

Abstract

Human services interventions are most rigorously evaluated with true experimental designs in longitudinal experimental field trials (LEFTs). However, differential self-selection or attrition often poses a serious threat to a LEFT's internal validity. This threat can be largely overcome by describing all conditions in advance to prospective subjects and securing their agreement to participate in and complete whichever condition is selected at random by a Lottery. This solution, however, in turn poses an external validity problem: the program's effects on those who would participate in a Lottery may well differ from its effects on those who would participate in any single condition. In the present paper, we describe a new design, termed the Combined Modified Design, which assesses and overcomes this problem. In this design, a modified version of the Randomized Invitation Design (in which only one condition, assigned at random, is described to a potential subject, but outcome measures are obtained on everyone) is combined with the Lottery LEFT; the design is illustrated with a hypothetical example.
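
To make the assignment structure concrete, the sketch below simulates how a single prospective subject might be routed under the Combined Modified Design described above. It is a minimal illustration under stated assumptions, not the authors' procedure: the even split between recruitment arms, the two-condition trial, and the toy consent behavior are introduced here purely for exposition.

```python
import random

# Hypothetical two-condition trial; the condition names are illustrative only.
CONDITIONS = ["program", "control"]


def assign_subject(rng, accepts_lottery, accepts_invitation):
    """Route one prospective subject under the Combined Modified Design.

    `accepts_lottery` and `accepts_invitation` are stand-ins for the
    subject's real-world consent decisions; the even split between
    recruitment arms is an assumption made for illustration.
    """
    # Step 1: randomly route the subject to one of the two recruitment arms.
    arm = rng.choice(["lottery", "invitation"])

    if arm == "lottery":
        # Lottery LEFT arm: all conditions are described in advance, and the
        # subject agrees (or declines) to complete whichever condition the
        # lottery selects.
        if accepts_lottery():
            return {"arm": arm, "consented": True,
                    "condition": rng.choice(CONDITIONS)}
        return {"arm": arm, "consented": False, "condition": None}

    # Modified Randomized Invitation arm: only ONE condition, chosen at
    # random, is described to the subject.
    offered = rng.choice(CONDITIONS)
    consented = accepts_invitation(offered)
    # Outcome measures are collected on everyone in this arm, acceptors
    # and decliners alike.
    return {"arm": arm, "consented": consented, "condition": offered}


if __name__ == "__main__":
    rng = random.Random(42)
    # Toy consent behavior: 60% agree to the lottery, 50% accept an invitation.
    demo = assign_subject(
        rng,
        accepts_lottery=lambda: rng.random() < 0.6,
        accepts_invitation=lambda offered: rng.random() < 0.5,
    )
    print(demo)
```

Comparing outcomes across the two recruitment arms is what lets the design gauge whether the program's effects among Lottery consenters generalize to people who would volunteer for a single, separately offered condition.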

Cited by (34)

    • Modernizing our way out or digging ourselves in? Reconsidering the impacts of efficiency innovations and affluence on residential energy consumption, 2005–2015

      2019, Journal of Environmental Management
      Citation Excerpt:

      For instance, in terms of the latter, energy consumption in the United States initially fell during the Great Recession, but rebounded as the economy recovered over the next several years. Research that is restricted to either one of these periods (pre- or post-recession) runs the risk of offering conclusions that may be tainted by events of the periods—a phenomenon generally referred to as history threat to the internal validity of analytical models (see Braver and Smith, 1996; Singleton and Straits, 2018). Our study avoids this problem by using several datasets from the Residential Energy Consumption Survey that span a decade.

    • Effectiveness of energy healing on Quality of Life: A pragmatic intervention trial in colorectal cancer patients

      2014, Complementary Therapies in Medicine
      Citation Excerpt:

      Faith in God or a spiritual power has previously been associated with increased use of both CAM in general and healing in particular,10,11 but no studies so far have sought to explore faith as a possible moderator of the effect of healing. Some researchers have voiced their concern that, although being the Gold Standard in clinical research, traditional RCTs may not be externally valid, the main reason being that the results cannot be generalized to the general population, which includes people with strong treatment preferences.12–14 A systematic review of 32 medical and psychological studies employing one of two types of combined designs (1: randomization into randomization or no-randomization group, or, 2: randomization or self-selection group for those refusing randomization) found, however, that outcome differences between randomized and self-selection groups were relatively small and inconsistent in direction.15


    The writing of this paper was facilitated by NIMH grant no. P50-MH39246 to support the Arizona State University Preventive Intervention Research Center.

    The authors would like to thank the following colleagues for their critical reading of earlier drafts of this article: Drs Steven West, Irwin Sandler, Sharlene Wolchik, Mary Walton Braver, William Griffin, David MacKinnon and Steven Spaccarelli. Also thanks to Drs Rick Price and Amiram Vinokur and the members of the Arizona State University Prevention Research Postdoctoral Seminar for earlier reactions to the ideas contained herein.
