Program theory evaluation: Logic analysis

https://doi.org/10.1016/j.evalprogplan.2010.04.001

Abstract

Program theory evaluation, which has grown in use over the past 10 years, assesses whether a program is designed in such a way that it can achieve its intended outcomes. This article describes a particular type of program theory evaluation—logic analysis—that allows us to test the plausibility of a program's theory using scientific knowledge. Logic analysis is useful for improving the intervention or finding alternatives for achieving intended outcomes; it influences the choice of type of evaluation to conduct and strengthens the validity of subsequent evaluations.

The objective of this article is to present the methodological principles and the roots of this type of program theory evaluation. We illustrate two types of logic analysis with two actual evaluation cases. There are very few published examples of program theory evaluation. This article will provide evaluators with both theoretical and practical information to help them in conceptualizing their evaluations.

Introduction

“Programs are complicated phenomena, generally born out of experience and professional lore” (Weiss, 1998). When requesting an evaluation, stakeholders generally want to know if what they are doing works and how they might improve their intervention. Program theory evaluation can often provide that kind of information without mobilizing the research effort that an effects analysis requires.

Weiss defines program theory as “the mechanisms that mediate between the delivery (and receipt) of the program and the emergence of the outcomes of interest” (Weiss, 1998). Logic modelling is generally presented in the evaluation literature as a way to open the black box to better understand finer causal mechanisms. Some sophisticated logic modelling approaches include evidence-based depictions of the causes of the targeted problem, to better appreciate the intervention's potential impact (Renger and Hurley, 2006, Renger and Titcomb, 2002). While this is an important step in better understanding the action mechanisms of the intervention, it is still not enough. As evaluators, we should also question the validity of the intervention's chain of action (validity of the means), and we should test the scientific plausibility of the program's theory. In fact, it could be argued that program theory does not really reflect how the intervention produces the intended outcomes, but rather, stakeholders’ perceptions and beliefs, right or wrong, about the mechanisms that operate between the delivery of the intervention and the intended outcomes. The whole evaluation is then built on the consensus reached on stakeholders’ beliefs and perceptions. But what do we do as evaluators if these are incomplete or, worse, if they are wrong? Can we really build valid evaluations based on the prior analysis of a program's theory that reflects what people think, but not what the intervention does (Chen, 1990a, Chen, 1990b)?

As we will argue here, testing the program's theory before entering more deeply into the evaluation process can provide important insights into the validity of the intervention's means of action while mobilizing stakeholders in a valuable exercise of reflection (Pawson et al., 2005, Renger and Hurley, 2006, Page et al., 2007). It can also help in choosing the type of subsequent evaluation that will best suit the context and the intervention's characteristics. There is an important body of documented experience of program theory evaluation (Campbell, 1966, Chen and Rossi, 1987, Lipsey, 1989, Marquart, 1990, Smith, 1990, Weitzman et al., 2002), and several approaches and methodologies have been explored (Bickman, 1987a, Conrad and Miller, 1987, Cook, 2000, Lipsey, 1993, Weiss, 2000). Yet in most cases important dilemmas were raised and significant evaluation effort was expended (Weiss, 1997), which may help explain why program theory evaluation is not more systematically used. In this article, we describe our experience with a particular type of program theory evaluation called logic analysis (Champagne et al., 2009a, Contandriopoulos et al., 2000). Logic analysis is a program theory evaluation based on existing knowledge; it draws on expert judgment and scientific literature reviews. We begin with a presentation of the conceptual, methodological, and historical aspects of logic analysis. We then illustrate the discussion with two examples: the logic analysis of the Research Collective on Primary Care Services and the logic analysis of integrated services for persons with dual diagnoses of mental health and substance use disorders.


Definition

Logic analysis is an evaluation that allows us to test the plausibility of a program's theory using available scientific knowledge—either scientific evidence or expert knowledge (Champagne et al., 2009a, Contandriopoulos et al., 2000, Rossi et al., 2004). First, a clarification: logic analysis is not logic modelling. The model we present here has its roots in the approach that has been developed and taught for the past 20 years at the University of Montreal. This new proposal is built upon our

Two examples of logic analysis

We used logic analysis twice, on different interventions. The evaluation of the Research Collective on Primary Care Services (Brousselle et al., 2009, Contandriopoulos et al., 2008) illustrates direct logic analysis, while the logic analysis of integrated services for persons with dual diagnoses of mental health and substance abuse provides an example of reverse logic analysis (Brousselle, Lamothe, Mercier, & Perreault, 2007). Both interventions can be considered complex, as their

Discussion

Logic analysis proved to be an important evaluation to conduct before launching other evaluation activities. In fact, in both cases described above, it had important consequences for the various stakeholders, but also for the evaluation itself.

Let us consider the impacts of direct logic analysis on the evaluation of the Research Collective. First, our use of a deliberative strategy to discuss the conceptual framework and to assess the Collective's impact potential with the researchers clearly

Conclusion

In 1997, Weiss underlined the fact that theory-based evaluation “has generated considerable interest, but it appears to be having only marginal influence on evaluation practice” (Weiss, 1997). Thirteen years later, this is still the case. Her article raises several possible explanations for this phenomenon. Half are related to the difficulties of constructing the program theory due to the multiple theories of the agents or because of the hyperrationalism of the modelling approach compared to

Acknowledgements

The authors gratefully acknowledge the Fonds de Recherche en Santé du Québec and the Canadian Institutes of Health Research for their support.


References (106)

  • D. Austen-Smith. Information and influence: Lobbying for agendas and votes. American Journal of Political Science (1993)
  • F.R. Baumgartner et al. Basic interests: The importance of groups in politics and in political science (1998)
  • J.M. Berry. The interest group society (1997)
  • J.M. Beyer et al. The utilization process: A conceptual framework and synthesis of empirical findings. Administrative Science Quarterly (1982)
  • L. Bickman. Editor's notes
  • L. Bickman (1987)
  • L. Bickman (1990)
  • J.D. Birckmayer et al. Theory-based evaluation in practice: What do we learn? Evaluation Review (2000)
  • P. Brisson. Développement du champ québécois des toxicomanies au XXe siècle
  • A. Brousselle et al. How logic analysis can be used to evaluate knowledge transfer initiatives: The case of the Research Collective on the Organization of Primary Care Services. The International Journal of Theory, Research and Practice (2009)
  • Brousselle, A., Lamothe, L., Sylvain, C., Foro, A., & Perreault, M. (accepted). Key enhancing factors for integrating...
  • Brousselle, A., Lamothe, L., Sylvain, C., Foro, A., & Perreault, M. (in press). Integrating services for patients with...
  • D.T. Campbell. Pattern matching as an essential in distal knowing
  • D.P. Carpenter et al. The strength of weak ties in lobbying networks: Evidence from health-care politics in the United States. Journal of Theoretical Politics (1998)
  • D.P. Carpenter et al. The strength of strong ties: A model of contact-making in policy networks with evidence from U.S. health politics. Rationality and Society (2003)
  • D.P. Carpenter et al. Friends, brokers, and transitivity: Who informs whom in Washington politics? Journal of Politics (2004)
  • Champagne, F. (2002). The ability to manage change. Ottawa: Commission on the future of health care in Canada: the...
  • F. Champagne et al. L’analyse logique
  • F. Champagne et al. Modéliser les interventions
  • F. Champagne et al. Évaluation de la programmation régionale de soins ambulatoires (2001)
  • F. Champagne et al. Introduction: Towards a broader understanding of the use of knowledge and evidence in health care
  • H.T. Chen. Issues in constructing program theory
  • H.T. Chen. Theory-driven evaluations (1990)
  • H.-T. Chen. The roots of theory-driven evaluation. Current views and origins
  • H.T. Chen. Practical program evaluation: Assessing and improving planning, implementation and effectiveness (2005)
  • CHSRF. The theory and practice of knowledge brokering in Canada's health system (2003)
  • CHSRF. Conceptualiser et regrouper les données probantes pour guider le système de santé (2005)
  • K.J. Conrad et al. Measuring and testing program philosophy
  • D.A. Conrad et al. Integrated health systems: Promise and performance. Frontiers of Health Services Management (1996)
  • A.-P. Contandriopoulos et al. Intégration des soins: Concepts et mise en œuvre (2001)
  • A.-P. Contandriopoulos et al. L’évaluation dans le domaine de la santé : Concepts et méthodes. Revue d’épidémiologie et santé publique (2000)
  • D. Contandriopoulos et al. Evaluating interventions aimed at promoting information utilization in organizations and systems. Healthcare Policy (2008)
  • T.D. Cook. The false choice between theory-based evaluation and experimentation
  • E.J. Davidson. Ascertaining causality in theory-based evaluation
  • P. Davis. The limits of realist evaluation: Surfacing and exploring assumptions in assessing the best value performance regime. Evaluation (2005)
  • J.-L. Denis et al. Convergent evolution: The academic and policy roots of collaborative research. Journal of Health Services Research & Policy (2003)
  • S.I. Donaldson. Theory-driven program evaluation in the new millennium
  • S.I. Donaldson. Program theory-driven evaluation science: Strategies and applications (2007)
  • P.H. Feldman et al. State health policy information: What worked? Health Affairs (1997)
  • M.-J. Fleury. Émergence des réseaux intégrés de services comme modèle d’organisation et de transformation du système sociosanitaire. Santé mentale au Québec (2002)

Astrid Brousselle is a professor in the Department of Community Health Sciences, University of Sherbrooke, Canada, and a researcher at the Charles LeMoyne Hospital Research Centre. Her areas of expertise are evaluation, health economics, and health services research.

François Champagne is a professor in the Department of Health Administration and a senior researcher in the Institute of Health Research of the University of Montreal. His areas of expertise are evaluation and health services research.
