Visions and strategies to improve evaluation of health information systems: Reflections and lessons based on the HIS-EVAL workshop in Innsbruck

https://doi.org/10.1016/j.ijmedinf.2004.04.004

Abstract

Background: Health care is entering the Information Society. It is evident that the use of modern information and communication technology offers tremendous opportunities to improve health care. However, there are also hazards associated with information technology in health care. Evaluation is a means to assess the quality, value, effects and impacts of information technology and applications in the health care environment, to improve health information applications, and to enable the emergence of an evidence-based health informatics profession and practice.

Objective: In order to identify and address the frequent problems of getting evaluation understood and recognised, to promote transdisciplinary exchange within evaluation research, and to promote European cooperation, the Exploratory Workshop on “New Approaches to the Systematic Evaluation of Health Information Systems” (HIS-EVAL) was organized by the University for Health Sciences, Medical Informatics and Technology (UMIT), Innsbruck, Austria, in April 2003, with sponsorship from the European Science Foundation (ESF).

Methods: The overall program was structured in three main parts: (a) discussion of problems and barriers to evaluation; (b) defining our visions and strategies with regard to evaluation of health information systems; and (c) organizing short-term and long-term activities to realize those visions and strategies.

Results: The workshop participants agreed on the Declaration of Innsbruck (see Appendix B), comprising four observations and 12 recommendations with regard to evaluation of health information systems. Future activities comprise European networking as well as the development of guidelines and standards for evaluation studies.

Conclusion: The HIS-EVAL workshop was intended as the starting point for setting up a network of European scientists working on evaluation of health information systems, obtaining synergy by combining the research traditions of different evaluation fields and thus opening a new dimension of collaboration in further research on information systems evaluation.

Introduction

Nowadays, it is hard to imagine health care without Information and Communication Technology (ICT). Information technology in health care has existed for about three decades, and has gained widespread usage. Electronic patient records offer health care professionals access to vast amounts of patient-related information; decision support systems support clinical actions; and knowledge servers allow direct access to state-of-the-art clinical knowledge to support evidence-based medical practice. Communication technology has provided standardized healthcare-related communication protocols, which enable exchange of all kinds of information among health care parties. Networked health care environments are being developed in which regional health information systems support seamless care and thus enable provision of and access to health services and health-related information across organizational, regional and national boundaries. Health care is indeed entering the Information Society [1], [2].

The term ICT refers to technologies as such. Whether the use of these technologies is successful depends not only on the quality of the technological artifacts but also on the actors, i.e. the people involved in information processing and the organizational environment in which they are employed. ICT embedded in the environment, including the actors, is often referred to as an Information System (IS) in a sociotechnical sense [3].

Introduction of ICT can radically affect health care organization, health care delivery and outcome. It is evident that the use of modern ICT offers tremendous opportunities to support health care professionals and to increase the efficiency, effectiveness and appropriateness of care [4], [5]. However, there can also be hazards associated with information technology in health care. ICT can be inappropriately specified, have functional errors, or be unreliable, user-unfriendly or ill-functioning, or the environment may not be properly prepared to accommodate the ICT in the working processes (compare, e.g. [6], [7], [8]). Such breakdowns and failures may negatively affect the working processes and decisions of health care providers and may result in harm to patients, i.e. ICT can create adverse side effects in the care process [9]. Good medical practice implies that one is aware of the possible side effects of one’s actions and has insight into the implications of those side effects. Similarly, there is a need to evaluate ICT systems that are (intended to be) in operation in a health care environment, in order to identify potential side effects. Such evaluations should not only be carried out during operation (summative evaluation)—as in post-marketing surveillance of drugs—but also during system development (constructive, formative evaluation during system analysis, design and implementation), so as to avoid misalignment between the intended system and the system actually being developed, and to identify harmful consequences as early as possible.

‘Evaluation’ is often defined as the act of measuring quality characteristics of an object. However, those measures have no value in themselves—they need a context within which they are judged or used: there has to be a question to be answered. We therefore prefer to use the concept of ‘evaluation’ in the following sense:

Evaluation is the act of measuring or exploring properties of a health information system (in planning, development, implementation, or operation), the result of which informs a decision to be made concerning that system in a specific context.

Typical evaluation questions are, for example (cp. also [10]):

  • Is the technology usable in the intended environment and for the intended user group and task? Do the different user groups (e.g. physicians, nurses, and administrative staff) accept the ICT and use it as intended? What are the patterns in the users’ attitude towards the (future) system, and their pattern of behaviour? Have the users had sufficient training and guidance to be able to use the technology appropriately?

  • How does the technology affect structural or process quality (e.g. time saving, data quality, clinical workflow)? What are the effects of an information system on the quality of patient care (outcome quality)? To what extent does the information system meet not only the requirements but also the objectives? What are the reasons for the observed effects?

  • What are the investment and operational costs of ICT-based solutions? Are they cost-effective? What is their return on investment?

  • What are the problem areas of an information system in daily operation? What are current pitfalls with it, and how can it be improved?

  • What are the organizational and social consequences of introducing ICT into health care environments, and how can we address these aspects in design, development and installation so as to achieve the planned changes in working structures, work content and work environments?

The necessity, but also the complexity, of evaluation studies has been discussed in the literature for some years now (compare, e.g. [11], [12], [13], [14], [15], [16]). Reasons for problems encountered during evaluation studies were identified as follows:

  • Insufficient availability of evaluation methods, guidelines and toolkits to cope with the complexity of health care information systems, which arises from a combination of technical, organizational and social issues.

  • Insufficient collaboration between evaluation researchers from different academic fields and traditions.

  • Little support from methods and guidelines for constructive (formative) evaluation within an implementation or installation project, since many studies focus on summative aspects.

  • Evaluation studies are often costly yet inadequate, failing to ask, or being unable to answer, the important questions of information systems evaluation.

  • Limited value of evaluation reports to others, because they lack sufficient information to enable others to adopt the approach or to judge the validity of the conclusions.

Additionally, the innate organizational resistance to evaluation has been identified as a barrier for doing evaluation studies [17]. Reasons include the diversion of resources from activities that are perceived as more creative, the reluctance to find and publicize “failures” or “mistakes”, and concern about encouraging damage-seeking litigation. To counterbalance this, better publicity of evaluation approaches, but above all of the proven benefits of evaluation and adoption of lessons learned, are needed.

There have been some earlier initiatives to address these problems.

  • SYDPOL comprised a number of Nordic collaborative projects under the Nordic Council of Ministers. Its Working Group 5 (1986–1989) focussed on computer-based decision support for clinical work within health care, resulting in guidelines for user evaluation of medical decision support systems [18], [19].

  • In 1989 a workshop on “System Engineering in Medicine” was organized, sponsored by the European Commission under the Medical and Health Research Programme (COMAC-BME). One topic of that workshop was devoted to the evaluation of decision support systems [20]. One of the conclusions was that further development of evaluation methods was needed. This topic was elaborated further in the accompanying measure under the COMAC-BME programme, for which the SYDPOL report was one of the input sources. Although no final conclusions have been published, papers by Wyatt and Spiegelhalter reflect some of the topics discussed during that time [21], [22].

  • ASSIST was an EU fourth Framework project (1989–1990) with the purpose of developing a framework for assessment of medical applications. The framework was focussed on identifying important dimensions, issues and criteria for assessment rather than on developing guidelines or methodologies for assessment (unfortunately, there is no publicly available material on this work).

  • A working conference on “Assessment of Medical Informatics Technology” was held in Montpellier [23] as early as 1990, with the aims (1) to “develop a dialogue between the fields of medical informatics and Health Technology Assessment in order to share current states of progress and to build an agenda for future work and collaboration”; and (2) to issue “recommendations about the methodological requirements for evaluating Health Information Systems”. Its recommendations include, among others: (i) the need to clarify and define a terminology; (ii) the identification of possible techniques and methods; (iii) the urge for constructive assessment in a life-cycle perspective; (iv) that the emphasis should go far beyond the technology alone and include the aspects also addressed in Health Technology Assessment (HTA); (v) inter-disciplinarity with (inter)national cooperation and exchange of data; (vi) enhancing the ideology of health care assessment in all respects by promotion, dissemination and education. A concrete result of this meeting was the establishment of the IMIA working group on Technology Assessment and Quality Development.

  • A number of assessment activities have taken place in the AIM (Advanced Informatics in Medicine) programme of the EU. The concerted action ATIM (Assessment of Information Technology in Medicine, 1993–1994) aimed at making an inventory of approaches to evaluation methodologies. ATIM mainly considered two types of applications: (1) knowledge-based systems; and (2) imaging systems and clinical workstations. ATIM gathered methodological approaches and experiences from other AIM projects such as KAVAS-II, EURODIABETA, TELEGASTRO, ISAR, COVIRA, SAMMIE, EurIpacs and MILORD, the results of which were published as a number of individual contributions in [24].

  • Megataq, an ESPRIT project running under the EU fifth Framework Programme (http://www.megataq.mcg.gla.ac.uk/information.html), addressed the evaluation of IT systems, focusing however on CSCW (computer-supported cooperative work) systems and usability aspects. One of the team’s obligations was to compile a list of assessment studies published in the literature, yet they limited themselves to the field of informatics (information technology, computer science) and consequently missed the opportunity to harvest the experiences of application-oriented domains such as medical informatics. Unfortunately, their website has been inactive since mid-2002.

  • The VATAM project (Validation of Telematics Applications in Medicine; http://www-vatam.unimaas.nl) was launched in 1996, based on the results of other EU-funded projects (primarily ATIM), as an accompanying measure on criteria and methods for the validation of projects with emphasis on health care. The purpose of VATAM was to take stock of the validation of health care telematics applications in EU research projects, with the objective of enhancing them. The initiative made recommendations to improve validation and provided guidelines describing the main steps of evaluation [25]. The goal of providing a list of usable tools was less successful, but it formed the basis for more recent work.

  • In addition to this work several books address the issue of evaluation of IS and ICT, e.g. [26], [27], [28] (an annotated bibliography is available at http://www.umit.at/efmi/bibliography.htm).

The IMIA working groups on Organizational and Social Issues and on Technology Assessment and Quality Improvement also recognized the need to assemble and disseminate knowledge and experience on the evaluation of ICT in health care. This resulted in a workshop in Helsinki in 1998, the proceedings of which were published in a special issue of the International Journal of Medical Informatics [29].

Networking initiatives such as the Working Group for Assessment of Health Information Systems of the European Federation for Medical Informatics (EFMI WG EVAL, http://www.umit.at/efmi) and the Working Group Technology Assessment and Quality Improvement of the International Medical Informatics Association (IMIA WG TA, http://www.imia.org) try to support interdisciplinary information exchange, e.g. by organizing workshops and tutorials at medical informatics conferences or by offering a database of assessment publications. However, the voluntary basis of those initiatives limits their impact.

Overall, the problems seem clear; the solutions, however, are still unsatisfying. The recommendations of the Montpellier meeting still seem valid today. Why has so little progress apparently been made over more than a decade? One important factor seems to be the transdisciplinary nature of evaluation theory and practice: methods and approaches from various disciplines have to be combined and adapted to best solve the problem at hand, which requires combined expertise from, e.g., medical informatics, computer science, biostatistics, psychology, the social sciences and health economics [30].

Each research tradition has its unique set of methods, tools and guidelines, enabling it to answer specific evaluation questions. Progress has been made, but awareness that other domains have to be taken into account is not yet widespread in the medical informatics community; a transdisciplinary discussion to promote evaluation research is still in its infancy. A second factor making progress difficult is the lack of strong published evidence of the benefits gained from investing skills and resources in evaluation studies, whether in individual studies or in the development of methodologies. A third reason seems to be the resistance of decision makers and health IT system proponents to the idea of “mistakes” being identified and highlighted [9].

In order to promote transdisciplinary collaboration within evaluation research and evaluation practice, an Exploratory Workshop on “New Approaches to the Systematic Evaluation of Health Information Systems” (HIS-EVAL) was organized by the University for Health Sciences, Medical Informatics and Technology (UMIT) in Innsbruck, Austria. It took place from 4 to 6 April 2003 and was funded by the European Science Foundation (ESF).

The objectives of the workshop were:

  • to bring together experts from computer science, medical or health informatics, economics, health care, health care management, biostatistics, psychology, sociology, and other disciplines, in order to foster a dialogue and exchange on methodological issues between researchers from different traditions;

  • to offer an opportunity for the participating scientists to share their knowledge with the aim of obtaining a profitable cross-fertilization among different fields of expertise and especially between quantitative and qualitative evaluation research;

  • to initiate a combined research agenda for developing guidelines and toolkits that support the adequate use and combination of evaluation methods and tools in information systems evaluation;

  • to discuss and clarify the networking needs in long-term evaluation research in medical informatics, and to initiate combined research activities at a European level.

In total, 23 researchers from 10 European countries participated in the workshop (see Appendix A).

Methods

The HIS-EVAL workshop was organized as an alternation of plenary discussion sessions and smaller working groups. The outcome of each plenary discussion was used to refine and concretize the tasks for the succeeding working-group sessions.

The overall program was structured along three major questions:

  1. What are the problems and barriers to evaluation of health information systems? In this first part, experiences from the various evaluation fields were gathered and structured. The different viewpoints

Results

The main points of discussion are briefly summarized below. A detailed protocol of the workshop is available at http://bisg.umit.at/hiseval. The following summary is structured according to the three main parts of the workshop.

Discussion

The HIS-EVAL workshop brought together experts in IS evaluation with various backgrounds and traditions to discuss theory and practice of evaluation of IS in health care. It was an initiative to build up an enduring evaluation research network for health information systems on a European level, with further activities such as research proposals, conferences, tutorials, workshops, and publication activities.

The Declaration of Innsbruck is a first important result. Many visions and possible

Acknowledgements

We appreciate the support of the European Science Foundation, enabling us to accomplish the ESF Exploratory Workshop on New Approaches to the Systematic Evaluation of Health Information Systems (HIS-EVAL). A special thanks to Hui Wang for his support in this respect, to Gudrun Hübner-Bloder and Frieda Kaiser for local organization, and to Karl-Peter Pfeiffer for scientific support.

References (31)

  • P. Beynon-Davies et al., When health information systems fail, Top. Health Inf. Manage. (1999)
  • M. Rigby et al., Verifying quality and safety in health informatics services, BMJ (2001)
  • A. Stoop et al., Integrating quantitative and qualitative methods in patient care information system evaluation—guidance for the organizational decision maker, Int. J. Med. Inf. (2003)
  • D.E. Forsythe, B.G. Buchanan, Broadening our approach to evaluating medical information systems, in: P. Clayton (Ed.),...
  • J. Wyatt, D. Spiegelhalter, Field trials of medical decision-aids: potential problems and solutions, in: P. Clayton...