
A clinical decision support system for chronic pain management in primary care: usability testing and its relevance
  1. Kalpana Nair,
  2. Raheleh Malaeekeh,
  3. Inge Schabort,
  4. Paul Taenzer,
  5. Arun Radhakrishnan and
  6. Dale Guenter
  1. Department of Family Medicine, McMaster University, Hamilton, Canada
  2. Department of Psychiatry, University of Calgary, Calgary, Canada
  3. Department of Family and Community Medicine, University of Toronto, Toronto, Canada
  1. Author address for correspondence: Kalpana Nair, PhD, Infant and Child Health Lab, Department of Family Medicine, McMaster University, David Braley Health Sciences Centre, Hamilton, ON L8P 1H6, Canada. nairk@mcmaster.ca

Abstract

Background Clinical decision support systems (CDSSs) that are integrated into electronic medical records may be useful for encouraging practice change compliant with clinical practice guidelines.

Objective To engage end users to inform early phase CDSS development through a process of usability testing.

Method A sequential exploratory mixed method approach was used. Interprofessional clinician participants (seven in iteration 1 and six in iteration 2) were asked to ‘think aloud’ while performing various tasks on the CDSS and then complete the System Usability Scale (SUS). Changes were made to the CDSS after each iteration.

Results Barriers and facilitators were identified in four categories: system; user interface (the most numerous barriers); content (the most numerous facilitators) and technical. The mean SUS score was 81.10 (SD = 12.02) in iteration 1 and 70.40 (SD = 6.78) in iteration 2 (p > 0.05).

Conclusions Qualitative data from usability testing were valuable in the CDSS development process. SUS scores were of limited value at this development stage.

  • chronic pain
  • clinical decision support system (CDSS)
  • clinical practice guideline
  • electronic medical record
  • primary care

This article is distributed under a Creative Commons Attribution 4.0 licence: http://creativecommons.org/licenses/by/4.0/


Introduction

Clinical decision support systems (CDSSs) that are integrated into electronic medical records hold promise for facilitating better uptake of clinical practice guidelines in the practice setting.1 Unfortunately, poor clinician usability has proven to be a barrier to the uptake and success of many health information technologies, including CDSSs.

Usability refers to the ease of use of software technology and its user interface (UI).2 Attributes commonly described include learnability, efficiency, effectiveness, usefulness, accessibility and user satisfaction.2 Testing methods that include early involvement of current or potential users of the system and iterative evaluation may improve system usability.1,2

The objective of this small study was to engage end users in an iterative software development process to optimise the usability of a CDSS for chronic pain management. The purpose of the McMaster Pain Assistant (MPA) is to support interprofessional primary care management of chronic pain and to standardise care by influencing clinical decisions for low back pain and neuropathic pain.

Methods

MPA CDSS

The Open Source Clinical Application Resource (OSCAR) electronic medical record is the platform for the MPA CDSS. The MPA is a suite of electronic forms that are integrated with the workspace, workflow and databases contained within the electronic medical record. The MPA includes ‘encounter guides’ that actively and passively assist a clinician through a patient visit for low back pain, neuropathic pain and opioid management based on clinical practice guideline recommendations. The MPA provides:

  • data fields for recording assessment information during the encounter;
  • automatic filling of data fields where information exists elsewhere in the patient’s record;
  • automatic extraction from data fields (such as scores from questionnaires) into a tracking table and graphic presentation;
  • web links to educational materials for both clinicians and patients; and
  • brief video presentations to assist clinicians with difficult discussions with patients.
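
To make the extraction-and-tracking behaviour concrete, the following minimal Python sketch shows one way dated questionnaire scores could be pulled into a chronological tracking table. The data model here is entirely hypothetical; the article does not describe OSCAR's actual schema or APIs.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical record of one completed questionnaire; the real
    # OSCAR/MPA data model is not described in the article.
    @dataclass
    class ScreeningResult:
        measured_on: date
        instrument: str  # e.g. a pain-intensity questionnaire
        score: int

    def tracking_table(results, instrument):
        """Return (date, score) rows for one instrument, oldest first,
        mimicking the MPA's automatic extraction of questionnaire
        scores into a tracking table for graphing."""
        rows = [(r.measured_on, r.score)
                for r in results if r.instrument == instrument]
        return sorted(rows)

    chart = [
        ScreeningResult(date(2015, 1, 5), "pain-intensity", 8),
        ScreeningResult(date(2015, 3, 2), "pain-intensity", 6),
        ScreeningResult(date(2015, 2, 1), "opioid-risk", 3),
    ]
    for when, score in tracking_table(chart, "pain-intensity"):
        print(when.isoformat(), score)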

When this study was initiated, all of these functions were available at either a partial or full level of function. The initial development requirements for MPA functionality were based on focus group needs assessments with interprofessional end-user clinicians, as well as input from an expert panel consisting of e-health academics, programmers, pain clinicians and knowledge translation experts.

Sample and Setting

The study took place in an academic family medicine centre with two clinics, 30 physicians, eight nurse practitioners, 70 family medicine residents and a diverse allied health group. A convenience sample of potential users of the MPA was recruited and included family physicians, family medicine residents and nurse practitioners. The target sample size was 5–8 participants in each iteration, based on literature suggesting that 80% of usability deficiencies will be found with samples of 4–5 participants.3

Data Collection

Two iterations of usability testing were performed, 2 months apart, with an identical format and different participants. A demonstration version of the CDSS was used for testing, and participants received no training in its use prior to working through a scenario. Three different patient case scenarios were provided: medication renewal, diagnosis of low back pain and monitoring of neuropathic pain. Each study participant was observed separately, and sessions were facilitated by one of the authors (RM) and observed in person or on camera by a team that included co-investigators, research staff, a business analyst and the system software developer. Screencast video and audio were recorded and transcribed, and all the observers wrote field notes about what they observed and what the participant reported during the exercise.

The System Usability Scale (SUS) was completed by each participant immediately following the exercise. The SUS consists of 10 items, each scored on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), and provides an overall score of system usability.3,4 Scoring involves subtracting 1 from each odd-numbered item response and subtracting each even-numbered item response from 5, which scales each item from 0 to 4. The total is multiplied by 2.5 to give a score out of 100, which is interpreted as a percentile ranking rather than a percentage. The SUS can also provide sub-scores for learnability and usability.
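
As an illustration of the scoring rule just described, here is a minimal Python sketch; the function name and example responses are hypothetical.

    def sus_score(responses):
        """Compute an overall SUS score from ten 1-5 Likert responses.

        Odd-numbered items are positively worded: each contributes
        (response - 1). Even-numbered items are negatively worded:
        each contributes (5 - response). The 0-40 total is multiplied
        by 2.5 to yield a score out of 100.
        """
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS requires ten responses, each between 1 and 5")
        total = sum((r - 1) if i % 2 == 0 else (5 - r)  # index 0 = item 1 (odd)
                    for i, r in enumerate(responses))
        return total * 2.5

    # Example: a hypothetical participant's responses to items 1-10.
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0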

Data Analysis and Use in Development

For each iteration, observer notes and SUS scores were discussed first by the observers and then by the full research team. Directed content analysis was used, and the numbers of usability barriers and facilitators were tallied and used to generate a priority list for system revision. Means, SDs and the p-value for the difference between the two iterations were calculated for SUS scores. The MPA was modified following round 1 of testing, and the revised version was used in iteration 2. Further modifications followed the second round of testing.
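
The article does not state which significance test was used to compare the iterations; a plausible choice for two small independent samples is Welch's t-test, sketched below in Python with hypothetical placeholder scores (the study's per-participant data are not reported).

    from statistics import mean, stdev
    from scipy import stats

    # Hypothetical placeholder SUS scores for illustration only; the
    # study's raw per-participant scores are not reported.
    round1 = [85.0, 92.5, 70.0, 95.0, 77.5, 82.5, 65.0]  # 7 participants
    round2 = [72.5, 60.0, 75.0, 70.0, 77.5, 67.5]        # 6 participants

    for label, scores in (("Round 1", round1), ("Round 2", round2)):
        print(f"{label}: mean = {mean(scores):.2f}, SD = {stdev(scores):.2f}")

    # Welch's t-test does not assume equal variances between groups.
    t_stat, p_value = stats.ttest_ind(round1, round2, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")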

Results

Participants

Seven clinicians (three family physicians, three nurse practitioners and one family medicine resident) participated in round 1 testing. Six new clinicians (three family physicians, two nurse practitioners and one family medicine resident) participated in round 2.

Qualitative Data

Transcribed audio recordings and observer field notes were analysed, yielding four categories of usability barriers and facilitators: system, UI, content and technical (see Table 1). In both rounds, most identified barriers related to the UI, and the UI and technical categories were modified most extensively between testing rounds. For example, titles of resources were identified as ambiguous in round 1; between rounds they were reworded for clarity, and round 2 demonstrated that this issue was largely resolved.

Table 1. Number of usability barriers of the MPA

Most identified facilitators related to content, and many related to the UI. For example, participants felt that the added feature of a timeline correlating medication with pain screening scores could help track changes in pain scores based on medications taken. Changes between rounds 1 and 2 of testing were not analysed for statistical significance.

SUS Score

The overall SUS score decreased from 81.10 (SD = 12.02) out of a possible 100 points in round 1 to 70.40 (SD = 6.78) in round 2. This decrease was not statistically significant (p = 0.86; Table 2). Subscale scores were calculated for learnability (SUS items 1 and 4), with a round 1 mean of 3.57 (SD = 1.72) and a round 2 mean of 3.16 (SD = 1.34; p = 0.78), and for usability (the remaining SUS items), with a round 1 mean of 3.16 (SD = 1.40) and a round 2 mean of 2.73 (SD = 1.06; p = 0.69).

Table 2. SUS scores in rounds 1 and 2

Discussion

Although well established within the software and application development arenas, usability testing is less commonplace within the health care context. Trafton et al.5 carried out a process similar to ours, testing the usability of a CDSS for opioid therapy for chronic, non-cancer pain by primary care providers. ‘Think aloud’ protocols in combination with SUS, interviews and log files were used to improve the system design and the UI.4 Similarly, this process was useful in the early development of the MPA CDSS.

Our testing involved two small groups of different participants over two iterations. As a result, the SUS scores, although reassuringly within the acceptable range by industry standards (i.e. above 68), were not statistically useful. In addition, applying the SUS to a broad and complex clinical task made the meaning of the score difficult to interpret. Conventions of usability testing support our small sample,3 but we were aware that ‘outliers’ (i.e. participants with less computer experience compared with generally comfortable participants) were influencing our results.

This being said, the overall testing process was highly beneficial to design and development for several reasons. Testing occurred early in development, allowing for changes in design elements. The qualitative ‘think aloud’ data provided specific critique and suggestions. Observers and data reviewers came from diverse roles, enabling a broadly informed analysis of the findings. The software system developer was present and engaged in all steps of the testing, supporting ‘in the moment’ integration of user observations with ideas for modification, especially as related to UI and technical aspects. The overall process strongly influenced the ongoing system design, which continued following the second round of testing and will be evaluated in a larger study of the final, fully implemented CDSS.

Acknowledgments

The authors would like to thank all the participants for their time and thoughts.

References

  1.
  2.
  3.
  4.
  5.

Footnotes

  • Competing Interests: None

  • Funding: This work was supported by the Department of Family Medicine, McMaster University, Hamilton, Canada, and the Lawson Health Research Institute, London, Ontario, Canada.