The various ways in which research use is conceptualized (i.e., as a process or an outcome, as a general concept or as kinds -- instrumental, conceptual, persuasive, overall), coupled with the use of multiple instruments to assess nurses' use of research, challenge clinicians' and investigators' ability to directly compare findings across studies and determine the extent to which nurses use research in clinical practice. In this review, by quantifying nurses' use of research as low, moderate-low, moderate-high, or high, we were able to indirectly compare the results of the 55 included articles and conclude that the extent to which nurses report using research in clinical practice is, on average, moderate-high (with 38 of the 55 articles reporting research use in the moderate-high range) (Table 2). Caution must be used when interpreting this finding, however, because we combined different instruments (and conceptualizations of research use) in reaching this conclusion.
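The quartile approach described above can be sketched as a simple binning of scores. This is an illustrative sketch only: the function name is ours, and the assumption that each instrument's score is first normalized to that instrument's own scale range is a simplification of the review's method.

```python
def classify_research_use(score, scale_min, scale_max):
    """Bin a research-use score into the four categories used in the review.

    Hypothetical sketch: assumes the score is normalized to the
    instrument's own range before binning, which may differ from the
    exact procedure used in the review.
    """
    fraction = (score - scale_min) / (scale_max - scale_min)
    if fraction < 0.25:
        return "low"
    elif fraction < 0.50:
        return "moderate-low"
    elif fraction < 0.75:
        return "moderate-high"
    else:
        return "high"

# Example: a mean of 5.5 on a 1-to-7 Likert scale falls at the 0.75 cut-point
print(classify_research_use(5.5, 1, 7))  # -> high
```

Normalizing before binning is what allows scores from instruments with different scale ranges (e.g., 0-4 versus 1-7) to be placed in a common set of categories.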
Specific versus general research use
An examination of the extent of research use elicited by different instruments revealed little variation in the scores, regardless of whether nurses were asked to report on their use of specific research-based practices (e.g., NPQ) or on their use of research generally (e.g., RUQ). Most articles that used the NPQ (n = 10 of 12) were ranked in the moderate-high research use category. Of the ten articles that used the RUQ, eight were classified in the moderate-high category. This limited variation in reported research use for the NPQ and the RUQ suggests that an instrument effect may be at play. If so, the propensity towards moderately high use of research may reflect either regression to the mean or a lack of sensitivity of the instruments to detect changes in research use over time.
Anecdotally, we know that nurses find it challenging to respond to general questions about the extent to which they use research in practice. Importantly, such omnibus questions require nurses to first be aware that they are using research. If nurses are using research but are not aware of it, this type of question should lead to under-reporting of research utilization. However, in this review, nurses reported, on average, moderate-high use. Instruments containing questions that relate to the use of specific research evidence, on the other hand, provide a context for respondents and enable them to relate their responses to their work. This was illustrated in a recent international study with nursing service providers in Canada and Sweden [100], in which the investigators needed to provide concrete examples of research-based practices to stimulate nurses' reflection on their use of research in practice. This approach, however, has the potential to increase any social desirability effect that may exist.
Different kinds of research use
Six reports [25,64,66,70,76,84] in this review assessed the extent to which nurses reported different kinds of research use. Overall, use was highest for conceptual research use, followed by instrumental and persuasive use, with two exceptions [64,76]. The first exception is from a study of research use in Canadian nursing homes; Connor [64] reported high persuasive research use (RNs mean = 6.07, LPNs mean = 5.27 on a 7-point scale), followed by instrumental and, lastly, conceptual research use. This may be explained, however, by the context (setting/work environment) in which the nurses in this study were employed. Nurses in Canadian long-term care facilities have a largely supervisory role, overseeing the practice of healthcare aides who provide the majority of direct care. Nurses in this setting are therefore more likely to use research persuasively, that is, to convince direct care providers (healthcare aides) to provide research-based care.
The second exception can be seen in a study conducted by Milner and colleagues [76], who found that staff nurses and advanced practice nurses (educators and managers) reported similar patterns (high conceptual followed by instrumental and persuasive use) in the extent to which they use research in clinical practice. However, the extent of use (for all kinds of research) reported by advanced practice nurses was higher than that reported by staff nurses. Similar findings were noted by Veeramah [9] in a study of graduate nurses and midwives in the United Kingdom, in which 67% of nurses reported a high extent of research use. The majority (63%) of nurses in that study, however, occupied senior positions with varying degrees of managerial responsibility, autonomy, and authority, which may account for the higher extent of research use reported; nurses in management roles have greater authority to use research to implement change. These findings with respect to role are consistent with past research. While Estabrooks and colleagues [38] and Meijers and colleagues [39] located too few studies investigating role to reach a conclusion on its effect on research use, the studies they did locate consistently showed nurses in leadership roles reporting higher research use than staff nurses.
The state of the science when examining extent of research use
This review describes the range of measures of research use that have been used with nurses, and it paints a somewhat discouraging portrait. Although the use of research evidence to underpin practice is viewed as fundamentally important, this review reveals several major limitations in the field.
The first major limitation relates to methodological quality. Few studies examining nurses' use of research were methodologically strong (or even moderately strong), illustrating a need for better-designed studies. Of the 55 articles included in this review, 51 reported a cross-sectional design. This design enables researchers to capture nurses' perceptions of their use of research at a single point in time, but restricting inquiry to cross-sectional research limits advances in the field: for example, cross-sectional designs provide limited evidence for the causal inferences needed to develop interventions that increase nurses' use of research and consequently improve patient care. Four studies included in this review used a quasi-experimental design. All four measured research use pre- and post-implementation of an intervention designed to improve nurses' use of research in practice. Three of these included a control or comparison group alongside the experimental group but reported little consideration of confounding variables, limiting the studies' internal validity. Future studies in the field need to use more robust quasi-experimental and experimental (e.g., pragmatic RCTs) designs that take into consideration, and control for, threats to internal validity.
The second major limitation relates to the measures in use, which have several problems. First, there is inconsistency in the measures used, including widely varying use of language. While we were able to develop a method for comparing findings on the extent of research use by dividing research use scores into quartiles, the lack of standard language makes it difficult to compare, contrast, and evaluate findings collected with the various instruments. Second, with the exception of the NPQ, none of the research use measures identified was developed explicitly using a relevant theoretical framework. As well, none of the studies examined reported the use of measurement theory (i.e., classical test score theory or item response theory [101,102]) in the design or evaluation of the instrument. Finally, all of the instruments relied on self-report measures of research use. The advantages of self-report are well known: whether done using paper and pencil, online, or by computer-assisted telephone or personal interview, it offers cost efficiency, convenience, and time efficiency for researchers. Despite these advantages, self-report measures are often criticized. They are reports of 'perception' and, therefore, not 'objective' measures. With respect to the measurement of research use, self-report instruments are further criticized for an inability to clarify items (and thus what is meant by 'research'), an inability to probe to more fully understand what nurses mean when they report research use (or non-use), and nurses' reduced ability to recall how often they use research. The most frequent criticisms, however, are that such measures are open to social desirability bias [103] and rely on nurses' ability to recognize that they are using research. One way to reduce social desirability bias is to pay careful attention to instrument design (e.g., to item wording, item order, response options, and pre-testing) [103-105]. To this end, and positively, some of the research use measures identified in this review, while reliant on self-report, have undergone extensive feasibility and pre-testing [24,25,27]. Further, if social desirability bias were an issue in the studies identified, we would expect to see an increase in the extent of nurses' reported research use in recent years, given the current drive towards evidence-based nursing practice, of which research use is one component. Instead, extent scores have remained relatively consistent over time into the early 2000s, when the shift towards evidence-based practice emerged. We recommend future studies that examine the scores obtained from the self-report research use instruments identified in this review alongside other forms of assessment (e.g., chart audit, think-aloud, observation), and that attempt to causally associate research use scores with practice improvements and/or improved patient outcomes. Such studies would not only assess the validity of the research use scores obtained with these instruments and address concerns with self-report measurement, but also significantly advance the field. Until we have accurate and reliable measures of research use, it will not be possible to know, with any degree of certainty, whether intervention efforts are increasing nurses' use of research. Thus, in order to progress the field, robust measures of research use are critical.
The third major limitation relates to the conceptualization (theory) and resulting operationalization (the scales and scoring methods) of current research utilization measures. The published literature is characterized by multiple conceptualizations of research utilization. This influences how we define research utilization and, importantly, how we measure the construct and interpret the scores obtained from such measurement. Two conceptualizations dominate the field: research utilization as a process (i.e., consisting of a series of stages/steps) and research utilization as a variable or discrete event (also referred to as the 'variance' approach). Both types of measures were evident in this review. In many cases, however, it remains unclear how to assign meaning to the scales used, regardless of whether the measure follows a process or a variable conceptualization. For example, scores (called Total Innovation Adoption Scores) ranging from 0 to 4 are theoretically possible (and have been reported) in studies using the NPQ [26] (a process measure) to assess research utilization. A stage of adoption (as per Rogers' Innovation Decision Process Theory [106,107]) is then assigned to the resulting score: 0 to 0.49 (unaware), 0.5 to 1.49 (aware), 1.5 to 2.49 (persuasion), 2.5 to 3.49 (use sometimes), and 3.5 to 4.0 (use always). Under this scheme, a research utilization score of '1' is feasible; it is interpreted to mean that the respondent is aware of the research findings and sits at 1 (on a 0-to-4 scale) with respect to 'using research.' What is unclear is how this can be interpreted as 'using research' when no use is actually occurring. While no one would disagree that awareness is desirable and, in many cases, necessary for research utilization to occur, awareness is not research use per se, nor does it guarantee that research use will occur. In line with Rogers' Diffusion of Innovations theory (from which this scoring is stated to have been developed), an individual may be aware of the innovation (research findings) and still choose not to use it in practice if they are not persuaded of its effectiveness. This scoring method gives the impression that research use is occurring when it is not, painting a more optimistic picture of research use than actually exists. Similar pictures are painted by variable measures of research utilization. For example, items in the RUQ [63] are scored on a 5-point Likert agreement scale (1 - strongly disagree, 2 - disagree, 3 - neither agree nor disagree, 4 - agree, 5 - strongly agree), but it is unclear how to interpret these scores as quantitative measures of research use. For instance, an overall scale score of '2' is interpreted to mean that the nurse is just below average with respect to their 'use' of research; according to the scale descriptors, however, the same score implies that the nurse 'disagreed' with most of the statements about their use of research. As with the NPQ, we contend that these scores paint a more optimistic picture than actually exists. Similar scaling issues can be found in the remaining research utilization measures. As a result of these scoring problems, we believe that the extent to which nurses use research in their practice, as portrayed in the literature (and, by association, in our synthesis), is higher than what actually exists.
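The NPQ stage-of-adoption scoring discussed above can be sketched as a simple mapping from score to stage. The cut-points are those reported for the NPQ; the function name and the validity check are ours, added for illustration.

```python
def npq_stage(total_innovation_adoption_score):
    """Map a Total Innovation Adoption Score (0-4) to a stage of adoption,
    using the cut-points reported for the NPQ.

    Illustrative sketch only; the function name is hypothetical.
    """
    s = total_innovation_adoption_score
    if not 0 <= s <= 4:
        raise ValueError("score must lie in [0, 4]")
    if s < 0.5:
        return "unaware"
    elif s < 1.5:
        return "aware"
    elif s < 2.5:
        return "persuasion"
    elif s < 3.5:
        return "use sometimes"
    else:
        return "use always"

# The score of '1' discussed above maps to awareness, not actual use
print(npq_stage(1.0))  # -> aware
```

The mapping makes the interpretive problem concrete: scores below 1.5 correspond to stages ('unaware', 'aware') in which no use is occurring, yet they still contribute positive values to a scale labelled as measuring research utilization.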
Fourth, although nurses' research use has been measured by various instruments and has been studied for nearly 40 years, no benchmarking has been done. We thus have no 'gold standard' against which to compare the findings of any studies measuring nurses' use of research. A standard measure or set of measures of research use would help in such an effort. Equally, if not more important, is work that enables researchers and decision makers to evaluate the effect of different levels of research use on patient outcomes.
Progress in this field depends on having robust measures of research use. Fundamental to achieving this, we believe, are: an understanding of the validity and reliability of the measures that have been used to date; instrument development work that focuses on strengthening measurement accuracy; development of benchmarks for research use; and investigation of the impact of varying levels of research use on patient outcomes. To date, there has been little emphasis on examining the effects of varying levels of research use on patient and other outcomes (e.g., system, provider). Despite a strongly held assumption that integrating research evidence into practice will improve patient outcomes, none of the 55 articles included in this review examined associations between research use and patient or provider outcomes, so we could not assess the effect of research use on these outcomes.