Overview of findings
“Fracture risk” was identified early in the analysis as an organizing theme bridging several clusters of codes and spanning several topic areas (referral, report interpretation, and report communication). Under this theme, two major subthemes were developed: (1) questioning by family physicians of reported fracture risk assessments’ accuracy and (2) family physicians’ independent assumption of responsibility for risk assessment and its interpretation. Results illustrating these themes were organized by the lead author (SA) and verified by members of the research team (SA, SM, LC, and SJ).
Questioning by family physicians of reported fracture risk assessments’ accuracy
During interviews, the majority of family physicians indicated that they questioned the accuracy of the risk assessments on BMD reports. The specific manner in which the accuracy of fracture risk assessments was questioned, however, varied significantly. Three major subcategories were identified in an effort to capture this variation; these reflected questioning by family physicians of: (1) accuracy in raw bone mineral density measures (e.g., g/cm2); (2) accurate inclusion of modifying risk factors; and (3) the fracture risk assessment methodology employed.
Among physicians who questioned the accuracy of raw bone mineral density measures, some identified technical factors as probable sources of error while others identified factors associated with the interpretation of images with artifacts. Accuracy concerns involving technical factors were generally described only broadly and related to either antiquated or ill-maintained equipment. As an example, Participant #7 had several patients call a particular scanning facility “dirty,” and added:
“…because it was a private [facility], I’m not really sure how old the scanner was. I mean, if patients complain the place isn’t clean, you kind of wonder about the equipment.”
Participant #19 also questioned the accuracy of raw BMD measures on reports, but located a probable source of error in the interface between the BMD scanning machine and reports. This participant described reviewing scanned attachments to reports to verify raw BMD data:
“I do skim the actual absolute numbers because sometimes… you know, we’re all human. Sometimes they report [raw BMD]… incorrectly.”
Other participants, by contrast, identified compromising issues related to the clinical interpretation of raw BMD measures, particularly for patients with osteoarthritis or manifest degenerative change in the spine. As Participant #2 explained:
“I say ‘this patient has osteoarthritis, I don’t care what the report tells me’…. [On the report] the bone density is normal and people will say ‘but I’m normal!’ I say ‘no, but you fractured so you’re not normal’. This is my problem with bone density.”
Rather than commenting on inaccuracy due to artifacts, Participant #7 commented on inaccuracy due to inappropriate choices of regions of interest:
“Some facilities… do not report the femoral neck T-score at all. They report total hip. If you don’t have your eyes open and look carefully, you could use the wrong T-score. I have to go through the subsequent pages and find femoral neck…”
A more substantial number of participants, however, focused their questioning around reports’ inclusion of modifying risk factor information. Participants pointed specifically to the fact that age constraints, fracture history, treatment status, or other clinical variables were often missing from the BMD reports they received. Participant #18 explained:
“Sometimes they’re wrong…. I’ve seen people not put in fractures [on reports] when I know that [the patient] had them.”
Similarly, Participant #13 noted missing risk factors on reports, as well as missing risk assessments where they were applicable:
“Risk factors are not incorporated into reports consistently…. If you look at the various reports, there are some whereby they’re still reporting just the T-scores.”
Opinions regarding why and how modifying risk factors might be incorrectly factored into assessments were various. Three participants explained that, when patients are asked about risk factors, the resulting information is generally low in quality. As an example of this, Participant #6 described a study in which a resident compared risk factors reported by patients on patient questionnaires to clinical data stored in patient charts:
“It was kind of interesting that even when you get patients to fill [the questionnaires], a lot of times they’re either incomplete or they’re actually not accurate.”
Several physicians, however, indicated that they felt specific clinicians played a role in inaccurate capture of modifying patient risk factors. For example, a few family physicians questioned technologists’ ability to gather accurate or relevant information about modifying risk factors.
“It really depends on who’s taking the [patient] history at the bone density reporting facility in terms of the consistency of the information that you’re going to get from the patient, because not all fractures are fragility fractures. You need a… technologist educated around fragility fractures and other risk factors.” (Participant #13)
Participant #18 also focused questioning around technologists’ ability to accurately collect patient information:
“[Technologists] don’t always have stuff about falls. Sometimes they’re wrong…. you know, someone’s forgotten they’ve had a fracture or they think they had a fracture when they actually didn’t or something like that. There’s a whole lot of different kinds of things that make you just kind of wonder if they really know the person… like, we probably know them better.”
Like Participant #18, many of the interviewed family physicians identified themselves as better sources of risk factor information than the clinicians at scanning facilities. Participant #8, for example, posited that:
“[Reports] will have comments…. ‘this person just recently had a fracture’, because they asked through the history. But I know a little bit more.”
To address the information gap between the family physicians and the radiologists, Participant #19 described modifying the standard BMD requisitions in an effort to better communicate risk factors to reading radiologists:
“I provide whatever I think they need [on the requisition] instead of just circling high risk. My radiology colleagues and friends, they often go like ‘yeah we are blind’… because the radiologists don’t always have a chance to see the patient.”
A relatively small number of participants, by contrast, felt that modifying risk factor information assembled by BMD scanning facilities might actually be more trustworthy than the same information assembled by family doctors. The reasons for this revolved around the hectic pace of family doctors’ offices as well as many family physicians’ relative unfamiliarity with relevant risk factors. As an example, Participant #21 noted, “if the office is crazy then it may be hard for us to look at all the risk factors.” Participant #11 explained that older family doctors, in particular, may not be familiar with all the modifying risk factors emphasized by recent clinical guidelines [1]. This participant volunteered that, upon returning to practice after a hiatus:
“I had a conversation with my residents and my medical students about [recent guidelines] and… it wasn’t anything new to them. It was relatively new to me. I think people in my generation, the message isn’t getting out… I’m going to have to look [the new guidelines] up a few times [before] it’s going to be stuck in my brain.”
As illustrated above, questioning of reported risk accuracy related to raw BMD measures or missing modifying factors was extensively represented in the interview data. In addition, a relatively small but rich subcategory of questioning focused on fracture risk assessment methodologies. This encapsulated both questioning of the CAROC as a potentially incomplete or excessively abbreviated assessment tool and questioning of fracture risk assessments’ application to older patients with multiple, complex conditions. As an example, two participants (#7 and #3) expressed a basic skepticism of the CAROC assessment tool because it incorporates a limited number of modifying risk factors relative to competing assessment tools, like the FRAX. As Participant #7 explained:
“The CAROC doesn’t incorporate the smoking or the alcohol or the parental hip fracture or whatever. It only incorporates glucocorticoids and a previous fracture…. They said it’s just as good [as the FRAX], but it’s not.”
Questioning of fracture assessment methodology also focused on assessment tools’ applicability to older, more “fragile” individuals and those with multiple conditions. Examples of this variety of questioning follow:
“My patient population is 75 plus. The amount of research that goes into just baseline figuring out what the prognosis is of someone at the age of 75, 80 or 85 is nonexistent. They’re excluded from almost every randomized controlled trial. So I’m not sure what business we have of prognosticating 10-year risks on people over the age of 80…. 10-year risks are ridiculous. Nobody knows what’s going to happen 10 years down the line.” (Participant #17)
“If the person is 85 and they’re likely not to live more than a few years, maybe it doesn’t matter if the risk is high over 10 years.” (Participant #15)
In summary, while questioning of the accuracy of the components used to produce risk assessments (i.e., raw measures of BMD and information about modifying factors) was pervasive, a small but vocal group of physicians also questioned the basic methodology used to arrive at 10-year fracture risk assessments. Some felt the CAROC, which is commonly used by radiologists in Canada, to be incomplete or inferior to the FRAX, while others felt the 10-year horizon on risk assessments to be inapplicable to their older patients, in particular.
Family physicians’ independent assumption of responsibility for risk assessment and its interpretation
Many family physicians also described independently taking steps either to recompute reported assessments entirely or to verify components of fracture risk. While only a few physicians reported exhaustively recomputing fracture risk assessments, a relatively large number reported verifying elements like raw BMD measures or modifying factors. Others described recomputing assessments using tools like the FRAX, but only when practical constraints allowed.
As examples of physicians that regularly recomputed reported fracture risks in their entirety, Participants #3 and #7 both described deriving FRAX assessments using reported BMD. Participant #7, for example, reported scanning BMD reports to find femoral neck T-scores and extracting these to “put in [the] FRAX.” While Participant #7 acknowledged that this practice added substantial time to the reporting process, the participant strongly asserted the need for family physicians to dedicate this additional time in order to ensure reporting accuracy:
“[A fracture risk assessment] does take time to work out; I’m not going to pretend it doesn’t. But… we need to focus on the main priorities in medicine. When osteoporosis kills more people than so many other major diseases like strokes and heart attacks and even breast cancer, really there’s no excuse to say ‘I don’t have the time to do a FRAX or a CAROC’.”
Similarly, Participant #3 described regularly calculating FRAX assessments for patients using the BMD measures contained in reports. This participant stressed that recomputation was particularly important for patients labeled “at risk”:
“If [the report] is at moderate and also if it’s at high risk, I would still want to confirm it’s at high risk by doing a FRAX score…. I basically just go into it and do the calculation when I’m presented with the BMD.”
Like Participant #7, Participant #3 noted that the practice of recomputing assessments added time-consuming steps to the task of report interpretation, explaining: “I have to put the numbers in [to the FRAX calculator], get the calculation, print it and then scan it into the EMR.”
Participant #13 also described recomputation of risk as routine practice:
“I don’t trust the actual summaries [on BMD reports] anymore. I still look at the T-scores. I still use the CAROC method for assessing risk.”
And like Participant #7, this participant emphasized the need for family physicians to assume assessment responsibility despite practical constraints:
“I liken [10-year fracture risk assessment] to the Framingham Risk Assessment that we’ve been doing for years. I don’t think anything of getting the cholesterol back and then punching in the other risk factors… and coming up with an assessment. Assessing 10-year fracture risk is really no different than that. Rather than a cholesterol value, you’ve got a T-score, you know.”
Other physicians, by contrast, did not describe consistently dedicating the time to completely recompute fracture risk assessments. Rather, a substantial number reported verifying components of reported risk assessments (generally modifying risk factors). For example, Participant #8 detailed the steps taken to ensure consistent inclusion of risk factors on reports:
“What I am doing now and I find it very helpful is I copy the 10-year risk factors, I throw them in the chart, and then every time I plot them as well. The first time [a patient gets a BMD] I’ll put all their risk factors, the minor and the major. [I] tick [risk factors] off and see on the next one… if it’s changed.”
Participants #5 and #10 similarly described consulting charts after receiving a BMD report to validate modifying risk factors; as Participant #10 explained:
“I pull the chart. Sometimes I know the patient well and I sort of say, ok, I don’t remember anything like that… so I don’t bother looking. If I don’t know the patient well I pull [the chart] and start looking through it quickly, to make sure that there are no fractures that I can find.”
Participants #14 and #20, by contrast, explained that they would sometimes recalculate assessments, but relatively infrequently. As Participant #20 explained:
“I will plug things into the FRAX… but it takes a little time, so I usually don’t.”
This same participant expressed mixed feelings about assuming responsibility for risk assessment calculations, explaining: “I should [do the FRAX] because I know it’s there. I don’t know, there is no good reason why I’m not using it.”
In total, more than half of the participants indicated that they either recomputed reported risk assessments or took steps to verify risk assessments in some fashion. Those that made it a practice of routinely and exhaustively recomputing risk assessments acknowledged the process to be time-consuming, but most shared a feeling that dedication of the time was a responsibility. Other participants reported less time-consuming methods to verify components of assessments or to gauge overall accuracy. The sense of personal responsibility for ensuring assessment accuracy was less forcefully expressed in this group.
Those physicians that routinely recomputed assessments also tended to assume sole responsibility for arriving at treatment recommendations. Several in the group, in fact, described their ideal reports as devoid of treatment recommendations. Examples follow:
“For me, ideally, one page is fine. It means less scanning. Just the T-scores of the spine and the femoral neck and that’s it because I do my own fracture risk anyway.” (Participant # 7)
“The way the report is [right now], it doesn’t help me to make the decision about treatment… [but] I’m not saying it doesn’t help me. I need the T-score from it.” (Participant #3)
Participant #3, however, acknowledged that treatment recommendations were theoretically useful on reports for “at-risk” patients:
“What you should be sending me is ‘high risk should mean a patient should be treated’… moderate risk, help us out a bit, [say] ‘treat or don’t treat’, you know.”
Participants who described less exhaustive verification of risk assessments rather than overt recomputation, by contrast, expressed more mixed opinions as to the value of treatment recommendations on reports. Participant #19, for example, expressed appreciation for recommendations as follows:
“If they dictate a recommendation to start something based on what they see, it’s great… whether it’s a reminder or because they read so much they know the guidelines better or something. At least then it’s like a checkmark for us, too, so it sort of helps share the care.”
Participant #18, however, who described validating risk assessments, said of treatment recommendations:
“… I see a lot of variations in what I think are really basically the same clinical scenario and I get different recommendations from two different people. So [the recommendations] don’t really make sense to me.”
Similarly, Participant #5, who described consulting charts to verify risk, asked of reading radiologists:
“Don’t recommend anything to me. I’m the one who has to make the judgment as to if a patient should be treated and with what.”
In summary, participants who reported routinely recomputing risk assessments in their entirety also described assuming responsibility for treatment recommendations; many of these same individuals explained that they relied on the BMD report for T-scores alone. Some, however, added that treatment recommendations would theoretically be of use on reports, particularly for patients categorized as ‘at risk’. By contrast, most participants who either verified parts of assessments or recomputed them only on occasion were less dismissive of treatment recommendations. While a few claimed not to value treatment recommendations, others described responsibility for treatment recommendations as shared between them and the reading radiologist. Many in this latter group explained that they would weigh radiologists’ treatment recommendations, even while they sometimes questioned the accuracy of radiologists’ overall assessments.