Introduction
Artificial intelligence (AI) models for detecting significant prostate cancer (PCa) on magnetic resonance imaging (MRI) scans have reached performance levels comparable to expert radiologists, showcasing their potential for integration into clinical settings [1-4]. However, successful clinical integration of these AI systems requires more than just diagnostic accuracy; trust in AI diagnosis and accountability for misdiagnosis have been identified as key challenges for the future of the field [5, 6].
Trust in AI diagnosis among patients with suspected PCa is understudied. Although existing studies describe how patients and radiologists perceive AI in radiology, those findings might not directly apply to PCa patients [7-9]. Patient perspectives on AI implementation can differ across settings and specific applications [8], warranting an evaluation of the unique PCa patient population, which generally consists of older males. In prostate MRI assessment, the standard of care is diagnostic assessment by a single radiologist. The current landscape of AI models offers varying levels of support for the radiologist, ranging from models designed to provide a second look to those striving for full autonomy [10, 11]. Patient responses to these innovations and their acceptance of different levels of automation remain understudied yet increasingly relevant. Therefore, weaving patient perspectives into the development and deployment narrative is essential. This ensures that the AI tools developed meet technical standards and also resonate with, and garner trust from, the patients they are designed to serve.
The future implementation of AI in PCa detection also raises questions regarding responsibility for misdiagnosis [12]. Because the use of AI for PCa detection on MRI deviates from the standard of care, medical doctors currently bear responsibility for any mistakes the AI makes [5]. This responsibility when deviating from the standard of care may hinder the implementation of novel AI systems. Therefore, investigating patients’ views on responsibility is paramount, as it indicates who could be held accountable when an AI system delivers an incorrect diagnosis.
This study investigates patients’ acceptance of AI for diagnosing PCa on MRI scans and the factors influencing their trust in AI diagnoses. In addition, it examines patients’ views on accountability for misdiagnosis by AI.
Discussion
This study examined patients’ perspectives on using AI for MRI-based PCa diagnoses and the factors shaping their trust in such AI-involved diagnoses. Results indicated a pronounced preference for AI to aid a radiologist’s judgment, with 79% of patients supporting a computer-generated second opinion after a radiologist’s initial diagnosis, and 91% endorsing an AI’s primary opinion subject to a radiologist’s oversight. Despite this, only a minority of 15% favored a standalone AI diagnosis of high certainty without a radiologist’s input. Nevertheless, most patients (52%) were open to fully autonomous AI, provided it surpassed a radiologist in diagnostic accuracy. In instances of misdiagnosis, participants deemed the hospital, radiologist, and AI program developer accountable, in descending order of accountability.
Overall, our results underscore that a vast majority of patients undergoing prostate MRI are receptive to AI involvement (79–91%), either as a secondary or primary evaluator. This aligns with current AI capabilities and available commercial products. Some studies and products utilize AI to read prostate MRIs initially and propose findings for radiologist approval, giving radiologists discretion to accept or discard results [17]. While there is general agreement on some level of AI involvement, acceptance drops to 15% for standalone AI systems with high certainty, absent radiologist involvement. However, if AI were to outperform radiologists, the majority would likely accept it, suggesting a potential shift toward increased automation by AI. Although many radiology algorithms are not yet ready for this advanced level, ongoing research suggests it is feasible for specific algorithms [11]. Educating patients about the benefits AI brings to the current healthcare system (e.g., increased diagnostic performance and productivity gains) might further persuade them toward this approach. Notably, 33% of the participants neither agreed nor disagreed with automated AI reading when the AI outperforms radiologists, indicating a potential target for additional education and information about AI technology.
This study also investigated patients’ views on accountability for AI system performance. The hospital, radiologist, and AI program developer were identified as responsible parties. Currently, medical doctors are responsible for patient care outcomes, facing liability for deviations from standard protocols [5]. As standard care increasingly integrates medical AI, this accountability landscape may shift, with AI developers playing a more influential role in patient outcomes and facing greater legal responsibility [5, 6, 18]. However, each setting and workflow in healthcare has unique specifics that influence how existing laws must be adapted. Future collaborative efforts among hospitals, radiologists, AI developers, and legal experts are essential to tailor these adjustments appropriately.
While existing literature provides insights into patients’ perspectives on AI, its applicability to patients undergoing a prostate MRI might be limited. This limitation arises because views can vary across settings and applications, as highlighted in previous studies [8, 9]. Our study was specifically designed to address this gap by focusing solely on patients undergoing prostate MRI for the diagnosis or staging of PCa. The questionnaire was administered prior to the MRI to replicate the clinical environment and account for the anxiety associated with awaiting a diagnosis. Although another study has investigated trust in AI among patients undergoing prostate MRI, direct comparisons are challenging because that study encompassed all diagnostic and therapeutic interventions for PCa, with 58% of patients visiting for radical prostatectomy [19]. Their findings indicated a slightly reduced preference for AI involvement in the diagnosis (67% AI-assisted diagnosis and 1% AI-alone diagnosis), without a correlation between education and trust in AI. Besides the different patient cohort, the differences might be attributed to the dual focus of their questionnaire on the diagnostic and communication capabilities of AI: patients do not prefer AI communication but do prefer AI involvement in the diagnosis [19].
The role of radiologists in patients’ trust in AI-assisted PCa diagnosis was also highlighted in a focus group study [20]. Its results showed that participants’ trust depended not only on the AI technology but also on the radiologists, whom they trusted to utilize thoroughly tested, beneficial tools. In addition, the preference for a human-centered approach was expressed in terms of the importance of patient-professional relationships, including empathy in communication. While AI can aid in the detection of PCa, a human-centered approach was preferred to provide a balance between diagnostic accuracy and human intuition [20].
When comparing our study to the broader literature on AI in healthcare, we identified relatively strong support for a standalone AI system, particularly if it demonstrated superior performance compared to a radiologist. This contrast might be attributed to the unique perspectives of the patient cohort undergoing MRI for PCa diagnosis or staging [8]. For instance, a 2021 study conducted in a radiology department, which focused on women’s acceptance of AI in mammogram interpretation, found that 46% of participants preferred AI as a secondary reader [14]. In contrast, our results indicate a higher acceptance rate, with 80% of participants favoring a computer program’s second opinion after a radiologist’s initial diagnosis in PCa detection. This discrepancy may suggest gender-based differences in perceptions of AI in healthcare [8]. Furthermore, the growing acceptance of AI could also be influenced by the rapid development and increasing visibility of AI algorithms, such as large language models like ChatGPT, which may have enhanced public awareness and trust in AI capabilities over time. Therefore, conducting targeted studies that directly address specific audiences and scenarios is crucial. Such studies will help to accurately delineate the real boundaries and acceptability of AI algorithms in various medical contexts [14].
This study encountered limitations, primarily its focus on Western European male patients with higher education levels. As such, the generalizability of our results may be limited, given that female patients or other populations may hold different perspectives [8, 20, 21]. Additionally, this study represents a snapshot of a continuum of ongoing research. Future surveys need to evolve alongside the changing landscape of technologies, reflecting various stages of performance and automation. A follow-up study could explore patients’ familiarity with AI, such as ChatGPT, and then stratify their acceptance of AI by that familiarity.
With AI for PCa detection reaching expert-level performance, the vast majority of patients showed a preference for AI involvement alongside a radiologist in diagnosing PCa on MRI. Autonomous AI was accepted by a small majority, on the condition that the AI outperforms a radiologist. Moreover, higher education levels were linked to increased trust in AI. Respondents held the hospital, radiologist, and program developers accountable for misdiagnosis, in descending order of accountability.