Often lost in discussions about AI’s unfolding impact on healthcare is its uncertain effect on patient-physician relationships. The authors of an opinion piece published July 15 in JAMA take up a key question raised by that uncertainty:
How can patient-physician trust be maintained or even improved?
Introducing their argument, Robert Wachter, MD, of UC San Francisco, author of the 2017 bestseller The Digital Doctor, and colleagues note that healthcare AI applications fall loosely into three groups: those used by physicians (clinical decision support, quality improvement), those used by patients (self-diagnosis, condition management) and those used by the data-management experts who develop the applications.
Each of these categories, they reason, has the potential to enable or undermine three key components of trust:
1. Competency. For physicians, this attribute refers to demonstrated and communicated clinical mastery. For patients, the authors suggest, it is reflected in how well they demonstrate an understanding of their own health status.
“Because much of AI is and will be used to augment the abilities of physicians, there is potential to increase physician competency and enable patient-physician trust,” they write. “On the other hand, trust will be compromised by AI that is inaccurate, biased or reflective of poor-quality practices as well as AI that lacks explainability and inappropriately conflicts with physician judgment and patient autonomy.”
2. Motive. Does the patient believe the physician is acting solely in the patient’s interest? Does the physician feel the patient is self-informing to collaborate on care, not just to show who’s boss?
“Through greater automation of low-value tasks, such as clinical documentation, it is possible that AI will free up physicians to identify patients’ goals, barriers and beliefs, and counsel them about their decisions and choices, thereby increasing trust,” Wachter et al. write. “Conversely, AI could automate more of the physician’s workflow but then fill freed-up time with more patients with clinical issues that are more cognitively or emotionally complex.”
3. Transparency. Patients are reassured when AI tools help them see that clinical decisions rest on evidence and expert consensus. Physicians, in turn, can do their best for patients when patients are forthcoming with all clinically pertinent information.
“[I]f patient data are routinely shared with external entities for AI development, patients may become less transparent about divulging their information to physicians, and physicians may be more reluctant to acknowledge their own uncertainties,” the authors write. “AI that does not explain the source or nature of its recommendations (‘black box’) may also erode trust.”
Healthcare AI is sure to reshape relationships between physicians and patients, the authors conclude, but “it need not automatically erode trust between them. By reaffirming the foundational importance of trust to health outcomes and engaging in deliberate system transformation, the benefits of AI could be realized while strengthening patient-physician relationships.”