Although the use of artificial intelligence (AI) in healthcare raises familiar ethical conundrums, AI also presents multiple new concerns that must be addressed, according to an article published in the AMA Journal of Ethics.1
Daniel Schiff, MS, and Jason Borenstein, PhD, director of the Center for Ethics and Technology, both of the Georgia Institute of Technology in Atlanta, delineated the challenges raised by the use of AI in medical devices, focusing on potential harms and on the issues of informed consent and responsibility.
The authors prefaced their commentary with a brief case study involving the use of the Mazor Robotics Renaissance Guidance System to assist in spinal surgery. They then discussed a number of challenges specific to the use of AI technologies in healthcare settings, using the case to illustrate some of their points. Concerns included algorithmic bias, the lack of transparency and intelligibility of AI systems, effects on patient-clinician relationships, the potential dehumanization of healthcare, and the loss of physician skills over time.
One key issue is informed consent. AI devices are complex, and the presentation of information to patients may be complicated by patient or clinician fears, excessive trust in the technology, or confusion. Even those who are well trained on such devices may not have a complete picture of the layers of complexity involved.
The lack of transparency in an AI system can hinder clinicians’ ability to understand how the system arrives at a decision and how an error might occur. Furthermore, competing visions of an AI-controlled future, whether a utopian one in which humanity’s problems are solved or a dystopian one ending in the extinction of the human race, may color patients’ acceptance of AI medical devices. In a 2016 survey of 12,000 people across 12 countries, only 47% of respondents said they would allow a robot to perform minor, noninvasive surgery, and only 37% would allow a robot to perform major, invasive surgery.2,3
Another crucial issue is responsibility when a medical error occurs. This is the problem of “many hands”: accountability is distributed among coders and designers, medical device companies, healthcare providers, hospitals and healthcare systems, insurance companies, pharmaceutical companies, and medical schools. Each of these parties has a responsibility to take steps to ensure the safe, ethical use of AI systems.
The authors suggested that companies provide detailed information about AI systems to ensure that both clinicians and patients are well informed. For their part, clinicians should explain to patients the specific roles of healthcare professionals and of AI and robotic systems, as well as the risks and benefits of those systems. By doing so, they can improve the informed consent process. The authors also suggested that the healthcare community work collectively toward these goals by encouraging open discussion of new AI technologies and their integration into training and clinical practice.
References
1. Schiff D, Borenstein J. How should clinicians communicate with patients about the role of artificially intelligent team members? AMA J Ethics. 2019;21(2):E138-E145.
2. Müller VC, Bostrom N. Future progress in artificial intelligence: a survey of expert opinion. In: Müller VC, ed. Fundamental Issues of Artificial Intelligence. Berlin: Springer; 2016:553-571.
3. PricewaterhouseCoopers. What doctor? Why AI and robotics will define new health. https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/ai-robotics-new-health.pdf. Updated June 2017. Accessed February 22, 2019.
This article originally appeared on Medical Bag.