Ethical AI and the Future of Healthcare: Combining Academic Theory and Industry Practice to Mitigate Patient Risk and Harm

Prof. Dillon Plummer

Unified Nursing Research, Midwifery & Women’s Health Journal
Author: Prof. Dillon Plummer
Affiliation: Capitol Technology University
Category: Abstract

Unified Citation Journals, 3(1), 5-6; https://doi.org/10.52402/Nursing2017
ISSN 2754-0944

Keywords: Artificial Intelligence, AI Ethics, Healthcare Technology, Hospital Technology

Introduction:
This paper seeks to define a risk taxonomy, establish meaningful controls, and create a prospective harms model for AI risks in healthcare. Currently, there is no known comprehensive delineation of AI risks as they apply in industry and society. Current research, in both academia and industry, tends toward applying exclusively technology-based solutions to these complex problems; this myopic view can be remedied by establishing effective controls informed by a holistic approach to risk management.

Sociotechnical Systems Theory (STS) is an attractive theoretical lens for this issue because it prevents collapsing a multifaceted problem into a one-dimensional solution. Specifically, a multidisciplinary approach spanning the sciences and the humanities reveals a multidimensional view of technology-society interaction, and AI is a prime example. After constructing this risk taxonomy, the paper applies the risk management framework of Lean Six Sigma (LSS) to propose effective mitigating controls for the identified risks. LSS derives controls through data collection and analysis and supports data-driven decision making for industry professionals; it is critical, then, to instantiate the theory of STS alongside this industry practice in order to identify and mitigate real-world risks. Finally, the paper combines the academic theory of sociotechnical systems with the industry practice of Lean Six Sigma to develop a hybrid model that fills a gap in the literature. Drawing upon both theory and practice ensures a robust, well-informed risk model for AI use in healthcare.

Biography:
Dillon Plummer is an Adjunct Professor, PhD candidate, technologist, and consultant. His research interests include AI ethics and the application of AI in the healthcare and medical device industries. In addition to his work in academia, Professor Plummer works in Quality Assurance in the medical device industry, where he advocates for increased technology utilization and automation in the workplace.
