
ACP Calls for More Balanced Approach to AI in Health Care


In a new position paper, ACP makes recommendations for the ethical, scientific and clinical use of artificial intelligence

June 21, 2024 (ACP) -- While artificial intelligence (AI) can be transformative, it should augment, not replace, physician decision-making, according to a new position paper from the American College of Physicians (ACP).

The paper, which appears in the Annals of Internal Medicine, outlines a path forward for the ethical, scientific and clinical use of AI in health care.

"AI can play some critical roles in health care as long as there is human oversight," said position paper author Dr. Deepti Pandita, immediate past chair of the ACP Medical Informatics Committee and chief medical information officer at the University of California Irvine Health System.

Specifically, AI can reduce administrative burden by helping with coding and billing tasks, and AI-driven predictive analytics can aid clinical decision-making, Pandita noted. Other uses include AI-powered patient monitoring via wearables and remote tracking devices, and AI can also support medical education through curriculum development and content generation, she said.

"AI can process vast amounts of data quickly, providing timely insights that can improve patient care and diagnostic accuracy," Pandita said. In her own practice, she currently uses an AI-powered chatbot, ambient listening-driven documentation, AI-driven chart summarization, an AI assistant to augment patient queries, and AI-driven triaging and appointment scheduling.

But concerns remain in the use of AI in health care. "[We need to ensure the] safety, privacy and ethical use of these AI tools [so that] there are no issues with bias and data quality and that the data is interpretable by clinicians," Pandita said.

Specifically, the data should not be "black boxed." "Black box AI" refers to systems in which the internal workings and decision-making processes are unclear or hidden from users, according to Pandita.

Certain guardrails need to be in place before AI is adopted more broadly in health care, she noted. Specifically, ACP calls for a coordinated federal strategy involving oversight of AI by governmental and nongovernmental regulatory entities.

"Robust regulations must be in place to ensure the safety and efficacy of AI tools," Pandita recommended. "Strong measures are needed to protect patient data from breaches and misuse."

In addition, efforts must be made to identify and mitigate biases in AI algorithms to ensure equitable care.

"AI systems should be transparent, and their decision-making processes should be explainable to clinicians and patients," she said. "AI tools should be continuously monitored and validated in real-world settings to ensure they maintain their accuracy and reliability."

ACP also recommends that training be provided at all levels of medical education to ensure that physicians have the knowledge and understanding necessary to practice in AI-enabled health care systems.

"AI has already made an impact in the medical community, and ACP is excited about what it means for the future of health care," Dr. Isaac O. Opole, ACP president, said in a statement. "As we incorporate AI into medical practice, it is essential to maintain an awareness of the clinical and ethical implications of AI technology and its impacts on patient well-being."

More Information

The position paper, " is available on the Annals of Internal Medicine website.

