
Wade’s Healthcare Privacy Advisor for December 11, 2024

AI Governance

  • At the HLTH health innovation conference, a panel of AI experts expressed skepticism about appointing a chief AI officer in health organizations, advocating instead for improving AI literacy across the board. Some providers have established an AI oversight committee and an AI Enablement Center to democratize AI governance and ensure responsible integration of AI technologies. The widespread use of AI in radiology for diagnostic support and the growing adoption of ambient AI scribes have significantly reduced administrative burdens for physicians. The use of AI in administrative tasks, such as drafting patient communications, has shown positive results, with patients reportedly preferring AI-generated responses for their empathetic tone. Nevertheless, it is important to maintain a human element in AI applications, ensuring that AI supports rather than replaces clinical decision-making.
  • The FDA has issued final guidance on regulating changes to AI-enabled medical devices through predetermined change control plans (PCCPs), allowing post-market modifications while maintaining safety and effectiveness. PCCPs, first introduced in 2019, enable performance enhancements by outlining specific, verifiable modifications; each plan includes a description of planned changes, a modification protocol, and an impact assessment. The final guidance, largely consistent with the 2023 draft, adds a section on version control and maintenance. While no adaptive AI-enabled devices have been authorized yet, PCCPs have been approved for devices through various regulatory pathways. Modifications under a PCCP must stay within the device’s intended use; significant changes, such as altering a device’s user base or core functionalities, require new marketing submissions.
  • The rapid evolution of AI in healthcare presents compliance challenges for physicians, with regulations shifting and new laws emerging at both the federal and state levels. A federal rule effective July 2024 requires healthcare providers to comply with its anti-discrimination provisions by May 2025, while various state bills focus on transparency, bias elimination, and limits on AI use. Organizations such as HIMSS and the AMA offer guidance on AI implementation, emphasizing human oversight and ethical considerations to enhance patient care and reduce costs. Legal risks associated with AI, including data privacy, potential bias, and the unlicensed practice of medicine, make legal expertise essential for healthcare providers. Despite these challenges, AI can generate actionable insights and improve healthcare operations, provided it is used responsibly and with appropriate legal guidance.
  • A recent study published in npj Digital Medicine outlines comprehensive guidelines for the responsible integration of AI into healthcare, developed by a team from Harvard Medical School and the Mass General Brigham AI Governance Committee. The study emphasizes nine principles, including fairness, robustness, and accountability, and highlights the need for diverse training datasets and regular equity evaluations to reduce bias. A pilot study and shadow deployment were conducted to assess AI systems, focusing on privacy, security, and usability in clinical workflows. The study also stresses the importance of transparent communication regarding AI systems’ FDA status and a risk-based monitoring approach. Future efforts will expand testing to ensure AI systems remain equitable and effective across diverse healthcare settings.

Medical Judgment

Employment

Data Privacy

HIPAA Penalties

Quantum Computing