AI Governance
- At the HLTH health innovation conference, a panel of AI experts expressed skepticism about appointing a chief AI officer in health organizations, advocating instead for improving AI literacy across the board. Some providers have established an AI oversight committee and an AI Enablement Center to democratize AI governance and ensure responsible integration of AI technologies. AI is now widely used in radiology for diagnostic support, while the growing adoption of ambient AI scribes has significantly reduced physicians' administrative burdens. AI-drafted patient communications have also shown positive results, with patients reportedly preferring AI-generated responses for their empathetic tone. Nevertheless, maintaining a human element in AI applications remains important, ensuring that AI supports rather than replaces clinical decision-making.
- The FDA has issued final guidance on regulating changes to AI-enabled medical devices through predetermined change control plans (PCCPs), allowing post-market modifications while maintaining safety and effectiveness. PCCPs, first introduced in 2019, enable performance enhancements by outlining specific, verifiable modifications; each plan includes a description of planned changes, a modification protocol, and an impact assessment (a schematic sketch of this structure appears at the end of this section). The guidance, consistent with the 2023 draft, adds a section on version control and maintenance. While no adaptive AI-enabled devices have been authorized yet, PCCPs have been approved for devices across various regulatory pathways. Modifications under a PCCP must stay within the device's intended use, and significant changes, such as altering a device's user base or core functionalities, require new marketing submissions.
- The rapid evolution of AI in healthcare presents challenges for physicians and legal compliance, with shifting regulations and emerging laws at both federal and state levels. A federal rule effective July 2024 requires healthcare providers to comply with anti-discrimination regulations by May 2025, while various state bills focus on transparency, bias elimination, and AI limitations. Organizations like HIMSS and the AMA provide guidance on AI implementation, emphasizing human oversight and ethical considerations to enhance patient care and reduce costs. Legal risks associated with AI, such as data privacy, potential bias, and the unlicensed practice of medicine, necessitate legal expertise for healthcare providers. Despite these challenges, AI has the potential to generate actionable insights and improve healthcare operations, provided it is used responsibly and with appropriate legal guidance.
- A recent study published in npj Digital Medicine outlines comprehensive guidelines for the responsible integration of AI into healthcare, developed by a team from Harvard Medical School and the Mass General Brigham AI Governance Committee. The study emphasizes nine principles, including fairness, robustness, and accountability, and highlights the need for diverse training datasets and regular equity evaluations to reduce bias. A pilot study and shadow deployment were conducted to assess AI systems, focusing on privacy, security, and usability in clinical workflows. The study also stresses the importance of transparent communication regarding AI systems’ FDA status and a risk-based monitoring approach. Future efforts will expand testing to ensure AI systems remain equitable and effective across diverse healthcare settings.
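The shadow deployment described in the study above is a pattern worth making concrete: the model runs on real cases and its outputs are logged for later accuracy and equity review, but clinicians never see them, so care is unaffected. Below is a minimal Python sketch of that pattern; the `model.predict` API, field names, and logging setup are illustrative assumptions, not the study's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow_ai")

@dataclass
class ShadowResult:
    patient_id: str          # de-identified study ID, not PHI
    model_output: str
    clinician_decision: str
    timestamp: str

def run_in_shadow_mode(model, case: dict, clinician_decision: str) -> str:
    """Run the AI model alongside routine care: its output is logged
    for later accuracy/equity review but never shown to the clinician,
    so it cannot influence the decision being recorded."""
    model_output = model.predict(case)   # hypothetical model API
    result = ShadowResult(
        patient_id=case["study_id"],
        model_output=model_output,
        clinician_decision=clinician_decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    log.info(json.dumps(result.__dict__))  # audit trail for monitoring
    return clinician_decision              # care pathway is unchanged
```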
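Returning to the PCCP item above: a PCCP is essentially a structured commitment with three named components, which can be visualized as a simple schema. This sketch is purely illustrative; the field names and example values are assumptions drawn from the component descriptions, not an FDA data format.

```python
from dataclasses import dataclass, field

@dataclass
class PCCP:
    """Illustrative schema for the three components the guidance names."""
    description_of_modifications: list[str]  # specific, verifiable planned changes
    modification_protocol: str               # how each change is developed, verified, validated
    impact_assessment: str                   # benefits and risks of the changes, and mitigations
    version_history: list[str] = field(default_factory=list)  # version control and maintenance

plan = PCCP(
    description_of_modifications=["Retrain lesion classifier on new site data"],
    modification_protocol="Hold-out evaluation must meet or exceed cleared sensitivity/specificity",
    impact_assessment="No change to intended use or user base; see risk analysis",
)
```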
Medical Judgment
- A recent study at Beth Israel Deaconess Medical Center found that generative AI tools outperformed physicians in diagnosing patients by nearly 20%, achieving around 90% accuracy. This study challenges the “fundamental theorem of informatics,” which posits that human-computer collaboration should surpass either working alone. Despite the potential of genAI in healthcare, there are concerns about biases in AI models, their impact on clinician skills, and patient data privacy. As AI technology advances, the industry must address these issues and ensure that clinicians are adequately trained to use and manage these tools effectively.
- Use of artificial intelligence (AI) in health care quality measurement can enhance the precision and efficiency of performance assessment. However, it also raises concerns about biases that could perpetuate disparities and affect vulnerable individuals. Recent national discussions, such as the US Centers for Medicare & Medicaid Services’ (CMS) information session titled “AI in Quality Measurement” and the Biden-Harris Administration’s Executive Order on AI, have emphasized the need for ethical, transparent, and equitable AI applications in health care quality measurement. Addressing bias is crucial to ensuring that AI tools do not exacerbate existing inequalities, but instead contribute to fair quality assessment and high-quality outcomes.
- Artificial intelligence, particularly large language models like ChatGPT, is increasingly used in healthcare for tasks such as answering patient questions and predicting diseases. A study by Ben-Gurion University researchers evaluated the performance of these models in understanding medical information, revealing that most models, even those trained on medical data, performed poorly, akin to random guessing. ChatGPT-4, however, showed better performance with an average accuracy of about 60%, though still not fully satisfactory. The research involved generating over 800,000 questions to assess model capabilities in distinguishing between medical concepts. The findings emphasize the need for caution in using AI for medical purposes and highlight the importance of developing models with a broader understanding of clinical language.
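The Ben-Gurion evaluation's core idea, generating many templated questions that test whether a model can tell closely related medical concepts apart and comparing accuracy against the random-guess baseline, is easy to sketch. The toy concept bank, question template, and scoring loop below are illustrative assumptions; the actual study used large medical vocabularies and different question formats.

```python
import random

# Toy concept bank; the real study drew on large medical vocabularies.
CONCEPTS = {
    "hypertension": "persistently elevated arterial blood pressure",
    "hypotension": "abnormally low arterial blood pressure",
    "tachycardia": "resting heart rate above the normal range",
    "bradycardia": "resting heart rate below the normal range",
}

def make_question(rng: random.Random, n_choices: int = 4):
    """Build one multiple-choice item: a definition plus shuffled term options."""
    answer = rng.choice(list(CONCEPTS))
    distractors = rng.sample([c for c in CONCEPTS if c != answer], n_choices - 1)
    options = distractors + [answer]
    rng.shuffle(options)
    return CONCEPTS[answer], options, answer

def evaluate(model_answer_fn, n_items: int = 1000, seed: int = 0) -> float:
    """Accuracy over generated items; random guessing scores ~1/n_choices."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_items):
        definition, options, answer = make_question(rng)
        correct += model_answer_fn(definition, options) == answer
    return correct / n_items

# Baseline: a guesser that ignores the question scores ~0.25 with 4 options.
print(evaluate(lambda definition, options: random.choice(options)))
```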
Employment
- Job applicants often face the challenge of applying for ghost jobs: non-existent positions posted by companies to build talent pools or create an illusion of growth. A 2024 survey found that 81% of recruiters have posted ghost jobs, and 30% of companies had done so in 2024 alone. This practice raises ethical and data privacy concerns, particularly under the EU’s GDPR and California’s CCPA, which require transparency and proper notice of the purposes of data collection. Ghost job postings may violate these regulations, exposing companies to fines and reputational damage. Applicants can protect themselves by learning to recognize the signs of ghost jobs and understanding their data privacy rights.
- Companies have increasingly adopted AI tools for hiring and employment decisions, raising concerns about bias and mistakes. A California Privacy Protection Agency meeting debated proposed rules to regulate AI in employment, emphasizing worker rights and transparency. Various U.S. states have enacted laws to manage AI in hiring, such as requiring consent for AI use in interviews or mandating bias audits. High-profile cases illustrate the potential for AI to discriminate, echoing past issues with automated credit decisions. Despite the potential benefits, there is a call for transparency and governance in AI use, as candidates may avoid opportunities when AI is involved without clear policies.
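One concrete check such bias audits often include is the EEOC's four-fifths rule: compare each group's selection rate to the most-selected group's rate and flag ratios under 0.8. The sketch below is a minimal illustration of that single metric, not a full audit; the group names and counts are made up.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio of each group's selection rate to the highest rate;
    ratios under 0.8 are a conventional red flag for adverse impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

audit = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
print(audit)  # group_b impact ratio = 0.625 -> below the 0.8 threshold
```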
Data Privacy
- According to a new survey, data privacy remains a top challenge for one-third (33%) of healthcare professionals across the seven major markets when integrating AI into clinical practice.
- GoodRx, a telemedicine platform provider, has agreed to settle a class action lawsuit for $25 million due to its use of tracking technologies that disclosed website visitor data to third parties without user consent. The Federal Trade Commission (FTC) found that GoodRx violated the FTC Act and the Health Breach Notification Rule by sharing sensitive user data without consent, leading to a separate $1.5 million settlement with the FTC. The consolidated lawsuit, Jane Doe et al. v. GoodRx Holdings, Inc., et al., includes claims of privacy invasion and violations of various California and New York laws, with Meta, Google, and Criteo also named as co-defendants. The Court is set to rule on the $25 million settlement, which, if approved, will allow affected individuals to file claims for compensation from the settlement fund. The plaintiffs’ attorneys are seeking $8.33 million, or one-third of the settlement, for fees and costs.
- The Federal Trade Commission (FTC) has taken action against Gravy Analytics Inc. and its subsidiary Venntel Inc. for allegedly violating the FTC Act by collecting, using, and selling sensitive geolocation data without user consent. These companies are accused of unlawfully tracking and selling data related to visits to sensitive locations such as healthcare facilities, places of worship, and schools, potentially exposing consumers to privacy risks and discrimination. The FTC claims they collected over 17 billion signals daily from about a billion mobile devices, using precise geolocation data tied to unique Mobile Advertising IDs (MAIDs), which could identify individuals. The proposed FTC order requires the companies to delete all historical location data unless de-identified, notify third parties to do the same, and establish a program to prevent unauthorized use of sensitive location data. This settlement aims to protect consumer privacy and prevent misuse of sensitive geolocation information.
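The order's "unless de-identified" carve-out is worth grounding: de-identifying location pings generally means severing the persistent device identifier and coarsening the signal until it no longer singles out a person. The sketch below shows one common approach (dropping the MAID and snapping coordinates to a coarse grid); the grid size, fields, and sample record are illustrative assumptions, and whether any given transform satisfies the order's standard is a legal question, not a coding one.

```python
def deidentify_record(record: dict, grid_degrees: float = 0.1) -> dict:
    """One illustrative de-identification pass: drop the Mobile Advertising
    ID entirely and snap coordinates to a coarse grid (~11 km at 0.1 deg)
    so pings can no longer be tied to a single device or exact address."""
    return {
        # MAID removed, not hashed: a hashed MAID is still a persistent ID.
        "lat": round(round(record["lat"] / grid_degrees) * grid_degrees, 4),
        "lon": round(round(record["lon"] / grid_degrees) * grid_degrees, 4),
        "day": record["timestamp"][:10],  # keep the date, drop time of day
    }

ping = {"maid": "38400000-8cf0-11bd-b23e-10b96e40000d",
        "lat": 39.7392, "lon": -104.9903,
        "timestamp": "2024-12-03T14:22:05Z"}
print(deidentify_record(ping))
# {'lat': 39.7, 'lon': -105.0, 'day': '2024-12-03'}
```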
HIPAA Penalties
- The U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR) imposed a $548,265 civil monetary penalty on Children’s Hospital Colorado for violations of the HIPAA Privacy and Security Rules following breaches reported in 2017 and 2020 due to phishing attacks. The breaches compromised the protected health information (PHI) of 3,370 and 10,840 individuals, respectively, and were partly due to disabled multi-factor authentication and unauthorized email access by third parties. OCR found additional violations for failure to train staff on HIPAA Privacy Rules and conduct a proper risk analysis of electronic PHI (ePHI). In June 2024, Children’s Hospital Colorado waived its right to a hearing, leading OCR to finalize the penalty. OCR recommends that covered entities implement robust cybersecurity measures, including multi-factor authentication, encryption, regular risk analyses, and workforce training to prevent such breaches.
- The U.S. Department of Health and Human Services Office for Civil Rights (OCR) fined Gulf Coast Pain Consultants, LLC, $1.19 million for multiple HIPAA Security Rule violations, including failing to terminate a former contractor’s access to systems containing electronic protected health information (ePHI). The contractor, who had ceased providing services in August 2018, accessed the ePHI of approximately 34,310 individuals without authorization and generated around 6,500 false Medicare claims. Gulf Coast Pain Consultants did not conduct a HIPAA-compliant risk analysis until September 30, 2022, and did not implement the required policies and procedures for access termination and activity review until April 2020. The penalty marks OCR’s 14th HIPAA enforcement action of 2024 and underscores the importance of proactive cybersecurity measures. Despite presenting evidence of mitigating factors, Gulf Coast Pain Consultants could not reach an informal settlement with OCR.
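The control Gulf Coast lacked, terminating access when an engagement ends and reviewing activity afterward, is straightforward to automate as a periodic review. The sketch below cross-references an HR feed of end dates against accounts that can still reach ePHI systems; the data structures and account names are hypothetical.

```python
from datetime import date

# Hypothetical inputs: an HR feed of engagement end dates and the set of
# accounts that can still reach systems containing ePHI.
hr_end_dates = {
    "jsmith": date(2018, 8, 31),  # contractor ended, per HR
    "mlopez": None,               # active employee
}
active_ephi_accounts = {"jsmith", "mlopez"}

def overdue_terminations(today: date) -> list[str]:
    """Flag accounts still active after the user's documented end date --
    the control whose absence OCR cited in this enforcement action."""
    return sorted(
        user for user in active_ephi_accounts
        if (end := hr_end_dates.get(user)) is not None and end < today
    )

for user in overdue_terminations(date.today()):
    print(f"ALERT: disable {user}; access persists past documented end date")
```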
Quantum Computing
- The convergence of quantum technology and artificial intelligence in precision medicine is set to revolutionize healthcare by enabling highly personalized treatments and advancing drug design, medical imaging, and real-time health monitoring. Second-generation quantum technologies, which integrate quantum and classical computing, offer significant advantages in computing, sensing, and networking, with applications ranging from drug discovery to secure patient data sharing. However, these advancements come with regulatory challenges, as existing frameworks may not adequately address the unique risks associated with quantum devices, necessitating the development of new evaluation protocols, risk management frameworks, and clinical trial guidelines. Policymakers are encouraged to promote quantum literacy, anticipate societal impacts, and implement adaptive regulations to balance innovation with public safety. Ultimately, global collaboration and harmonized standards are essential to harnessing the potential of quantum technology in healthcare responsibly.