
Wade’s Healthcare Privacy Advisor for November 27, 2024

Cybersecurity

  • The Office of Inspector General (OIG) has once again found the U.S. Department of Health and Human Services’ (HHS) information security program to be ineffective, as detailed in its report. The OIG’s annual audit, required by the Federal Information Security Modernization Act of 2014 (FISMA), found that HHS failed to reach an effective maturity level in any of the five functional areas of the NIST Cybersecurity Framework: Identify, Protect, Detect, Respond, and Recover. The OIG made six recommendations to improve HHS’s information security, including updating system inventories and implementing a comprehensive cybersecurity risk management strategy. HHS concurred with only five, disagreeing on the need to fully implement a new cybersecurity risk management strategy. The audit exemplifies the ongoing challenges federal agencies face in meeting FISMA requirements, with HHS struggling to address security flaws, particularly in its cloud systems.
  • The ransomware landscape has become more distributed, with a rise in small-scale groups and a decline in activity from previously dominant groups like LockBit and ALPHV. Poorly secured and outdated VPNs remain a primary initial access vector for ransomware groups, underscoring the importance of robust controls such as multi-factor authentication (MFA); a minimal sketch of the algorithm behind most MFA apps appears after this list.
  • Telehealth programs are increasingly targeted by cybercriminals because of their rapid expansion and critical role in patient care. Healthcare organizations can protect sensitive health data by conducting risk assessments of telehealth providers, implementing isolated network access points, and continuously monitoring security controls. Hospitals and health systems should integrate telehealth provider security into their overall security strategy, ensuring compliance with requirements and frameworks such as HIPAA and HITRUST.
  • The IEEE Standards Association has published IEEE 2933, a new healthcare cybersecurity standard addressing vulnerabilities in connected medical devices. Developed with input from global experts, IEEE 2933 focuses on six essential elements of medical device security: trust, identity, privacy, protection, safety, and security (TIPPSS). By adopting IEEE 2933, the healthcare industry can take a proactive stance in safeguarding patient safety and system integrity.
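
For readers who want the MFA point made concrete, the sketch below implements, in plain Python standard-library code, the RFC 6238 time-based one-time password (TOTP) algorithm that most authenticator apps use. The Base32 secret is a common documentation example; nothing here is drawn from the reports summarized above.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # current 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Illustrative secret only; a server compares this value against the
    # code the user submits, usually tolerating one step of clock drift.
    print(totp("JBSWY3DPEHPK3PXP"))
```

In production, organizations would rely on a vetted identity provider or library rather than hand-rolled code; the sketch only shows why a stolen VPN password alone cannot reproduce the time-based second factor.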

Data Privacy

  • Elon Musk has drawn criticism for encouraging users of X, the platform he owns, to upload medical images to its AI tool, Grok, raising both privacy and accuracy concerns. Musk claims Grok is in its early stages but already quite accurate, though results have been mixed, with some users reporting accurate diagnoses and others encountering errors. Critics highlight the absence of HIPAA protections on X, which is not a covered entity, and the ethical problems with sharing sensitive health data on social media. The New York Times and experts such as Bradley Malin emphasize the risks involved, including potential misuse of data and erosion of public trust. The debate underscores the need for regulation of AI-driven healthcare to prevent misuse and ensure safety.
  • The U.S. Department of Health and Human Services, Office for Civil Rights (OCR) has announced a new enforcement initiative, the Risk Analysis Initiative, aimed at ensuring compliance with the HIPAA Security Rule’s risk analysis provision. The initiative is part of OCR’s broader efforts, including its seventh enforcement action related to ransomware, to address deficiencies in how organizations assess risks to electronic protected health information (ePHI). With a reported 264% increase in large breaches involving ransomware since 2018, the initiative underscores the need for healthcare entities to evaluate their cybersecurity measures and resource allocation. OCR’s focus is on improving the identification and remediation of threats to ePHI, a core HIPAA compliance obligation; a simple sketch of what a scored risk register can look like follows this list. The initiative follows OCR’s previous enforcement strategy, the Right of Access Initiative, signaling a continued rigorous approach to enforcement.
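
At its core, the risk analysis OCR is looking for is an inventory of threats to ePHI, each scored for likelihood and impact and prioritized for remediation. The sketch below shows one common way to structure such a register; the threat names, the 1-to-5 scales, and the likelihood-times-impact formula are illustrative assumptions, not anything prescribed by OCR.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a hypothetical ePHI risk register."""
    threat: str
    likelihood: int  # 1 (rare) to 5 (almost certain); illustrative scale
    impact: int      # 1 (negligible) to 5 (severe); illustrative scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; OCR does not mandate a formula.
        return self.likelihood * self.impact

register = [
    Risk("Ransomware via phishing email", likelihood=4, impact=5),
    Risk("Lost unencrypted laptop", likelihood=3, impact=4),
    Risk("Stale access for departed staff", likelihood=3, impact=3),
]

# Remediate the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.threat}")
```

The Security Rule requires an accurate and thorough assessment of risks to ePHI; it does not mandate any particular scoring arithmetic, so sorting by score is just one defensible way to decide what to remediate first.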

Artificial Intelligence

  • A randomized clinical trial published in JAMA Network Open found that access to a large language model (LLM) did not significantly improve physicians’ diagnostic reasoning compared with conventional resources. The study involved 50 physicians and showed that while the LLM alone outperformed both physician groups, pairing physicians with the LLM did not improve their diagnostic reasoning. The trial highlights the need for better-designed human-computer interaction to integrate LLMs into clinical practice effectively. Despite the LLM’s standalone performance, the study suggests that simply providing physicians access to LLMs is insufficient to improve diagnostic reasoning in practice.
  • Public Citizen experts are urging the U.S. Food and Drug Administration (FDA) to address the risks posed by AI in healthcare, which could worsen existing issues and threaten patient safety. Dr. Robert Steinbrook, Health Research Group Director, testified before the FDA’s Digital Health Advisory Committee, emphasizing the need for stringent regulations to prevent harm from rapidly developed AI devices. A report by Eagan Kemp highlights the growing use of AI in administrative tasks, medical practices, and mental health support, warning that without safeguards, AI could lead to inequitable care and exacerbate disparities. Public Citizen has recommended regulatory measures to the Department of Health and Human Services, expressing concern that the incoming Trump administration may prioritize innovation over regulation, potentially compromising patient safety.
  • AI tools, particularly generative AI (GenAI), can strengthen healthcare security by detecting health threats and unauthorized access to patient data, but they must be accurate and secure to be effective; a simple, hypothetical version of that kind of access monitoring is sketched after this list. The article warns that AI can also be exploited by cybercriminals to harm healthcare systems through social engineering and other malicious activity. It emphasizes the need for healthcare organizations to establish robust AI policies and risk management strategies to mitigate these threats. Finally, it advises thorough testing of AI tools to ensure they do not compromise patient data or violate legal requirements.
  • Microsoft and major institutions like Yale, Harvard, and the University of Michigan are advancing AI initiatives, yet the technology’s adoption may be outpacing regulatory and oversight capabilities. The FDA currently approves AI tools as devices, which undergo a different and sometimes less rigorous approval process than drugs, raising concerns about their real-world efficacy and safety. The article emphasizes the need for transparency, stronger regulations, and a public database to track AI performance and ensure accountability. It also calls for increased resources for the FDA and suggests that patients and healthcare professionals should stay informed and engaged to promote responsible AI use in medicine.
  • Academic Medical Centers (AMCs) are uniquely positioned to accelerate the translation of research into clinical care, particularly through the use of artificial intelligence (AI). AMCs can leverage AI to improve patient care, especially in resource-constrained settings, and create efficiencies for providers and research organizations. Despite challenges, the potential rewards of AI implementation are significant.
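
To make the monitoring idea in the GenAI item above concrete, here is a deliberately simple, hypothetical sketch of flagging anomalous access to patient records. The log entries, field layout, quiet-hours window, and volume threshold are all invented for illustration; production systems, GenAI-assisted or not, would learn per-user baselines rather than hard-code thresholds.

```python
from collections import Counter
from datetime import datetime

# Hypothetical access-log records: (user, patient_id, timestamp).
access_log = [
    ("dr_lee",   "P-1001", datetime(2024, 11, 26, 10, 15)),
    ("dr_lee",   "P-1002", datetime(2024, 11, 26, 10, 40)),
    ("intruder", "P-1001", datetime(2024, 11, 27, 2, 5)),
    ("intruder", "P-1002", datetime(2024, 11, 27, 2, 6)),
    ("intruder", "P-1003", datetime(2024, 11, 27, 2, 7)),
]

AFTER_HOURS = range(0, 6)   # 00:00 to 05:59, an assumed quiet window
BULK_THRESHOLD = 3          # assumed per-user record-count alert level

def flag_anomalies(log):
    """Yield (user, reason) pairs for two simple heuristics:
    after-hours access and unusually many records per user."""
    counts = Counter(user for user, _, _ in log)
    for user, patient, ts in log:
        if ts.hour in AFTER_HOURS:
            yield user, f"after-hours access to {patient} at {ts:%H:%M}"
    for user, n in counts.items():
        if n >= BULK_THRESHOLD:
            yield user, f"accessed {n} records in the log window"

for user, reason in flag_anomalies(access_log):
    print(f"ALERT {user}: {reason}")
```

In a real deployment, alerts like these would feed an audit and incident-response workflow of the kind HIPAA’s audit controls standard contemplates, rather than simply printing to a console.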