Health Law Highlights

Wade’s Healthcare Privacy Advisor for January 15, 2025

Confidentiality & Cybersecurity

  • The U.S. Court of Appeals for the Sixth Circuit has struck down the FCC’s net neutrality regulations, leaving Internet Service Providers (ISPs) free to monitor, prioritize, and control Internet traffic. The ruling has healthcare privacy implications: ISPs can now track and sell patient data gleaned from telehealth sessions, mental health searches, and digital health app usage to third parties. Healthcare providers must implement stronger privacy measures, including encrypted platforms, VPNs, and HIPAA-compliant systems, to protect patient information. FCC Chair Jessica Rosenworcel has called for congressional action, while healthcare professionals are urged to advocate for patient privacy through policy engagement and partnerships with privacy organizations. The decision particularly affects rural patients who rely on telehealth services and raises concerns about potential discrimination based on health-related Internet activity.
  • A recent report reveals that 73% of healthcare organizations still run legacy systems, creating security vulnerabilities that cybercriminals can exploit. Healthcare IT teams must build security into applications from the start, ensure flexibility across platforms, and implement vendor management strategies to protect data. Modernization also requires attention to usability, since cumbersome controls invite users to circumvent them, while features like Pure Storage’s SafeMode Snapshots offer an added layer of protection against data breaches. Organizations that adopt these strategies can better protect patient data, maintain productivity, and preserve patient trust.
  • The U.S. Department of Health and Human Services Office for Civil Rights has proposed major changes to the HIPAA Security Rule that would require healthcare organizations to implement significantly stricter cybersecurity measures. The changes include mandatory encryption of protected health information, multi-factor authentication, vulnerability scanning every six months, annual penetration testing, and 24-hour notification requirements for certain security events. HHS estimates first-year compliance costs at $9 billion, with subsequent annual costs of $6 billion through year five. The proposal comes in response to a 950% increase since 2018 in the number of individuals affected by healthcare data breaches, though its fate remains uncertain as it transitions between administrations. The 60-day public comment period ends March 7, 2025, and compliance would be required 180 days after the final rule takes effect.
  • Healthcare data breaches affected 184,111,469 records in 2024, representing 53% of the U.S. population, with 703 large breaches reported to OCR. The largest breach occurred at Change Healthcare, affecting 100 million individuals through a ransomware attack that caused widespread disruption to healthcare services and medication access across the U.S. healthcare system. The year saw 13 breaches involving more than 1 million healthcare records each, with 11 caused by hacking incidents and 8 involving business associates of HIPAA-covered entities. In response to these breaches, the HHS Office for Civil Rights published cybersecurity performance goals and proposed updates to the HIPAA Security Rule to mandate stronger security measures, including multifactor authentication and encryption requirements. The fate of these proposed security updates now rests with the incoming Trump administration.
  • SOC 2 audits give healthcare organizations a framework for managing data security, privacy, and operational integrity. The audit process verifies protection of Protected Health Information (PHI) and Personally Identifiable Information (PII) through controls that safeguard against unauthorized access and breaches. While not legally mandated, SOC 2 complements HIPAA, HITECH, and GDPR requirements by addressing data encryption, access control, and risk management. The framework is built on five Trust Services Criteria – Security, Availability, Processing Integrity, Confidentiality, and Privacy – and helps organizations manage third-party vendor risk through attestation requirements. Healthcare providers can prepare for SOC 2 audits through gap analysis, control implementation, staff training, and partnership with expert consultants.

Innovation

  • OpenAI CEO Sam Altman claims his company knows how to build AGI and predicts AI agents will join the workforce in 2025. OpenAI defines AGI as systems that outperform humans at most economically valuable work; its agreement with Microsoft reportedly sets a more concrete threshold of $100 billion in profits. Technology rights for AGI are excluded from OpenAI’s IP contracts with companies like Microsoft, underscoring its strategic importance. Critics, including Gary Marcus, have dismissed Altman’s claims as marketing hype. Altman acknowledges the potential economic disruption from AGI and suggests universal basic income as one remedy for workforce displacement.
  • A New York University study published in Nature Medicine reveals that introducing just 0.001% medical misinformation into an LLM’s training data can compromise the model’s accuracy, resulting in more than 7% harmful responses. The researchers tested this by injecting false information into “The Pile” dataset across 60 medical topics, finding that the poisoned models not only produced misinformation about the targeted topics but became generally unreliable on medical questions. The study estimates that for $100, an attacker could generate 40,000 articles to poison a large model like LLaMA 2, with the misinformation potentially hidden in invisible webpage text. While the researchers developed an algorithm to flag potentially false medical information, the study highlights ongoing challenges with both intentional poisoning and pre-existing medical misinformation in training data, including outdated information in curated databases like PubMed.
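The asymmetry in the poisoning study’s numbers is easy to skim past; a quick back-of-envelope calculation, using only the figures quoted in the summary above, makes it concrete:

```python
# Figures reported by the NYU study, as quoted in the summary above.
POISON_FRACTION = 0.00001   # 0.001% of training data poisoned
HARMFUL_RATE = 0.07         # more than 7% harmful responses after poisoning
ARTICLES = 40_000           # articles an attacker could generate...
COST_USD = 100              # ...for roughly $100

cost_per_article = COST_USD / ARTICLES          # dollars per poisoned article
amplification = HARMFUL_RATE / POISON_FRACTION  # harm rate vs. poison share

print(f"cost per poisoned article: ${cost_per_article:.4f}")
print(f"harm amplification factor: {amplification:,.0f}x")
```

In other words, a corruption of one part in 100,000 of the training data produced a harmful-response rate several thousand times larger than the poisoned share, at a fraction of a cent per article.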

Legislation

  • The Texas Legislature is considering the Texas Responsible AI Governance Act, which aims to regulate high-risk AI systems that make consequential decisions affecting areas like healthcare, housing, and employment. The Act establishes strict requirements for developers and deployers, including mandatory risk assessments, consumer disclosures, and human oversight of AI decisions. The legislation prohibits specific AI uses such as social scoring, unauthorized biometric data collection, and emotional inference without consent, while giving consumers rights to transparency and legal action. The Texas Attorney General would have enforcement authority with fines up to $100,000 per violation, and businesses operating in Texas would need to ensure compliance through impact assessments and updated procedures.
  • California has enacted a law prohibiting insurance companies from using AI alone to deny health insurance claims. The legislation, Senate Bill 1120 (the Physicians Make Decisions Act), was signed by Governor Gavin Newsom in September 2024 in response to data showing that roughly 26% of California insurance claims were being denied. The law requires human judgment in coverage decisions, sets strict deadlines for claim reviews (5 business days for standard cases, 72 hours for urgent cases, and 30 days for retrospective reviews), and gives the California Department of Managed Health Care enforcement authority, including the power to issue fines. The initiative has gained national attention, with 19 states considering similar legislation and congressional offices exploring federal regulations.

Regulation

  • The FDA has released new draft guidance for AI-enabled medical devices, building on its December 2023 guidance on predetermined change control plans (PCCPs). The draft, published in the Federal Register on January 7, 2025, provides recommendations spanning the total product lifecycle of AI-enabled devices, including design, development, maintenance, and documentation requirements. The FDA has authorized more than 1,000 AI-enabled devices and will accept public comments on the draft through April 7, 2025, with particular interest in AI lifecycle alignment, generative AI recommendations, performance monitoring, and user information requirements. The agency will host a webinar on February 18 to discuss the regulatory proposal and another on January 14 regarding the final PCCP guidance, while emphasizing the importance of addressing transparency and bias in AI medical devices. The guidance aims to ensure that performance across race, ethnicity, disease severity, gender, age, and geography is considered throughout device development and monitoring.