Categories
Health Law Highlights

Wade’s Healthcare Privacy Advisor for January 8, 2025

AI Legislation

  • The Texas Legislature is considering the Texas Responsible AI Governance Act, which would establish regulations for high-risk AI systems that make consequential decisions affecting areas like employment, education, and government services. The Act requires developers and deployers to protect consumers from algorithmic discrimination, maintain oversight of AI systems, and provide detailed disclosures about AI interactions. The legislation prohibits specific AI uses including social scoring, unauthorized biometric data collection, and emotional inference without consent, while granting consumers rights to transparency and legal action. The Texas Attorney General would have enforcement authority with fines up to $100,000 per violation, making this one of the most comprehensive state-level AI regulations proposed in the U.S.

AI Implementation

  • Healthcare entities face increasing scrutiny over AI usage in patient data management, with three key areas of concern emerging: data scraping/sharing, utilization management, and discriminatory bias. Recent court cases have highlighted the importance of data anonymization in determining the validity of privacy claims, with courts generally favoring defendants when patient data is properly anonymized. Federal agencies and states are implementing regulations to limit AI’s role in medical necessity determinations, with CMS prohibiting AI-only decisions and states like California passing laws requiring specific disclosures for GenAI use in patient communications. While major litigation regarding AI discrimination hasn’t occurred yet, state attorneys general are actively investigating potential racial bias in healthcare algorithms. To mitigate risks, healthcare entities should conduct regular AI risk assessments, implement robust PHI de-identification procedures, and utilize appropriate data agreements and patient waivers.
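The de-identification step recommended above can be sketched in a few lines. This is only an illustrative outline of the HIPAA Safe Harbor approach (stripping direct identifiers and generalizing quasi-identifiers such as dates, ZIP codes, and ages over 89); the record schema and field names are hypothetical, and a real program would cover all 18 Safe Harbor identifier categories.

```python
# Illustrative sketch of Safe Harbor-style de-identification.
# Field names are hypothetical; this is not a compliance tool.

DIRECT_IDENTIFIERS = {
    "name", "ssn", "mrn", "phone", "email", "address", "policy_number",
}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed,
    birth dates reduced to year, ZIP codes truncated to three digits,
    and ages over 89 aggregated into a single '90+' bucket."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in clean:          # keep year only
        clean["birth_year"] = clean.pop("birth_date")[:4]
    if "zip" in clean:                 # first three digits only
        clean["zip3"] = clean.pop("zip")[:3]
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"           # Safe Harbor age bucketing
    return clean

record = {
    "name": "Jane Doe", "ssn": "123-45-6789", "birth_date": "1932-04-17",
    "zip": "75390", "age": 92, "diagnosis": "I10",
}
print(deidentify(record))
```

Running a pass like this before any secondary data use, alongside business associate agreements and patient waivers, is the kind of control courts have looked for when weighing privacy claims.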
  • Testing by medical professionals has found AI systems like ChatGPT giving dangerous medical advice in up to 20% of cases. While some healthcare providers are using AI tools for tasks like transcription and note-taking, even these applications have shown problems with hallucinated content and bias, such as OpenAI’s Whisper inserting false information into patient records. Medical experts warn that while AI technology shows promise, its current state risks introducing dangerous “AI slop” into patient care, requiring thorough verification that may ultimately negate any time-saving benefits.
  • Agentic AI is a new paradigm in which AI systems make independent decisions and take actions without human intervention. The technology shows potential applications in healthcare through patient monitoring, manufacturing through production optimization, and transportation through autonomous vehicles. Major concerns include job displacement, data privacy, control issues, and safety risks in high-stakes environments.
  • A bipartisan U.S. House task force released a report on December 17 outlining AI policy recommendations for healthcare. The report identifies AI’s potential to improve healthcare efficiency through data analysis and automation while highlighting interoperability challenges between systems. It raises concerns about patient data privacy, cybersecurity risks, and the need for healthcare workforce AI training. The report also addresses unresolved issues regarding liability rules for AI-related medical errors and unclear reimbursement policies for AI implementation in healthcare systems. The task force emphasizes that payment structures and accountability frameworks for healthcare AI remain undefined, requiring further development.

Cybersecurity

  • The U.S. Department of Health and Human Services Office for Civil Rights (OCR) has proposed the first update to the HIPAA Security Rule since 2013, requiring healthcare organizations to implement stronger cybersecurity measures for protected health information. The new requirements include written risk assessments, network segmentation, vulnerability scanning every six months, and penetration testing every 12 months. From 2018 to 2023, healthcare data breaches increased by 102%, affecting 167 million individuals in 2023 alone. The proposed changes address the evolution of healthcare delivery, increased cyber threats, and compliance issues observed by OCR. The current Security Rule remains in effect while HHS proceeds with the rulemaking process.
  • The U.S. Department of Health and Human Services’ Office for Civil Rights has proposed a major update to HIPAA’s Security Rule, introducing new cybersecurity requirements with an estimated first-year compliance cost of $9 billion. The proposal includes mandatory implementation specifications for encryption, multifactor authentication, data backups every 48 hours, and requirements for business associates to verify compliance through expert analysis. Organizations will have 240 days to comply after the final rule is published, with the Notice of Proposed Rulemaking set for January 6, 2025, followed by a 60-day comment period. The proposal has bipartisan support and aims to modernize healthcare cybersecurity standards that haven’t been updated since 2013, though its fate may be influenced by the upcoming administration change.

Data Breaches

  • The healthcare sector faced unprecedented cyberattacks in 2024, with 677 major health data breaches affecting 182.4 million people, including a record-breaking attack on Change Healthcare that compromised 100 million Americans and resulted in a $22 million ransom payment. Business associates were involved in 212 breaches affecting 131 million individuals, while hacking/IT incidents accounted for 550 attacks impacting 166 million people. The top 10 breaches included major healthcare organizations like Kaiser Foundation Health Plan (13.4 million affected), Ascension Health (5.6 million affected), and HealthEquity (4.3 million affected). Looking ahead to 2025, experts predict continued threats from ransomware, data theft, and supply chain attacks, with emerging concerns around telehealth security, IoT medical devices, and AI in healthcare.
  • UT Southwestern Medical Center experienced a data breach in late 2024 that exposed 43,048 patients’ data through unauthorized access to a third-party calendar tool, marking its sixth breach since 2020. The exposed data included sensitive information such as names, dates of birth, Social Security numbers, medical records, diagnoses, and insurance information. The breach occurred because the calendar management tool was used without a business associate agreement in place. UTSW has taken remedial action, including implementing stronger security measures and notifying affected individuals.
  • A significant cyberattack on Texas Tech University Health Sciences Center (TTUHSC) and its El Paso campus compromised sensitive data of approximately 1.4 million individuals, with the Interlock ransomware group claiming responsibility for stealing 2.1 million files totaling 2.6 terabytes. The breached data included personal information such as names, Social Security numbers, financial details, and health-related records, prompting TTUHSC to offer complimentary credit monitoring services and establish a toll-free assistance line for affected individuals. The incident follows a pattern of major healthcare sector cyberattacks in 2024, including the Change Healthcare breach affecting 100 million individuals ($22 million ransom), MediSecure in Australia, and Synnovis in London’s NHS hospitals. TTUHSC discovered the breach in mid-September, reported it to authorities, and is implementing enhanced security measures while working with cybersecurity specialists.

Litigation

  • Apple has agreed to pay $95 million to settle a lawsuit alleging that Siri recorded private conversations without consent. The settlement addresses “unintentional” recordings that occurred after the “Hey Siri” feature was introduced in 2014, with users reporting suspiciously targeted ads following private conversations. Affected customers who purchased Siri-enabled devices between September 17, 2014, and December 31, 2024, can claim up to $20 per device for a maximum of five devices, with eligible devices including iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs. A settlement approval hearing is scheduled for February 14, after which Apple will notify affected customers and delete their recorded private conversations. While the $95 million settlement appears significant, it’s notably less than the potential $1.5 billion fine Apple could have faced under the Wiretap Act if the case had proceeded to trial.
  • The Texas Attorney General has begun enforcing the Texas Data Privacy and Security Act, which took effect on July 1, 2024. The Act grants consumers rights to access, correct, delete, and obtain copies of their personal data, while requiring businesses to implement security measures and limit data collection. The Attorney General has issued violation notices targeting inappropriate data sharing, lack of consumer consent, and deficiencies in privacy notices. The enforcement actions focus on cases where sensitive user data, including location and vehicle information, was shared without proper consent. Businesses operating in Texas must now ensure compliance with the Act’s requirements regarding data collection, processing, and consumer rights notifications.

Medical Reasoning

  • A recent research paper evaluates the performance of OpenAI’s o1-preview model, a large language model, on clinical reasoning tasks. The study conducted five experiments focusing on differential diagnosis generation, diagnostic reasoning, triage differential diagnosis, probabilistic reasoning, and management reasoning, with assessments by physician experts. The o1-preview model demonstrated significant improvements in generating differential diagnoses and in the quality of diagnostic and management reasoning compared to previous models and human physicians. However, there were no improvements in probabilistic reasoning or triage differential diagnosis compared to past models. In a battery of tests, the model correctly diagnosed 78.3% of cases, and it selected the correct next diagnostic test in 87.5% of cases. In other tests, the model outperformed GPT-4 and physicians in clinical reasoning documentation. The study concludes that the o1-preview model exhibits superhuman performance in several medical reasoning tasks, indicating potential for integration into clinical workflows to enhance decision-making and patient care.
  • A new study in European Radiology shows that GPT-4 achieved 94% accuracy in radiological diagnoses, outperforming human radiologists who scored between 73% and 89%. AI in healthcare leverages massive datasets including electronic health records, medical imaging, and clinical databases to enhance diagnostic capabilities, personalize treatment plans, and support clinical decision-making. The technology powers virtual health assistants, performs remote diagnosis through wearable devices, and accelerates drug discovery while reducing development costs. Healthcare facilities are integrating AI for medical imaging analysis and patient outcome prediction, though challenges remain in regulatory compliance, data privacy, legacy system integration, and maintaining human expertise. The implementation of AI in healthcare requires addressing concerns about patient trust, workforce adaptation, and the potential overreliance on technology.

Privacy

  • The U.S. Department of Justice published a proposed rule on October 29, 2024 that would restrict or prohibit data transactions involving sensitive personal data and government-related data between U.S. persons and entities from countries of concern including China, Russia, Iran, North Korea, Cuba, and Venezuela. The rule establishes bulk data thresholds ranging from 100 to 100,000 records and covers six categories of sensitive personal data including personal identifiers, geolocation data, biometric identifiers, genomic data, health data, and financial data. The regulations will impact various sectors including healthcare providers, financial services, insurance companies, and technology firms, requiring them to implement compliance programs and maintain transaction records for 10 years. The rule prohibits all data brokerage transactions and bulk genomic data transfers, while restricting vendor, employment, and investment agreements through cybersecurity requirements established by CISA. The DOJ emphasizes this is a national security measure aimed at preventing countries of concern from accessing data that could enhance their military and intelligence capabilities.

Ransomware

  • A new report reveals that ransomware attacks are costing U.S. healthcare organizations $1.9 million per day in downtime expenses. Since 2018, there have been 654 ransomware attacks on healthcare providers, with 2023 marking a record high of 143 incidents and compromising over 88.7 million patient records in total, of which 26.2 million were breached in 2023 alone. Healthcare organizations experience an average of 17 days of downtime per incident, with the highest disruptions averaging 27 days in 2022, leading to an estimated total loss of $21.9 billion over six years. Cybersecurity experts emphasize the need for preparation, including incident response teams, communication plans, and regular data backups, as hackers increasingly employ double-extortion tactics by both encrypting systems and stealing data.
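The headline figures above can be cross-checked with simple arithmetic; all inputs below are the report’s numbers as summarized here.

```python
# Cross-check of the reported ransomware downtime figures.
attacks_since_2018 = 654   # ransomware attacks on healthcare providers
avg_downtime_days = 17     # average downtime per incident
cost_per_day = 1.9e6       # $1.9 million per day of downtime

estimated_total = attacks_since_2018 * avg_downtime_days * cost_per_day
print(f"Estimated total downtime cost: ${estimated_total / 1e9:.1f}B")
```

Multiplying the averages yields roughly $21.1 billion, close to the report’s $21.9 billion six-year estimate; the small gap plausibly reflects incident-level variation (e.g., the 27-day average disruptions in 2022) rather than a flat per-incident figure.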

Regulation

  • A new paper from Paragon Health Institute outlines guidelines for regulating artificial intelligence in healthcare while maintaining innovation and patient safety. The paper emphasizes that AI regulation must be specific to technology types and use contexts, as risks vary significantly between applications like diagnostic tools versus back-office functions. The FDA’s existing framework for medical device approval provides a foundation for AI oversight, with three pathways based on risk levels and the presence of predicate devices. The guidelines recommend preserving existing patient protections under HIPAA and other laws while avoiding duplicate regulations, and stress that AI systems should demonstrate safety and effectiveness comparable to human clinicians when operating autonomously.