Categories
Health Law Highlights

The Future of Technology in Health Care

Summary of article from The Regulatory Review, by Alyson Diaz, Julia Englebert, and Carson Turner:

The use of technology in healthcare, particularly AI and telemedicine, is increasing, but many Americans are uncomfortable with AI’s role in diagnosis and treatment due to potential biases and errors. While AI can improve care quality and accessibility, especially for underserved communities, it also presents risks such as algorithmic bias and overreliance. Current regulations, including the FDA’s 510(k) review, inadequately address these concerns and often allow AI-enabled devices to be approved without sufficient safety and accuracy checks. Scholars suggest various regulatory improvements, including educating patients about algorithmic bias, continuous assessment of AI technologies, and lowering barriers for community organizations to provide telehealth services. Concerns also extend to the influence of direct-to-consumer pharmaceutical companies on social media, and the need for stricter regulation of telehealth providers to prevent inadequate treatment and excessive drug prescriptions.

The Intersection of Artificial Intelligence and Utilization Review

Summary of article from Sheppard Mullin Richter & Hampton LLP, by Lynsey Mitchel:

California’s SB 1120 aims to regulate the use of artificial intelligence (AI) in managed care, requiring AI tools to be fair, non-discriminatory, and based on a patient’s medical history and individual circumstances. The bill aligns with the Centers for Medicare and Medicaid Services (CMS) rules, which allow the use of AI in coverage determinations as long as the AI complies with all applicable rules and does not solely dictate decisions. Other states, including Georgia, New York, Oklahoma, and Pennsylvania, have introduced similar bills focusing on regulator review and disclosure of AI use. Various states have also adopted the National Association of Insurance Commissioners’ guidelines to mitigate the risk of adverse outcomes from AI use. Payors are urged to monitor their AI tools closely to reduce the risk of legal issues arising from improper service denials.

Cloud-Based AI Services Could Help Fight Health Misinformation

Summary of article from Healthcare IT News, by Andrea Fox:

Several major universities are developing a platform named Project Heal to combat healthcare and public health misinformation. The platform will use machine learning, generative AI, and predictive analytics to identify and counteract misinformation before it spreads. The system also accounts for cultural, historical, and linguistic nuances to generate personalized messages for targeted communities.

Some Nurses Have a Deep Distrust of AI – but Transparency and Training Could Help

Summary of article from Healthcare IT News, by Andrea Fox:

The California Nurses Association (CNA) has protested against the use of artificial intelligence (AI) in healthcare by Kaiser Permanente, citing concerns over patient safety, job displacement, and the devaluation of nursing practice. The CNA demands that workers and unions be involved in the development and deployment of AI in healthcare. Meanwhile, Kaiser Permanente argues that AI can improve patient care, citing a program that reportedly saved approximately 500 patient lives annually. A recent report revealed that many nurses are uncomfortable with AI, with concerns ranging from lack of empathy to data security. The report suggests that for successful AI implementation, healthcare organizations should prioritize transparency, training, communication, and feedback.

What Parkland Can Teach Other Hospitals About AI in Health Care

Summary of article from Dallas Morning News:

Parkland Health is actively utilizing artificial intelligence (AI) in medical practices, including trauma patient treatment and assisting doctors with paperwork. The technology analyzes patient data and updates survival probabilities in real time, while also transcribing doctors’ notes. Despite potential AI biases and inaccuracies, Parkland mitigates these risks through regular model reviews. The hospital, an early adopter of electronic health records and a member of Duke University’s Health AI partnership corps, has leveraged AI to predict patient needs and manage patient loads. Being a public hospital, Parkland strategically implements thoroughly vetted AI tools or collaborates with the Parkland Center for Clinical Innovation on new technologies.

How ACOs Can Harness AI’s Transformative Potential

Summary of article from MedCity News, by Theresa Hush:

Artificial intelligence (AI) is revolutionizing various sectors, including healthcare, by improving diagnoses, personalizing medicine, and developing less-invasive procedures. However, its application in accountable care organizations (ACOs) remains limited, mainly to patient-checking bots and robotic assistants, without fully exploring AI’s potential for predictive healthcare and cost reduction. ACOs face challenges in data aggregation and utilization, often relying on retrospective claims data rather than forward-thinking clinical data insights, hindering significant improvements in patient outcomes and cost savings. To leverage AI effectively, ACOs need to aggregate comprehensive data from all provider EHRs, build clinically rich data substrates, and share data with providers. Thus, integrating AI with electronic health records can offer ACOs opportunities to improve patient health outcomes and reduce costs.

HHS Extends the Antidiscrimination Provisions of the Affordable Care Act to Patient Care Decision Support Tools, Including Algorithms

Summary of article from Epstein Becker Green, by Bradley Merrill Thompson:

The Office of Civil Rights (OCR) has published its final rule on algorithmic discrimination by payers and health care providers. The rule, based on section 1557 of the Affordable Care Act, prohibits discrimination on the basis of race, color, national origin, sex, age, or disability through the use of patient care decision support tools. Covered entities are required to identify and mitigate the risk of discrimination in these tools, with larger, more sophisticated organizations held to a higher compliance standard. The rule applies to both automated and non-automated tools and is set to become effective 300 days after its publication. OCR is also considering additional rulemaking to expand the scope of the regulation.

Healthcare Industry Sees Increased Investment in Generative AI, LLMs

Summary of article from Health IT Analytics, by Shania Kennedy:

A recent Generative AI in Healthcare Survey reveals that healthcare and life sciences organizations are increasingly investing in generative AI projects, with larger organizations and leadership roles reporting higher adoption rates. The survey found that 35% of respondents are not actively considering generative AI, while 21% are evaluating use cases and 20% are developing these tools. The majority of organizations have significantly increased their generative AI budgets, with a focus on small, task-specific language models. The most common use cases are streamlining clinical workflows and improving patient communication. Despite the increased adoption, accuracy and potential legal and reputational risks are major roadblocks, and many generative AI projects have not been thoroughly tested for bias and explainability.

HHS Warns Health Care Sector of AI-Driven Phishing, Social Engineering Attacks on IT Help Desks

Summary of article from Carlton Fields, by Michael Bailey, John Clabby:

The Health Sector Cybersecurity Coordination Center (HC3) has issued an alert about advanced cybersecurity threats targeting the healthcare sector, particularly IT help desks. These threats involve the use of publicly available information and AI to impersonate healthcare employees, gaining access to email accounts and diverting payments to threat-controlled accounts. The alert also highlights the rise of “spearphishing voice” or “vishing” attacks, using AI to mimic employee voices. In response, the Department of Health and Human Services (HHS) is planning to expand its cybersecurity regulations and enforcement, including potential increases in penalties for HIPAA violations. To mitigate these threats, organizations are advised to enhance training, review cybersecurity policies, limit social media exposure, improve help desk verification procedures, and reassess multi-factor authentication methods.

A Regulatory Roadmap to AI and Privacy

Summary of article from IAPP, by Daniel Solove:

There is a complex relationship between AI and privacy. AI-related privacy issues are often extensions of existing digital privacy problems, so privacy law reform must address digital privacy holistically, not just in the context of AI. AI implicates privacy concerns in data collection and processing, decision-making, and data analysis, and current privacy laws are inadequate to handle these issues. AI also presents difficulties in oversight, participation, and accountability. Effective reform must include transparency, due process, and stakeholder involvement, and a comprehensive overhaul of existing privacy laws is needed to effectively regulate AI’s impact on privacy.