Categories: Ask the Health Lawyer

Epic Releases Open-Source AI Validation Tool for Health Systems

Summary of article from Fierce Healthcare, by Heather Landi:

Epic has launched an open-source tool to help healthcare organizations test and monitor artificial intelligence (AI) models. Available for free on GitHub, the AI validation software suite can be integrated with electronic health record (EHR) systems and used to validate AI models from various sources. The tool automates data collection and mapping, providing near real-time metrics and analysis. However, it currently does not validate generative AI models, although Epic plans to expand its capabilities in the future. The Health AI Partnership (HAIP), which includes Duke Health, Mayo Clinic, and Kaiser Permanente, intends to use the tool for local AI model validation.

Categories: Ask the Health Lawyer

The Colorado AI Act: What You Need to Know

Summary of article from IAPP, by Cobun Zweifel-Keegan:

The Colorado AI Act, the first U.S. cross-sector AI governance law, was signed into law on May 17, 2024, with key provisions taking effect Feb. 1, 2026. The law focuses on high-risk AI systems, defined as those making consequential decisions, and introduces stringent requirements to prevent algorithmic discrimination. The Act imposes responsibilities on both developers and deployers of AI systems, requiring them to use reasonable care to avoid algorithmic discrimination and mandating comprehensive documentation and impact assessments. The law also requires incident reporting, public disclosure of risk management practices, and direct consumer notifications. The law exempts entities covered by HIPAA if they provide AI-generated recommendations that require a health care provider to take action to implement the recommendation. Enforcement of the law, which treats violations as breaches of Colorado’s general consumer protection statute, will be carried out by the Colorado attorney general starting Feb. 1, 2026.

Categories: Health Law Highlights

New Practical Guidance for Balancing Fairness, Privacy

Summary of article from IAPP, by Cobun Zweifel-Keegan:

The tension between achieving fairness and maintaining privacy in the operation of advanced AI and machine learning systems is a major challenge for digital governance teams. Testing for bias and ensuring equity often requires demographic data, which can infringe on privacy rights. A report by the Center for Democracy and Technology AI Governance Lab offers best practices for navigating this issue, such as gathering data responsibly, pseudonymization, encryption, and conducting privacy impact assessments. Legislation, like Colorado’s recently enacted AI law, may balance these interests by requiring fairness and bias testing in AI systems. Transparency and clear communication of methodologies are essential to build trust and uniform benchmarks in AI governance.

Categories: Health Law Highlights

Implementing AI and Mitigating Compliance Risks – Part II

Summary of article from Dentons, by Susan Freed:

With the increasing role of generative AI in the healthcare industry, there is a growing need for a clear, consistent approach to its implementation. To mitigate compliance risks, organizations must have an AI strategy, identify current uses of generative AI, update relevant policies, and create a process for evaluating new AI technology. It is important to train users, implement regular reporting strategies, and conduct periodic reviews of the AI technology in use. Providers should develop governance processes now and be flexible enough to adapt to new technologies and regulations.

Categories: Alert

Profound Medical Wins FDA Nod for AI in Prostate Cancer Procedure

Summary of article from MassDevice, by Sean Whooley:

Profound Medical has received FDA 510(k) clearance for its second AI model, the Contouring Assistant, designed to treat prostate cancer. The Contouring Assistant is part of the company’s TULSA-Pro system, which uses transurethral ultrasound ablation (TULSA) to ablate diseased tissue in patients with various stages of prostate cancer, benign prostatic hyperplasia (BPH), or those requiring salvage therapy. The TULSA procedure uses real-time magnetic resonance guidance to preserve urinary continence and sexual function while targeting cancerous tissue. The newly cleared AI module uses machine learning to segment the prostate, aiding in the delineation of the target ablation volume. Profound Medical is also developing another TULSA-AI module, TULSA BPH, with more details expected later in 2024.

Categories: Health Law Highlights

Health Care, AI and Antitrust: Analysis and Next Steps

Summary of article from Manatt, Phelps & Phillips, LLP, by Dylan Carson, Harvey Rochman:

As artificial intelligence (AI) becomes more prevalent in the health care industry, there are growing concerns about potential anticompetitive conduct, including algorithmic price fixing. This issue was highlighted in a recent New York Times report alleging that certain health plans and administrators were using the same company’s algorithmic tools to set out-of-network rates, potentially leading to higher costs for patients. Antitrust enforcers argue that using the same AI systems to set prices could be seen as collusion and therefore a violation of antitrust laws. Health care companies are advised to closely monitor these developments and consider the potential legal risks associated with their use of AI.

Categories: Health Law Highlights

Envisioning the Future of Health Care With OpenAI’s GPT-4o

Summary of article from KevinMD, by Harvey Castro, MD, MBA:

OpenAI’s GPT-4o promises to revolutionize health care with advanced predictive analytics, enhanced surgical assistance, personalized medicine, automated health monitoring, and virtual health assistants. It aims to improve emergency responses, offer immersive education and training, facilitate cross-border medical collaboration, enhance mental health services, streamline administrative processes, and foster community health initiatives. The system is built to eliminate dataset bias and ensure data security with HIPAA-compliant servers. GPT-4o’s potential applications range from predicting health trends to automating administrative tasks, all while ensuring patient data remains private and secure. This transformative AI technology is poised to improve patient outcomes, enhance operational efficiency, and foster a more equitable and advanced health care system.

Categories: Health Law Highlights

The Future of Technology in Health Care

Summary of article from The Regulatory Review, by Alyson Diaz, Julia Englebert, and Carson Turner:

The use of technology in healthcare, particularly AI and telemedicine, is increasing, but many Americans are uncomfortable with AI’s role in diagnosis and treatment due to potential biases and errors. While AI can improve care quality and accessibility, especially for underserved communities, it also presents risks such as algorithmic bias and overreliance. Current regulations, including the FDA’s 510(k) review, inadequately address these concerns and often allow AI-enabled devices to be approved without sufficient safety and accuracy checks. Scholars suggest various regulatory improvements, including educating patients about algorithmic bias, continuous assessment of AI technologies, and lowering barriers for community organizations to provide telehealth services. Concerns also extend to the influence of direct-to-consumer pharmaceutical companies on social media, and the need for stricter regulation of telehealth providers to prevent inadequate treatment and excessive drug prescriptions.

Categories: Health Law Highlights

The Intersection of Artificial Intelligence and Utilization Review

Summary of article from Sheppard Mullin Richter & Hampton LLP, by Lynsey Mitchel:

California’s SB 1120 aims to regulate the use of artificial intelligence (AI) in managed care, requiring AI tools to be fair, non-discriminatory, and based on a patient’s medical history and individual circumstances. The bill aligns with Centers for Medicare and Medicaid Services (CMS) rules, which allow the use of AI in coverage determinations as long as the AI complies with all applicable rules and does not solely dictate decisions. Other states, including Georgia, New York, Oklahoma, and Pennsylvania, have similar bills focusing on regulator review and disclosure of AI use. Various states have also adopted the National Association of Insurance Commissioners’ guidelines to mitigate the risk of adverse outcomes from AI use. Payors are urged to monitor their AI tools closely to reduce the risk of legal issues arising from improper service denials.

Categories: Health Law Highlights

Cloud-Based AI Services Could Help Fight Health Misinformation

Summary of article from Healthcare IT News, by Andrea Fox:

Several major universities are developing a platform named Project Heal to combat healthcare and public health misinformation. The platform will use machine learning, generative AI, and predictive analytics to identify and counteract misinformation before it spreads. The system also accounts for cultural, historical, and linguistic nuances to generate personalized messages for targeted communities.