
Federal health technology regulators on Wednesday finalized new rules to force software vendors to disclose how artificial intelligence tools are trained, developed, and tested — a move to protect patients against biased and harmful decisions about their care.

The rules are aimed at placing guardrails around a new generation of AI models gaining rapid adoption in hospitals and clinics around the country. These tools are meant to help predict health risks and emergent medical problems, but little is publicly known about their effectiveness, reliability, or fairness.

Starting in 2025, electronic health record vendors that develop or supply these tools, which increasingly use a type of AI known as machine learning, will be required to disclose more technical information to clinical users about their performance and testing, as well as the steps taken to manage potential risks.
