Oversight of health AI applications should be a democratic process


AI development is at a flash point; developers from some of the largest, most successful companies in the world are leaving high-paying jobs to start health tech companies. Providers, regulators, and industry leaders are (understandably) looking for regulatory frameworks to ensure AI applications are trustworthy and patient-centric.

Not all regulatory frameworks will allow AI to be truly revolutionary. At a recent hearing of the House Energy and Commerce Health Subcommittee, congressional leaders expressed concern about a third-party review process for AI proposed by the nonprofit Coalition for Health AI (CHAI), whose leadership is driven by representatives from incumbents like Google, Microsoft, and Mayo Clinic. This review process would put big tech companies, which are themselves developing AI models for health care, in control of market entry.

As Rep. Mariannette Miller-Meeks (R-Iowa) and Rep. Brett Guthrie (R-Ky.) pointed out at the hearing, this approach carries a strong potential for regulatory capture.

By seeking to regulate the health AI industry on behalf of the federal government, established big tech companies could create an uneven playing field for newer or smaller companies. Startups innovate quickly, while incumbents may struggle to keep pace. Adding layers of review of startups’ intellectual property by the very incumbents they’re competing against does nothing for safety, while slowing down real innovation that could change patients’ and clinicians’ lives for the better.

An alternative framework is localized quality assurance. One example has been developed by the Health AI Partnership, a collaboration of decentralization-minded individuals from academic medical centers. It aims to arm health systems and others seeking to incorporate health AI into their operations with the tools and technical expertise to make their own decisions about which AI models work best for the patients they serve.

Rather than handing control of sensitive data review to big tech companies that are trying to compete in the AI space, localized quality assurance would let participating organizations validate health AI applications on their own datasets, determining which technologies are most effective for the populations they serve. This framework would provide resources to allow every provider to operate its own review process, rather than consolidating reviews with a handful of big tech companies and academic medical centers.
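In concrete terms, local validation can be as simple as scoring a vendor model’s predictions against outcomes recorded in a provider’s own data. The Python sketch below is purely illustrative: the column names (clinic, outcome, risk_score) and the synthetic data are hypothetical, and it stands in for no particular vendor’s or partnership’s actual tool. It uses scikit-learn’s standard roc_auc_score to check how well the model discriminates on the local population.

```python
# A minimal sketch of "localized quality assurance": a health system scoring
# a vendor model's risk predictions against outcomes observed in its own
# records. All column names and data here are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score


def validate_locally(df: pd.DataFrame) -> None:
    """Report the model's discrimination overall and per local subgroup."""
    overall = roc_auc_score(df["outcome"], df["risk_score"])
    print(f"Overall AUROC on local data: {overall:.3f}")

    # Performance can differ across the populations a provider actually
    # serves, so repeat the same check within each subgroup.
    for clinic, rows in df.groupby("clinic"):
        if rows["outcome"].nunique() < 2:
            print(f"  {clinic}: too few outcome events to evaluate")
            continue
        auc = roc_auc_score(rows["outcome"], rows["risk_score"])
        print(f"  {clinic}: AUROC {auc:.3f} (n={len(rows)})")


if __name__ == "__main__":
    # Synthetic stand-in for a provider's local dataset of model scores
    # and observed outcomes.
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "clinic": rng.choice(["rural", "urban"], size=500),
        "outcome": rng.integers(0, 2, size=500),
        "risk_score": rng.random(500),
    })
    validate_locally(demo)
```

The per-subgroup check is the point of the exercise: a model that performs well in aggregate may perform differently for the specific populations a given clinic serves, which is exactly the determination localized review leaves in providers’ hands.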

Democratized AI access and localized assurance are gaining ground. In addition to the Health AI Partnership’s tools, Epic, an electronic health record company, recently launched an open-source tool, available on GitHub, that providers can use to validate AI models. A world with multiple voluntary assurance tools allows the most innovative companies to compete on the merits of their products, and enables clinicians to evaluate these tools based on efficacy and utility in real-world clinical settings.

Full democratization of AI review, of course, demands more support for practices big and small. Not every clinic has the bandwidth or expertise to adopt internal AI review tools, making it harder for them to evaluate AI models against their specific patient populations and clinical needs. Without additional support, full democratization of AI review may be delayed.

An open, competitive market is essential for developing new technology. The winners of AI in health care haven’t yet been determined, and big corporations are trying to put a finger on the scale. In the ongoing battle between big tech and little tech, calcifying the market with regulatory capture that favors incumbents and drives the smartest people, their ideas, and their sources of capital away from health care isn’t the right way to proceed.

AI has the potential to improve care for millions of Americans, lessen the burden on clinicians, and mitigate disparities in health care. But innovators need to test and validate their novel ideas at a rapid clip. Big corporations have a strong incentive to seize the market under the guise of safety. That must not be allowed to happen. Health systems and providers should be free to gauge solutions for their own practices and unique patient populations, which will incentivize innovation and investment along the way.

Julie Yoo is a general partner on the Bio + Health team at venture capital firm Andreessen Horowitz.