E-News Exclusive
Since taking office, President Biden has moved with urgency to seize the tremendous promise and manage the risks posed by artificial intelligence (AI). President Biden’s Executive Order 14110 outlined dozens of actions, including many for which the Department of Health and Human Services (HHS) is responsible, to ensure the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
The Biden-Harris Administration is announcing the latest step in a broader commitment to ensure AI is deployed safely and responsibly in health care: voluntary commitments from a group of 28 health care provider and payer organizations to help move toward safe, secure, and trustworthy purchasing and use of AI technology.
These companies are committing to the following:
• vigorously developing AI solutions to optimize health care delivery and payment by advancing health equity, expanding access, making health care more affordable, improving outcomes through more coordinated care, improving patient experience, and reducing clinician burnout;
• working with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe (FAVES) AI principles, as established and referenced in HHS’ Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule;
• deploying trust mechanisms that inform users if content is largely AI-generated and not reviewed or edited by a human;
• adhering to a risk management framework that includes comprehensive tracking of applications powered by frontier models and an accounting for potential harms and steps to mitigate them; and
• researching, investigating, and developing AI swiftly but responsibly.
Previous US government regulatory approaches to AI, as well as prior engagement with the private sector, including earlier voluntary commitments from private companies, have mostly focused on responsible AI among technology developers on the “supply side” of the equation. Those efforts covered the use of AI in foundation models, medical devices, software applications, and EHRs.
The new commitments come from entities on the “demand side”: health care providers and payers that develop, purchase, and implement AI-enabled technology for their own use in health care activities. Such voluntary commitments are not dispositive of compliance under federal law.
The companies that committed include Allina Health, Bassett Healthcare Network, Boston Children’s Hospital, Curai Health, CVS Health, Devoted Health, Duke Health, Emory Healthcare, Endeavor Health, Fairview Health Systems, Geisinger, Hackensack Meridian, HealthFirst (Florida), Houston Methodist, John Muir Health, Keck Medicine, Main Line Health, Mass General Brigham, Medical University of South Carolina Health, Oscar, OSF HealthCare, Premera Blue Cross, Rush University System for Health, Sanford Health, Tufts Medicine, UC San Diego Health, UC Davis Health, and WellSpan Health.
President Biden has previously secured commitments from companies to help advance AI-related goals. In July 2023, 15 companies responsible for many of the most cutting-edge AI models committed to a series of actions designed to promote safety, security, and trust: three principles fundamental to the future of AI.
Role of HHS
The Administration’s lead agency on health care is HHS. The department has worked for many years to advance its mission with the help of AI, using it to advance research and discovery, drug and device safety, health care delivery, human services delivery, and public health. HHS also plays a variety of critical roles: regulator of the health industry, catalyst for innovation in the delivery of health and human services, funder of grants and research, and convener of common interests and priorities for ensuring the health and well-being of all Americans.
Specific examples of HHS AI-related activities to date include the following:
• The Office of the National Coordinator for Health Information Technology recently finalized a rule to increase algorithm transparency to support a dynamic and high-quality market for predictive AI in EHRs used by 97% of hospitals and almost 80% of physician offices across the country.
• The FDA has cleared, authorized, or approved more than 690 AI-enabled devices to improve medical diagnosis and treatment and expand access to care for patients. Medical imaging is the category with the most AI/machine learning (ML)–enabled device submissions. The FDA is also exploring how AI will affect the regulatory review of drugs, biological products, and medical devices (including software as a medical device). The agency recently issued a discussion paper on using AI/ML in the development of drug and biological products and is seeking comment on draft guidance and hosting workshops to solicit feedback.
• Recently, the FDA released draft guidance on marketing submission recommendations for predetermined change control plans for AI/ML-enabled medical devices to help ensure that such devices can be safely, effectively, and rapidly modified, updated, and improved in response to new data. In October 2023, the FDA, Health Canada, and the UK’s Medicines and Healthcare products Regulatory Agency jointly published “Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles.”
• The National Institutes of Health (NIH) is using AI to research priority areas including cancer, Alzheimer’s disease, and mental illness. NIH invested an estimated $200 million in fiscal year 2023 and $175 million in fiscal year 2022 in work using large data sets, including efforts to help researchers understand how to diagnose individuals with autism spectrum disorders. NIH also issued a Notice of Funding Opportunity to develop ML/AI tools and resources to support the NIH BRAIN Initiative and issued a Notice of Special Interest to help improve the usability of NIH-supported data for AI/ML analytics.
• The AI/ML field currently lacks diversity among its researchers and in its data, including EHR data. These gaps risk creating and perpetuating harmful biases in AI/ML use, in algorithm development and training, and in the interpretation of findings. To address these gaps and engage underrepresented communities, NIH recently announced the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program. AIM-AHEAD will establish mutually beneficial, coordinated partnerships to increase the participation and representation of researchers and communities currently underrepresented in the development of AI/ML models and to enhance the capabilities of this emerging technology, beginning with EHR data.
• The Office for Civil Rights proposed a rule to make clear that federal civil rights laws under Section 1557 of the Affordable Care Act prohibit discrimination in health programs and activities, including in the use of clinical algorithms.
• The Agency for Healthcare Research and Quality (AHRQ) developed and published a conceptual framework for applying guiding principles across an algorithm’s life cycle to address structural racism and discrimination, with health care equity for patients and communities as the overarching goal. The guiding principles were developed following a two-day meeting in March 2023 of a diverse panel of experts convened by AHRQ and NIH’s National Institute on Minority Health and Health Disparities, in partnership with the HHS Office of Minority Health and the Office of the National Coordinator for Health Information Technology, to review evidence, hear from stakeholders, and receive community feedback. The meeting was informed by an evidence review from the AHRQ Evidence-based Practice Center Program, which examined the evidence on algorithms and racial and ethnic bias in health care and approaches to mitigating such bias. A subsequent meeting convened by AHRQ and NIH in May 2023 allowed stakeholders and the public to provide feedback on a draft of the guiding principles.
• CMS is exploring whether algorithms used by health plans and providers to identify high-risk patients and manage costs can introduce inappropriate bias and restrictions in the delivery of medically appropriate health care services. Prior authorization policies and procedures may have a disproportionate impact on underserved populations and may delay or deny access to certain services. CMS is now requiring Medicare Advantage organizations to ensure that they are making medical necessity determinations based on the circumstances of the specific individual, as opposed to using an algorithm or software that does not account for an individual’s circumstances.
• The CDC is developing an AI strategy and exploring how AI and natural language processing can augment existing methods to improve the timeliness of, and enhance, public health’s ability to estimate US suicide fatalities and other important sentinel events. The CDC also uses AI to combat the opioid epidemic (eg, by identifying opioid users and deaths), respond to disease outbreaks, and more.
• The Administration for Strategic Preparedness and Response currently leverages ML and AI tools to improve COVID-19 data collection and analysis, forecasting, and vaccine access and distribution.
• The Administration for Children and Families and the Assistant Secretary for Planning and Evaluation conducted a study focused on emerging issues and needs associated with AI in the health and human services sectors, resulting in the published report, “Options and Opportunities to Address and Mitigate the Existing and Potential Risks, as well as Promote Benefits, Associated with AI and Other Advanced Analytic Methods.”
AI opens vast opportunities to improve our country’s health care, public health, and social service capabilities to better serve the American people. The Biden-Harris Administration commends these organizations for committing to deploy these critical technologies safely and responsibly.
— Source: Health and Human Services