Navigating the New Era of Algorithmic Transparency in Healthcare – MedCity News


The Health Data, Technology, and Interoperability (HTI-1) final rule recently released by the Office of the National Coordinator for Health IT (ONC) establishes groundbreaking transparency requirements for artificial intelligence (AI) and predictive algorithms used in certified health IT systems.

With ONC-certified health IT supporting the care of more than 96% of hospitals and 78% of office-based physicians, this regulatory approach will have far-reaching impacts on the healthcare industry.

As EHR/EMR vendors attempt to comply with these new regulations, they must navigate uncharted and often confusing terrain and address the challenges presented by the complexity and opacity of powerful AI tools, including Large Language Models (LLMs).

The potential and challenges of Large Language Models (LLMs)

LLMs are a type of AI that can analyze large amounts of data, such as unstructured clinical notes, to generate insights and recommendations. While LLMs have the potential to revolutionize predictive decision support in healthcare, their inherent complexity and "black box" nature make it difficult to understand how they reach their conclusions. This opacity creates significant challenges for EHR vendors that rely on these models and must still meet the transparency requirements of the HTI-1 final rule.

Understanding the FAVES criteria

The HTI-1 final rule introduces the FAVES (fairness, appropriateness, validity, effectiveness, and safety) criteria as a framework for assessing the transparency and accountability of AI and predictive algorithms. EHR/EMR vendors must ensure that clinical users can access a consistent base set of information about the algorithms they use to support decision making. Vendors must demonstrate that their systems meet each of these criteria:

  • Fairness: Algorithms must be free of bias and discrimination and ensure equal treatment of all patients.
  • Appropriateness: Algorithms must be appropriate for their intended use cases and respect patient privacy and autonomy.
  • Validity: Algorithms must be based on sound scientific principles and validated using rigorous testing and evaluation methods.
  • Effectiveness: Algorithms must demonstrate real-world effectiveness in improving patient outcomes and clinical decision-making.
  • Safety: Algorithms must be safe to use and accompanied by appropriate monitoring, reporting, and risk-mitigation measures.

Evidence-based vs. predictive decision support

The HTI-1 final rule distinguishes between evidence-based decision support tools, such as diagnostic prompts and out-of-range lab alerts, and predictive decision support systems based on LLMs and other AI algorithms. While evidence-based tools are not the focus of the new regulations, predictive decision support systems are subject to strict transparency requirements, reflecting their greater potential for harm if not properly validated and monitored.

Preparing for the ONC certification criteria

To maintain certification and comply with the HTI-1 final rule, EHR/EMR vendors must closely monitor the development of the ONC certification criteria, which are expected to be published by the end of the year. Vendors should proactively assess their current and planned use of LLMs and other predictive algorithms and ensure they are prepared to provide detailed information about training data, potential biases, and decision-making processes. Failure to comply with these requirements may result in loss of certification and market share.

The importance of collaboration and transparency

As the healthcare industry navigates this new landscape of algorithmic transparency, collaboration between EHR/EMR vendors, healthcare providers and regulators will be critical. By working together to establish best practices, share knowledge and address potential challenges, the industry can ensure the benefits of AI and LLMs are realized in healthcare while keeping patient safety and trust top of mind. Healthcare providers also play a critical role in providing feedback on the accuracy and usefulness of predictive decision support tools, helping to refine these systems over time.

The HTI-1 final rule represents a significant step forward in ensuring the responsible and ethical use of AI and predictive algorithms in healthcare. As the industry evolves, EHR/EMR vendors that prioritize transparency, collaboration, and patient-centered innovation will be well prepared for the challenges and opportunities ahead. By embracing algorithmic transparency and collaborating to establish best practices, the healthcare community can harness the power of AI to improve patient care and outcomes while maintaining trust among patients and providers alike.

Photo: Metamorworks, Getty Images


Dr. Jay Anders is Chief Medical Officer of Medicomp Systems. Dr. Anders supports product development and serves as a representative and advocate for the medical and healthcare community that Medicomp's products serve. Before joining Medicomp, Dr. Anders served as Chief Medical Officer for McKesson Business Performance Services, where he was responsible for supporting the development of the company's clinical information systems. He was also instrumental in leading the first integration of Medicomp's Quippe physician documentation into an EHR. Dr. Anders leads Medicomp's Clinical Advisory Board and works closely with physicians and nurses to ensure that all Medicomp products are developed based on user needs and preferences to improve usability.
