AI in Healthcare: Balancing Innovation with Safety Under New Regulations
The new EU AI Act will increase regulation of artificial intelligence in healthcare, presenting organizations with the complex challenge of balancing innovation with regulatory requirements.
The healthcare industry stands at a critical juncture as artificial intelligence transforms medical procedures and training while facing new regulatory requirements. With the recent approval of the EU AI Act and the existing Medical Device Regulation (MDR), healthcare providers and medical device manufacturers must navigate an increasingly complex regulatory landscape while maintaining innovation and ensuring patient safety. At MedtecLIVE 2024, industry experts highlighted specific requirements and possible approaches to regulating AI in various contexts during a presentation session organized by bitkom.
The New Era of AI in Healthcare: Navigating Regulatory Waters
The European Union has established the world's first comprehensive AI regulation through the AI Act, which will be implemented alongside existing medical device regulations. This new framework introduces specific requirements for high-risk AI systems, a category that includes most medical applications. According to the regulatory experts, medical devices in classes IIa, IIb, and III will automatically be classified as high-risk AI systems, requiring additional oversight and compliance measures.
Healthcare organizations must prepare for a phased implementation of these regulations. The AI Act's requirements will roll out gradually, with different timelines for different aspects: prohibitions on unacceptable-risk systems take effect after six months, requirements for general-purpose AI models after twelve months, and high-risk system requirements 24 to 36 months after the act enters into force. (Editor's note: the first stage has been in force since 01.02.2025)
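For compliance planning, the phased timeline above can be turned into concrete target dates. The sketch below assumes the AI Act's entry into force on 1 August 2024 (from which the 01.02.2025 prohibition date in the editor's note follows); the month offsets mirror the article, while the helper function itself is only an illustration.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Entry into force of the AI Act; offsets follow the phased rollout
# described in the article (6 / 12 / 24-36 months).
ENTRY_INTO_FORCE = date(2024, 8, 1)
milestones = {
    "Prohibitions on unacceptable-risk systems": 6,
    "General-purpose AI model requirements": 12,
    "High-risk system requirements (earliest)": 24,
    "High-risk system requirements (latest)": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```

Running this yields 2025-02-01 for the prohibitions, consistent with the first stage already being in force.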
The intersection of the AI Act with existing MDR creates both challenges and opportunities. While there is some overlap in areas such as risk management, quality management systems, and post-market monitoring, the AI Act introduces new specific requirements for transparency, data governance, and documentation that weren't previously addressed in medical device regulations.
From Theory to Practice: Real-world AI Applications
One of the most compelling examples of AI implementation in healthcare is the Femto cataract surgery system. With over 19 million operations worldwide, cataract surgery is the most common surgical procedure, and AI-assisted systems are improving its precision and safety. The Femto system uses AI to precisely determine incision positions, demonstrating how artificial intelligence can enhance surgical accuracy while maintaining physician oversight.
Extended reality and AI are revolutionizing medical training through innovative approaches to simulation and practice. Companies like medverse GmbH, part of the inside360 group, are developing immersive training environments that combine virtual reality with AI-powered interactions. These systems enable healthcare professionals to practice procedures and scenarios that would be impossible or unsafe to recreate in real life. As demonstrated in the conference, these VR-based solutions allow practitioners to simulate emergency situations or complications without risk to patients - for example, testing responses to cardiac events during procedures while maintaining the ability to reset and repeat the scenario multiple times for optimal learning.
The integration of AI in medical training directly addresses critical scalability challenges in medical education. As highlighted in the presentation, traditional individual training sessions are extraordinarily expensive and offer limited customization options. Virtual training environments solve this by allowing medical professionals to practice procedures whenever and wherever needed, without geographic constraints - a trainer could be in Boston, while learners participate from Hong Kong and South Africa simultaneously. This approach has demonstrated remarkable efficiency gains, particularly in specialized procedures like endovascular operations. Traditional training sessions using physical equipment (costing around 30,000 euros per device) typically take four hours, while virtual training can compress this into 15-45 minutes of focused practice while maintaining educational quality.
Safety First: Building Risk-Aware AI Systems
Risk assessment must begin at the earliest stages of AI system development. Healthcare organizations are required to implement structured risk evaluation processes that consider both technical performance and potential impacts on patient safety. This includes evaluating factors such as model accuracy, fairness across different patient populations, and potential failure modes.
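One of the checks named above, fairness across patient populations, can be made concrete with a per-subgroup accuracy comparison. The following is a minimal sketch with hypothetical data and an assumed age-band attribute; real risk evaluations would use validated clinical datasets and richer metrics.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per patient subgroup from (group, y_true, y_pred) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

def fairness_gap(per_group):
    """Largest accuracy difference between any two subgroups."""
    values = per_group.values()
    return max(values) - min(values)

# Hypothetical predictions, labelled by an assumed age-band attribute.
records = [
    ("under_65", 1, 1), ("under_65", 0, 0), ("under_65", 1, 1), ("under_65", 0, 1),
    ("over_65", 1, 0), ("over_65", 0, 0), ("over_65", 1, 1), ("over_65", 1, 0),
]
acc = subgroup_accuracy(records)
print(acc)                # accuracy per age band
print(fairness_gap(acc))  # a large gap would be flagged in the risk review
```

A threshold on the fairness gap could then feed directly into the structured risk evaluation process, documenting which populations a model underserves.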
Development teams should include diverse expertise, including clinical, technical, and regulatory perspectives. As highlighted in the presentations, successful risk management requires input from multiple stakeholders to identify and address potential issues throughout the development lifecycle.
Documentation plays a crucial role in risk management. Organizations must maintain detailed records of their risk assessment processes, mitigation strategies, and ongoing monitoring plans. This documentation both supports regulatory compliance and provides a foundation for continuous improvement.
Beyond Implementation: Ensuring Long-term AI Performance
Continuous monitoring of AI systems in healthcare is essential, as model performance can degrade significantly over time. As demonstrated by Konstanze Olschewski from Phnx Alpha GmbH, real-world cases have shown dramatic performance deterioration: in one industrial application, model accuracy declined to mere chance levels (50%) within just 16 weeks of deployment. This degradation is particularly concerning in medical applications, where regulations mandate that performance must not decrease after deployment. The challenge is compounded by the fact that most AI models are trained on static datasets frozen at a specific point in time, while in medical device applications regulatory restrictions may actually prohibit system updates. Initial quality assurance and ongoing monitoring are therefore crucial for maintaining performance standards.
Healthcare organizations must implement quality assurance measures that track key performance metrics throughout the system's lifecycle. This includes monitoring accuracy, fairness, and other relevant indicators specific to the medical application.
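The lifecycle tracking described above can be sketched as a rolling-accuracy monitor that raises a flag when performance drifts, as in the industrial case where accuracy fell to chance levels. This is an illustrative sketch only; the window size and alert threshold are assumptions, not values from any regulation or from the AI-Guard product mentioned below.

```python
from collections import deque

class DriftMonitor:
    """Track the rolling accuracy of a deployed model and flag degradation.

    Illustrative sketch: window size and threshold are assumed parameters
    that a real deployment would derive from its clinical risk analysis.
    """

    def __init__(self, window: int = 100, alert_threshold: float = 0.8):
        self.window = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def record(self, y_true, y_pred):
        """Log one prediction outcome once the ground truth becomes known."""
        self.window.append(int(y_true == y_pred))

    @property
    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def degraded(self) -> bool:
        """True when rolling accuracy has fallen below the alert threshold."""
        acc = self.accuracy
        return acc is not None and acc < self.alert_threshold
```

In practice each confirmed diagnosis would be fed back via `record`, and a `degraded()` result would trigger the review and documentation steps required by post-market monitoring, since retraining the deployed model may not be permitted.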
Tools and frameworks are emerging to support this ongoing monitoring requirement. Solutions like AI-Guard enable organizations to track model performance in real-time, providing insights into system behavior and helping identify potential issues before they impact patient care.
Conclusion
As healthcare organizations integrate AI into their operations, success depends on balancing innovation with regulatory compliance and patient safety. By implementing comprehensive risk management strategies and continuous monitoring systems, organizations can maintain high standards of care while leveraging the benefits of AI technology. The key is to approach AI implementation as an ongoing process rather than a one-time deployment, ensuring sustained performance and safety throughout the system's lifecycle.
Editorial note:
This article is based on the corresponding presentation during MedtecLIVE 2024 (18 to 20 June 2024) and was created with the support of AI. The supporting programme of MedtecLIVE 2026, which will take place from 5 to 7 May 2026 in Stuttgart, also offers numerous lectures. The trade fair brings together suppliers and providers from the development and production of medical technology, OEMs, distributors, and other players in the medical technology community.