Winners Consulting Services Co., Ltd. advises executives in Taiwan's medical and technology sectors: The EU's Artificial Intelligence Act (EU AI Act) presents three distinct governance gaps in high-risk medical scenarios like cardiology—vague definitions of high-risk applications, unclear standards for algorithmic transparency, and undefined post-market surveillance liability. A 2025 academic study, already cited 13 times, serves as an essential risk map for companies establishing an ISO 42001 compliance framework.
Paper Source: Medicine, healthcare and the AI act: gaps, challenges and future implications (Emmanouil P. Vardas, M. Marketou, P. Vardas, European Heart Journal – Digital Health, 2025)
Original Link: https://doi.org/10.1093/ehjdh/ztaf041
About the Authors and This Study
This paper was co-authored by three scholars from the fields of European cardiology and medical AI governance. Lead author Emmanouil P. Vardas, a prominent figure in European arrhythmology, has an h-index of 20 and 1,932 total citations. Co-author M. Marketou's research also focuses on the clinical translation of cardiac disease treatments, with a continued interest in the practical application of digital health tools.
Published in 2025, this paper uses cardiology as its primary case study to systematically examine the specific requirements of the EU Artificial Intelligence Act (EU AI Act) for medical AI governance, identifying several key gaps that the regulatory text has yet to resolve. As of this writing, the paper has been cited 13 times, indicating its significant reference value within the medical AI regulatory research community.
Notably, the authors approach the topic from the perspective of clinical practitioners rather than purely legal scholars. This gives their analysis a rare practical applicability, as it explores not just how regulations are written but also the hard-to-quantify compliance challenges that healthcare institutions face in their daily AI deployments.
The EU AI Act's Three Structural Gaps in Medical Scenarios: Core Insights from the Paper
The study's most significant contribution is its systematic identification, using cardiology as a concrete example, of the systemic voids in the EU AI Act's governance of medical AI. These are not technical issues but rather structural contradictions within the legislative design itself.
Gap 1: Vague High-Risk Application Criteria Make Self-Assessment Difficult for Institutions
Although the EU AI Act establishes a conceptual framework for high-risk AI systems and lists high-risk categories in Annex III, the boundary between "diagnostic assistance," "clinical decision support," and "mere information presentation" for medical AI remains unclear. For instance, how should the risk level of an arrhythmia detection AI be classified when it provides a "recommendation" rather than a "decision"? The authors point out that this ambiguity could lead different institutions to make vastly different compliance judgments for the same tool, creating opportunities for regulatory arbitrage. For Taiwanese companies, this means they cannot rely solely on regulatory checklists for passive compliance but must establish proactive risk assessment mechanisms.
Gap 2: Tension Between Algorithmic Transparency Requirements and Medical Practice
The EU AI Act requires high-risk AI systems to be explainable and transparent. However, the "black-box" nature of deep learning models in medical tasks like image interpretation and ECG analysis makes full transparency technically challenging. The paper notes that the current regulation fails to provide a practical compliance pathway for this technical reality. Companies face a dilemma: overemphasizing transparency may sacrifice model performance, yet performance is crucial for patient safety. This contradiction requires companies to preemptively build a "mechanism for explaining technical limitations" into their AI governance frameworks, rather than waiting for case-by-case regulatory rulings.
Gap 3: Unclear Post-Market Surveillance and Liability for Errors
The paper specifically highlights that while the EU AI Act has clear requirements for Post-Market Surveillance of AI systems after deployment, there is still a lack of clear legal guidance on how liability should be distributed among the "AI developer," "healthcare institution," and "clinician" when an error occurs in clinical use. Furthermore, disparities in resources among EU member states could lead to inconsistent enforcement standards, further complicating compliance for cross-border medical AI products.
Implications for AI Governance in Taiwan: Learning from EU Gaps to Inform Local Strategy
The three gaps revealed in the paper offer direct strategic implications for both medical AI firms and general technology companies in Taiwan: the uncertainty in EU regulations is the best reason for Taiwanese companies to proactively establish a structured AI governance framework.
First, in the context of Taiwanese law, a draft of Taiwan's Artificial Intelligence Basic Act (AI Basic Act) entered the legislative process in 2024, establishing fundamental principles for AI governance, including core elements like transparency, accountability, and human oversight. These principles are highly consistent with the spirit of the EU AI Act, meaning that if Taiwanese companies move first to build a governance structure compliant with EU standards, they will simultaneously strengthen their readiness for Taiwan's AI Basic Act.
Second, the issue of "vague high-risk definitions" identified in the paper also exists in Taiwan's medical device sector. The Taiwan Food and Drug Administration's (TFDA) regulations for AI medical devices are continuously evolving. Companies that can proactively classify their AI products systematically, referencing the EU AI Act's risk classification logic, will be in an advantageous position as future regulations tighten.
Third, ISO 42001 (Artificial Intelligence Management System standard) provides an operational framework that seamlessly integrates with regulations. ISO 42001 Clause 6.1 requires companies to establish an AI risk assessment process, and Clause 8.4 requires an AI system lifecycle management process. These two requirements directly address the gaps of "high-risk definition" and "post-market surveillance" identified in the paper. In other words, a company certified under ISO 42001 is already institutionally well-prepared to meet the compliance demands of the EU AI Act.
The paper also notes that institutions with different resource levels may exhibit disparities in regulatory enforcement. This is a warning for Taiwan's small and medium-sized medical AI startups: the cost of building a compliance foundation early on is far lower than the cost of adjustments when facing market entry barriers later.
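The proactive classification recommended above can be sketched as a simple internal rubric. This is an illustrative example only: the three assessment dimensions come from the self-assessment guidance later in this article, but the scoring scheme, class names, and thresholds are assumptions for this sketch, not an official EU AI Act or TFDA classification method.

```python
# Illustrative only: a minimal self-assessment rubric for internal AI risk
# classification, loosely inspired by the EU AI Act's Annex III logic.
# The scoring scheme and thresholds are assumptions, not an official method.
from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    influences_clinical_decisions: bool   # does the output feed clinical decisions?
    error_causes_irreversible_harm: bool  # could a failure harm patients irreversibly?
    reduces_clinician_judgment: bool      # does it narrow the clinician's judgment?

    def risk_level(self) -> str:
        score = sum([self.influences_clinical_decisions,
                     self.error_causes_irreversible_harm,
                     self.reduces_clinician_judgment])
        if score >= 2:
            return "high"     # treat as high-risk: assume full EU AI Act obligations
        if score == 1:
            return "limited"  # document the rationale, monitor regulatory updates
        return "minimal"

# Example: an arrhythmia-detection aid that informs but does not decide
tool = AIRiskAssessment(influences_clinical_decisions=True,
                        error_causes_irreversible_harm=True,
                        reduces_clinician_judgment=False)
print(tool.risk_level())  # -> high
```

The value of a rubric like this is less the score itself than the documented, repeatable judgment it forces: the same tool assessed by two departments should land in the same class, which is exactly the regulatory-arbitrage problem the paper identifies.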
How Winners Consulting Services Helps Taiwanese Companies Turn Insights into Action
Winners Consulting Services Co., Ltd. assists Taiwanese companies in establishing AI management systems that comply with ISO 42001 and the EU AI Act, conducting AI risk classification assessments to ensure their artificial intelligence applications align with Taiwan's AI Basic Act. To address the three major medical AI governance gaps revealed in this paper, we offer the following specific services:
- High-Risk AI Application Classification and Diagnosis: Based on Annex III of the EU AI Act and the ISO 42001 risk assessment framework, we help companies systematically inventory their existing AI applications and establish internal risk level classification criteria to avoid compliance gaps caused by ambiguous definitions.
- Establishing Governance Documents for Transparency and Explainability: For AI systems that are difficult to make fully transparent, such as deep learning models, we help companies create a "mechanism for explaining technical limitations" and corresponding user notification processes to meet the EU AI Act's principled requirements for transparency.
- Designing Post-Market Surveillance Mechanisms: In accordance with the lifecycle management requirements of ISO 42001 Clause 8.4, we assist companies in establishing post-deployment performance monitoring indicators, incident reporting procedures, and liability documentation for their AI systems, preparing them in advance for post-market regulatory requirements.
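A post-market surveillance mechanism of the kind described in the last bullet can be reduced, at its core, to comparing live performance metrics against documented baselines and logging every breach as an incident. The sketch below illustrates that core loop; the metric names, thresholds, and incident record format are assumptions for this example, not requirements taken from ISO 42001 or the EU AI Act.

```python
# Illustrative sketch of a post-deployment performance check, in the spirit of
# lifecycle monitoring under ISO 42001 Clause 8.4. All metric names, thresholds,
# and record fields are placeholder assumptions for this example.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MonitoringIncident:
    metric: str
    observed: float
    threshold: float
    logged_on: date = field(default_factory=date.today)

def check_deployment(metrics: dict[str, float],
                     thresholds: dict[str, float]) -> list[MonitoringIncident]:
    """Flag any live metric that falls below its documented baseline threshold."""
    return [MonitoringIncident(name, value, thresholds[name])
            for name, value in metrics.items()
            if name in thresholds and value < thresholds[name]]

# Example: live sensitivity has drifted below the documented baseline
incidents = check_deployment(
    metrics={"sensitivity": 0.88, "specificity": 0.95},
    thresholds={"sensitivity": 0.90, "specificity": 0.92},
)
for inc in incidents:
    print(f"incident: {inc.metric} {inc.observed} < {inc.threshold}")
```

In practice the incident records would feed the reporting procedures and liability documentation mentioned above; the point of the sketch is that monitoring only works if baselines were written down at deployment time.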
Winners Consulting Services Co., Ltd. offers a free AI governance mechanism diagnosis to help Taiwanese companies establish an ISO 42001-compliant management system within 7 to 12 months.
Learn About AI Governance Services → Apply for a Free Diagnosis Now →
Frequently Asked Questions
- How is a "high-risk" medical AI system defined under the EU AI Act? How should Taiwanese companies conduct a self-assessment?
- The EU AI Act's Annex III classifies AI systems in medical devices as high-risk, but the boundary between diagnostic aids and clinical decision support remains ambiguous. Taiwanese companies should assess their systems along three dimensions: whether the AI's output directly influences clinical decisions, whether a system error could cause irreversible harm to patients, and whether the AI significantly narrows the clinician's scope of independent judgment. It is advisable to establish a documented AI risk classification standard, as required by ISO 42001 Clause 6.1, and to review it regularly against evolving regulations rather than treating it as a one-time assessment. Winners Consulting Services can assist in conducting this initial systematic risk diagnosis.
- What are the most common challenges for Taiwanese companies when aligning ISO 42001 implementation with EU AI Act compliance?
- Taiwanese companies typically face three main challenges. First, there is a gap between the transparency required by the EU AI Act and the technical architecture of current AI systems, leaving firms without practical explanatory documentation. Second, while ISO 42001 Clause 8.4 mandates complete AI lifecycle management, most companies only document the development phase and lack post-deployment monitoring mechanisms. Third, a clear mapping is still needed between the principled guidelines of Taiwan's AI Basic Act and the specific obligations of the EU AI Act. Winners Consulting Services recommends using ISO 42001 as a foundational framework to simultaneously address EU AI Act requirements, creating a "one set of documents, dual compliance" governance structure to avoid redundant efforts.
- What are the core requirements for ISO 42001 certification, and how long does the implementation process typically take?
- ISO 42001 is the international standard for an Artificial Intelligence Management System, published in 2023. Its core requirements include establishing an AI governance policy and objectives (Clause 5), a risk assessment and treatment process (Clause 6), AI system lifecycle management (Clause 8), and internal audits and management reviews (Clauses 9 & 10). A full implementation usually takes 7 to 12 months: 2-3 months for a gap analysis, 3-5 months for system design and documentation, and 2-4 months for internal audits, corrective actions, and third-party certification. Companies with existing ISO 9001 or ISO 27001 certifications can often shorten this timeline by 20-30%, as some management processes and documentation structures can be reused and extended.
- How can the resource investment and expected benefits of establishing an EU AI Act compliance framework be evaluated?
- The resource investment for a medium-sized enterprise (100-500 employees) to establish a full ISO 42001 framework is roughly 1-2 person-years of full-time effort, covering consulting, training, and third-party certification audits. The benefits are threefold: first, it mitigates significant financial risk, as non-compliance with the EU AI Act can lead to fines of up to 3-7% of global annual turnover. Second, ISO 42001 certification provides a quantifiable competitive advantage in medical AI procurement evaluations. Third, a structured AI risk management system effectively reduces legal and reputational damage from AI system failures. The return on investment for proactive compliance is typically much higher than the cost of retrofitting systems in response to regulatory enforcement.
- Why choose Winners Consulting Services for assistance with AI governance issues?
- Winners Consulting Services Co., Ltd. offers three key advantages. First, we possess integrated expertise across ISO 42001 standards, EU AI Act compliance logic, and the framework of Taiwan's AI Basic Act, providing holistic governance advice rather than just formal, single-standard compliance. Second, our "diagnosis-first" methodology ensures that all recommendations address genuine governance gaps, preventing wasted resources on unnecessary paperwork. Third, we offer a free AI governance mechanism diagnosis, allowing potential clients to assess our fit before commitment. For Taiwanese medical AI companies, our team understands both clinical application scenarios and regulatory demands, enabling us to design practical governance frameworks and prepare them for ISO 42001 certification within 7 to 12 months.