
Healthcare AI Data Governance Under EU AI Act: ISO 42001 Compliance Guide for Taiwan Enterprises


Winners Consulting Services Co., Ltd. points out that a 2025 study posted to arXiv, "Data Governance in Healthcare AI: Navigating the EU AI Act's Requirements," reveals a core contradiction that Taiwanese medical technology companies must confront: the data governance requirements the EU AI Act imposes on high-risk AI systems are far more complex than companies anticipate, and the data management frameworks of most healthcare institutions today cannot meet the dual requirements of ISO 42001 and the EU AI Act. The study, already cited 6 times, offers directly applicable guidance for Taiwan's medical AI export strategy.

Source Paper: Data Governance in Healthcare AI: Navigating the EU AI Act's Requirements. (Konstantinos Kalodanis, G. Feretzakis, Panagiotis Rizomiliotis, arXiv, 2025)
Original Link: https://doi.org/10.3233/SHTI250050

Read Original Paper →

About the Authors and This Study

The lead author, Konstantinos Kalodanis, has an academic h-index of 6 and a total of 122 citations, demonstrating a solid foundation in data governance and information security. Co-authors G. Feretzakis and Panagiotis Rizomiliotis have expertise in health informatics and information security, respectively. Together, the three authors approach the topic from the practical perspective of medical data governance, examining the specific compliance requirements of the EU AI Act for healthcare institutions and medical technology companies.

Notably, this paper's perspective is not purely a legal interpretation but an attempt to map the regulatory requirements of the EU AI Act to the operational level of medical data governance—this is precisely the kind of concrete guidance that Taiwanese medical AI companies lack when evaluating entry into the EU market. The paper has been cited 6 times, marking it as a noteworthy early result among similar studies in 2025 that warrants continued attention.

From an evaluator's standpoint, the study's contribution lies in building a bridge: it attempts to translate abstract legal articles into executable standards for medical data governance. However, we must also objectively point out that since the authors' primary frame of reference is the European healthcare system, the applicability of some recommendations to Taiwan's decentralized healthcare system (including the NHIA, various levels of hospitals, and med-tech startups) will require localization.

The Three Tiers of Compliance Pressure in Medical AI Data Governance: What Does the EU AI Act Require?

The core insight of this study is that the EU AI Act's data governance requirements for medical AI are not one-dimensional but consist of three interconnected tiers. Companies must address all three simultaneously to truly meet regulatory expectations.

Key Finding 1: Data Quality Has Been Elevated to a Legal Obligation

The EU AI Act explicitly requires that high-risk AI systems (which, according to Annex III of the Act, include AI for medical diagnosis and treatment support) must establish strict quality control mechanisms for training data. This is not just a technical requirement but a legal responsibility. The paper points out that training data for medical AI must be representative, unbiased, and traceable, yet most current healthcare data warehouse architectures are not designed for this. In other words, if companies do not establish a formal data ownership mechanism and data governance framework, they will face a substantial compliance gap when the EU AI Act fully comes into effect (August 2026). ISO 42001 Clause 6.1.2 requires companies to conduct AI risk assessments, with data quality risk being a core evaluation item.
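The paragraph above names three properties the Act demands of training data: representativeness, absence of bias, and traceability. As a hedged illustration of what an automated pre-compliance audit could look like (the record fields, the 80% threshold, and the use of sex as the representativeness attribute are our own assumptions, not terms from the Act), consider:

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str       # originating institution (traceability)
    sex: str          # demographic attribute used for the representativeness check
    consent_ref: str  # link to the consent / lawful-basis record

def audit_training_data(records, max_group_share=0.8):
    """Flag two gaps that the Act's data-governance duties make legally
    relevant: untraceable records and demographic skew in the dataset."""
    findings = []
    untraceable = [r for r in records if not r.source or not r.consent_ref]
    if untraceable:
        findings.append(f"{len(untraceable)} records lack provenance or consent references")
    counts = {}
    for r in records:
        counts[r.sex] = counts.get(r.sex, 0) + 1
    for group, n in counts.items():
        if n / len(records) > max_group_share:
            findings.append(f"group '{group}' makes up {n/len(records):.0%} of the data")
    return findings
```

A real deployment would check many more attributes (age, comorbidities, imaging device vendor), but the point stands: these checks must be designed into the data warehouse, not bolted on at audit time.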

Key Finding 2: A Gap Exists Between Transparency/Explainability Requirements and Clinical Reality

The study notes that the EU AI Act requires high-risk AI systems to provide sufficient transparency and explainability to support Human Oversight. However, deep learning models, particularly in scenarios like medical image recognition and pathological analysis, inherently have a "black box" problem. The paper objectively acknowledges this as an unresolved tension between technology and regulation: the law demands explainability, but the highest-performing models are often the most difficult to explain. When developing medical AI products, Taiwanese companies must make conscious design trade-offs between model performance and explainability and document these decisions to comply with the transparency requirements of Article 13 of the EU AI Act and the documentation management requirements of ISO 42001.
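Documenting those trade-off decisions is itself a concrete engineering task. A minimal sketch of a design-decision log entry follows; the field names are illustrative assumptions, not an official Article 13 schema:

```python
import json
from datetime import date

def log_design_tradeoff(model_name, auroc, explainability_method, rationale):
    """Serialize one performance-vs-explainability decision for the
    technical file. Field names are illustrative, not a legal schema."""
    entry = {
        "recorded_at": date.today().isoformat(),
        "model": model_name,
        "performance_auroc": auroc,
        "explainability_method": explainability_method,
        "rationale": rationale,
    }
    return json.dumps(entry, ensure_ascii=False)
```

For example, a team might record choosing a gradient-boosted model with SHAP attributions over a marginally better but opaque ensemble; the written rationale is what an auditor will ask to see.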

Key Finding 3: Assigning Responsibility for Data Governance is Particularly Complex in the Healthcare Ecosystem

Medical data often involves multiple stakeholders: patients, healthcare institutions, medical technology companies, and regulatory authorities. The division of responsibilities between the Data Controller and Data Processor, as defined by the EU AI Act and GDPR, becomes extremely complex in the multi-party collaboration scenarios of medical AI. The paper emphasizes that companies must clearly define data ownership and liability scopes at the contractual level; otherwise, in the event of a violation, liability attribution will become a major legal risk.
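The contractual clarity the paper calls for can be made auditable. As a sketch (party dictionaries and field names are hypothetical, not a legal template), a pre-signature check of a multi-party agreement might verify that every party has a defined GDPR role and every processor a signed data processing agreement:

```python
def validate_data_agreement(parties):
    """Flag gaps in a multi-party agreement's GDPR role assignments.
    'parties' is a list of dicts; the field names are hypothetical."""
    issues = []
    if not any(p.get("role") == "controller" for p in parties):
        issues.append("no data controller designated")
    for p in parties:
        name = p.get("name", "?")
        if p.get("role") not in ("controller", "processor"):
            issues.append(f"{name}: GDPR role undefined")
        elif p.get("role") == "processor" and not p.get("dpa_signed"):
            issues.append(f"{name}: processor without a signed data processing agreement")
    return issues
```

Such a check does not replace legal counsel, but it turns "clearly define data ownership at the contractual level" into something a compliance team can run on every new collaboration.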

Implications for AI Governance in Taiwan: Three Actions That Cannot Wait

For Taiwanese medical AI companies planning to enter the EU market or collaborate with EU healthcare institutions, this study's findings mean that compliance preparations must start now, not when the EU AI Act fully takes effect.

First, Taiwan's Ministry of Health and Welfare (MOHW) began promoting a medical AI governance framework in 2024, requiring companies to establish algorithm transparency and validation mechanisms, a direction highly consistent with the EU AI Act. Taiwan's AI Basic Act likewise establishes fundamental principles for AI applications, including transparency, accountability, and human oversight, which closely align with the core requirements of Articles 13 and 14 of the EU AI Act. In other words, a Taiwanese company that establishes an AI management system compliant with ISO 42001 can use the same framework to address both Taiwan's AI Basic Act and the EU AI Act, which is the most efficient compliance path.

Second, from a constructive critique perspective, we must point out a methodological limitation of this paper: the research assumes a centralized European medical records system (like the NHS or Nordic EHRs) as its default context. In contrast, Taiwan's medical data reality is characterized by high fragmentation and heterogeneity among the National Health Insurance database, individual hospitals' HIS systems, and startups' AI platforms. This means that when applying the paper's recommendations, Taiwanese companies will need an additional layer of "distributed data governance" design. The ISO 42001 framework provides the flexibility for this—companies can design appropriate governance controls based on their own data architecture rather than force-fitting a centralized European model.

Third, AI governance is not just a technical issue but an organizational one. The paper stresses the need for healthcare institutions to establish a cross-functional AI governance committee (including legal, clinical, IT, and compliance departments), which aligns with ISO 42001 Clause 5.1's requirement that top management demonstrate leadership and commitment to the AI management system. Taiwanese companies should incorporate this organizational design into the early planning of their AI governance structures.

How Winners Consulting Services Can Help Taiwanese Medical AI Companies Build a Dual-Compliance Framework

Winners Consulting Services Co., Ltd. helps Taiwanese companies establish AI management systems that comply with ISO 42001 and the EU AI Act, conduct AI risk classification assessments, and ensure that their artificial intelligence applications align with the regulations of Taiwan's AI Basic Act. In response to the three tiers of compliance pressure in medical AI data governance revealed by this paper, we offer the following concrete action recommendations:

  1. Initiate a Medical AI Data Governance Gap Analysis: Assess whether your company's medical AI product falls into the high-risk category by referencing the definitions in Annex III of the EU AI Act, and identify gaps in your current data quality controls, data ownership attribution, and documentation management. Winners Consulting Services provides a structured gap analysis tool to help companies complete this initial diagnosis within 4 to 6 weeks.
  2. Establish an ISO 42001-Compliant AI Data Governance Mechanism: Based on the paper's key findings, prioritize the establishment of management mechanisms compliant with Clauses 6 to 8 of ISO 42001, focusing on training data quality control, model explainability documentation, and multi-party liability contract design. Our consulting services can help companies complete system implementation and certification preparation within 7 to 12 months.
  3. Establish a Cross-Functional AI Governance Committee: In accordance with ISO 42001 Clause 5, we assist companies in designing a cross-functional governance structure that includes legal counsel, clinical advisors, information security staff, and a chief compliance officer. This ensures that AI risk management decisions have sufficient organizational support and can meet the human oversight requirements of the EU AI Act and the accountability requirements of Taiwan's AI Basic Act.
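The three recommendations above can be condensed into a self-assessment checklist a company runs before engaging consultants. A minimal sketch follows; the item keys are our own shorthand, not clause numbers from ISO 42001 or the EU AI Act:

```python
# Illustrative self-assessment items derived from the three recommendations
# above; the keys are shorthand of our own, not official clause references.
CHECKLIST = {
    "annex_iii_classification": "high-risk classification reviewed against Annex III",
    "data_ownership_policy": "formal data ownership policy in place",
    "training_data_quality": "training data quality controls documented",
    "explainability_records": "model explainability decisions recorded",
    "liability_contracts": "multi-party liability clauses agreed",
    "governance_committee": "cross-functional AI governance committee chartered",
}

def open_gaps(status):
    """Return descriptions of checklist items not yet marked complete."""
    return [desc for key, desc in CHECKLIST.items() if not status.get(key)]
```

A company that has only classified its product and chartered a committee would see four open items, which maps directly onto the scope of a gap analysis engagement.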

Winners Consulting Services Co., Ltd. offers a free AI governance mechanism diagnosis to help Taiwanese companies establish an ISO 42001-compliant management system in 7 to 12 months.

Learn About AI Governance Services → Apply for a Free Diagnosis Now →

Frequently Asked Questions

Are medical AI systems always classified as high-risk under the EU AI Act? How can Taiwanese companies confirm their product's classification?
Under Annex III of the EU AI Act, AI systems intended for medical diagnosis, treatment recommendations, or patient risk assessment are generally classified as high-risk AI systems, subjecting them to the strictest compliance obligations. Taiwanese companies can confirm their classification through a three-step process: first, check against the specific use cases listed in Annex III; second, assess whether the AI system poses a significant risk to health or safety; and third, determine if the system is subject to a third-party conformity assessment as a medical device. If a product falls into the high-risk category, the company must establish technical documentation, data governance mechanisms, explainability statements, and human oversight procedures compliant with Articles 9 to 15 of the Act. It is advisable to engage professional consultants for a risk classification assessment early to avoid discovering compliance gaps after market launch.
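The three-step screen described above can be sketched as a simple decision function. This is a deliberate simplification for illustration only, not a substitute for legal analysis of Article 6 and Annexes I and III:

```python
def classify_medical_ai(listed_in_annex_iii, poses_significant_risk,
                        needs_third_party_assessment):
    """Simplified high-risk screen following the three steps above."""
    if needs_third_party_assessment:
        # Medical devices subject to third-party conformity assessment
        # fall under the medical-device high-risk route of the Act.
        return "high-risk (medical device route)"
    if listed_in_annex_iii:
        if poses_significant_risk:
            return "high-risk (Annex III)"
        # Art. 6(3) permits a documented derogation for Annex III systems
        # posing no significant risk; that assessment must be recorded.
        return "document a possible Art. 6(3) derogation"
    return "not high-risk under this screen"
```

In practice the inputs themselves (Is the use case really within an Annex III category? Is the risk "significant"?) are the hard legal questions, which is why an early professional assessment is advised.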
What are the most common compliance challenges for Taiwanese medical AI companies when implementing ISO 42001?
Taiwanese medical AI companies typically face three main challenges when implementing ISO 42001. First is an incomplete data governance framework: many companies lack formal data ownership policies and data quality control processes, failing to meet ISO 42001's data management controls (Annex A, control A.7, "Data for AI systems") and the legal requirements for training data quality under Article 10 of the EU AI Act. Second is a non-systematic risk assessment process: while many firms conduct technical security tests, they often lack a structured mechanism for evaluating ethical, bias, and compliance risks as required by ISO 42001 Clause 6.1.2. Third is insufficient documentation: the EU AI Act requires that technical documentation for high-risk AI be available for inspection, and Taiwan's AI Basic Act also emphasizes transparency and accountability, yet many Taiwanese companies lack the habit of documenting key decisions during development.
What are the core requirements of ISO 42001 certification, and how long does it typically take for a Taiwanese company to complete?
ISO 42001, the world's first international standard for AI management systems, has five core requirements: top management commitment (Clause 5), assessment of AI risks and opportunities (Clause 6.1), AI objectives and planning (Clause 6.2), AI system lifecycle management (Clause 8), and a continual improvement mechanism (Clause 10). For a Taiwanese company, the process from the initial gap analysis to passing the third-party certification audit typically takes 7 to 12 months. The first 3 months focus on diagnosis and policy design; months 4 to 8 are for mechanism establishment and documentation; and months 9 to 12 are for internal audits and external validation. The timeline can vary based on company size and AI application complexity, with medical AI companies often needing the full 12-month cycle to align with the EU AI Act's technical documentation requirements.
How should Taiwanese companies evaluate the costs and expected benefits of establishing a medical AI governance framework?
The initial investment for establishing a medical AI governance framework compliant with ISO 42001 and the EU AI Act primarily includes consulting fees, internal training costs, and labor for system documentation. For a mid-sized med-tech company, implementation costs typically range from NT$1 million to NT$3 million. However, the benefits are significant across at least three dimensions. First, non-compliance with the EU AI Act can result in fines of up to €35 million or 7% of global annual turnover for the most serious violations, and up to 3% for most other breaches, a risk that compliance investment effectively mitigates. Second, ISO 42001 certification serves as a trust credential for entering the EU market, shortening customer due diligence cycles. Third, Taiwan's MOHW now requires medical AI products to demonstrate governance credibility, and certification can help accelerate local market approval. For medical AI companies with EU market ambitions, the return on compliance investment is quite clear.
Why choose Winners Consulting Services for assistance with AI governance issues?
Winners Consulting Services Co., Ltd. specializes in AI governance and ISO 42001 compliance consulting, offering cross-disciplinary expertise to address ISO standards, EU AI Act regulations, and Taiwan's AI Basic Act requirements simultaneously. Our consulting team possesses a hybrid background in information security, regulatory compliance, and AI technology, enabling us to help companies establish a third-party verifiable AI management system within 7 to 12 months, rather than just providing document templates. For medical AI companies, we offer additional services tailored to the EU AI Act's high-risk classification, including technical documentation design, data governance framework planning, and alignment with the MOHW's medical AI governance framework. We believe that effective AI governance is not a compliance cost but a strategic asset for building market trust.
