Winners Consulting Services Co., Ltd. points out that a 2025 study, "Data Governance in Healthcare AI: Navigating the EU AI Act's Requirements" (published in Studies in Health Technology and Informatics, per the DOI below), reveals a core contradiction that Taiwanese medical technology companies must confront: the data governance compliance requirements for high-risk AI systems under the EU AI Act are far more complex than companies anticipate. Furthermore, the data management frameworks of most current healthcare institutions cannot effectively meet the dual requirements of ISO 42001 and the EU AI Act. This study, which has already been cited 6 times, offers direct reference value for Taiwan's medical AI export strategy.
Source Paper: Data Governance in Healthcare AI: Navigating the EU AI Act's Requirements. (Konstantinos Kalodanis, G. Feretzakis, Panagiotis Rizomiliotis, Studies in Health Technology and Informatics, 2025)
Original Link: https://doi.org/10.3233/SHTI250050
About the Authors and This Study
The lead author, Konstantinos Kalodanis, has an academic h-index of 6 and a total of 122 citations, demonstrating a solid foundation in data governance and information security. Co-authors G. Feretzakis and Panagiotis Rizomiliotis have expertise in health informatics and information security, respectively. Together, the three authors approach the topic from the practical perspective of medical data governance, examining the specific compliance requirements of the EU AI Act for healthcare institutions and medical technology companies.
Notably, this paper's perspective is not purely a legal interpretation but an attempt to map the regulatory requirements of the EU AI Act to the operational level of medical data governance—this is precisely the kind of concrete guidance that Taiwanese medical AI companies lack when evaluating entry into the EU market. The paper has been cited 6 times, marking it as a noteworthy early result among similar studies in 2025 that warrants continued attention.
From an evaluator's standpoint, the study's contribution lies in building a bridge: it attempts to translate abstract legal articles into executable standards for medical data governance. However, we must also objectively point out that since the authors' primary frame of reference is the European healthcare system, the applicability of some recommendations to Taiwan's decentralized healthcare system (including the NHIA, various levels of hospitals, and med-tech startups) will require localization.
The Three Tiers of Compliance Pressure in Medical AI Data Governance: What Does the EU AI Act Require?
The core insight of this study is that the EU AI Act's data governance requirements for medical AI are not one-dimensional but consist of three interconnected tiers. Companies must address all three simultaneously to truly meet regulatory expectations.
Key Finding 1: Data Quality Has Been Elevated to a Legal Obligation
The EU AI Act explicitly requires that high-risk AI systems (which, according to Annex III of the Act, include AI for medical diagnosis and treatment support) must establish strict quality control mechanisms for training data. This is not just a technical requirement but a legal responsibility. The paper points out that training data for medical AI must be representative, unbiased, and traceable, yet most current healthcare data warehouse architectures are not designed for this. In other words, if companies do not establish a formal data ownership mechanism and data governance framework, they will face a substantial compliance gap when the EU AI Act fully comes into effect (August 2026). ISO 42001 Clause 6.1.2 requires companies to conduct AI risk assessments, with data quality risk being a core evaluation item.
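To make the "representative, unbiased, and traceable" requirement concrete, the sketch below shows what a minimal automated check over a training-data manifest might look like. This is an illustrative assumption, not a procedure prescribed by Article 10, the paper, or ISO 42001: the field names (`sex`, `source_site`, `consent_id`, `acquired_at`) and the representation threshold are hypothetical choices a team would adapt to its own data architecture.

```python
from collections import Counter

# Hypothetical sketch: automated quality checks on a training-data manifest.
# Field names and thresholds are illustrative, not mandated by Article 10.

def subgroup_shares(records, field):
    """Return each subgroup's share of the dataset for a demographic field."""
    counts = Counter(r.get(field, "unknown") for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def representation_gaps(records, field, min_share=0.10):
    """Flag subgroups below a minimum share (a possible bias/representativeness risk)."""
    return {g: s for g, s in subgroup_shares(records, field).items() if s < min_share}

def missing_provenance(records, required=("source_site", "consent_id", "acquired_at")):
    """Flag record indices lacking the provenance fields needed for traceability."""
    return [i for i, r in enumerate(records) if any(not r.get(f) for f in required)]

if __name__ == "__main__":
    manifest = [
        {"sex": "F", "source_site": "hospital_a", "consent_id": "c1", "acquired_at": "2024-01-02"},
        {"sex": "M", "source_site": "hospital_a", "consent_id": "c2", "acquired_at": "2024-01-03"},
        {"sex": "M", "source_site": "", "consent_id": "c3", "acquired_at": "2024-01-04"},
        {"sex": "M", "source_site": "hospital_b", "consent_id": "c4", "acquired_at": "2024-01-05"},
    ]
    print(representation_gaps(manifest, "sex", min_share=0.30))  # {'F': 0.25}
    print(missing_provenance(manifest))                          # [2]
```

Checks like these do not by themselves satisfy the Act, but running them on every dataset release and archiving the results is one practical way to produce the traceable quality-control evidence the regulation expects.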
Key Finding 2: A Gap Exists Between Transparency/Explainability Requirements and Clinical Reality
The study notes that the EU AI Act requires high-risk AI systems to provide sufficient transparency and explainability to support Human Oversight. However, deep learning models, particularly in scenarios like medical image recognition and pathological analysis, inherently have a "black box" problem. The paper objectively acknowledges this as an unresolved tension between technology and regulation: the law demands explainability, but the highest-performing models are often the most difficult to explain. When developing medical AI products, Taiwanese companies must make conscious design trade-offs between model performance and explainability and document these decisions to comply with the transparency requirements of Article 13 of the EU AI Act and the documentation management requirements of ISO 42001.
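One lightweight way to document such trade-offs is a structured design decision record. The sketch below is a hypothetical Python illustration: the schema, field names, candidate models, and metrics are our own assumptions, not a format mandated by Article 13, ISO 42001, or the paper.

```python
import json
from datetime import date

# Hypothetical sketch: a structured "design decision record" capturing the
# performance-vs-explainability trade-off. The schema is our own illustration.

def decision_record(candidates, chosen, rationale, oversight_measures):
    """Capture which model was chosen, the alternatives considered, and why."""
    return {
        "date": date.today().isoformat(),
        "candidates": candidates,          # model name -> {auroc, explainability}
        "chosen": chosen,
        "rationale": rationale,
        "human_oversight": oversight_measures,
    }

record = decision_record(
    candidates={
        "logistic_regression": {"auroc": 0.86, "explainability": "high"},
        "deep_cnn": {"auroc": 0.93, "explainability": "low (post-hoc saliency only)"},
    },
    chosen="deep_cnn",
    rationale="Diagnostic sensitivity outweighs interpretability; mitigations below.",
    oversight_measures=[
        "radiologist reviews every positive finding",
        "saliency maps displayed alongside predictions",
    ],
)
print(json.dumps(record, indent=2))
```

Archiving records like this at each model-selection milestone gives auditors a dated trail showing that the trade-off was a conscious, mitigated decision rather than an omission.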
Key Finding 3: Assigning Responsibility for Data Governance is Particularly Complex in the Healthcare Ecosystem
Medical data often involves multiple stakeholders: patients, healthcare institutions, medical technology companies, and regulatory authorities. The division of responsibilities between the Data Controller and Data Processor, as defined by the EU AI Act and GDPR, becomes extremely complex in the multi-party collaboration scenarios of medical AI. The paper emphasizes that companies must clearly define data ownership and liability scopes at the contractual level; otherwise, in the event of a violation, liability attribution will become a major legal risk.
Implications for AI Governance in Taiwan: Three Actions That Cannot Wait
For Taiwanese medical AI companies planning to enter the EU market or collaborate with EU healthcare institutions, this study's findings mean that compliance preparations must start now, not when the EU AI Act fully takes effect.
First, Taiwan's Ministry of Health and Welfare (MOHW) began promoting a medical AI governance framework in 2024, requiring companies to establish algorithm transparency and validation mechanisms, a direction highly consistent with the EU AI Act. Taiwan's AI Basic Act has also established fundamental principles for AI applications, including transparency, accountability, and human oversight, which align closely with the core requirements of Articles 13 and 14 of the EU AI Act. In other words, if Taiwanese companies can establish an AI management system compliant with ISO 42001, they can use the same framework to address the dual requirements of Taiwan's AI Basic Act and the EU AI Act, which is the most efficient compliance path.
Second, from a constructive critique perspective, we must point out a methodological limitation of this paper: the research assumes a centralized European medical records system (like the NHS or Nordic EHRs) as its default context. In contrast, Taiwan's medical data reality is characterized by high fragmentation and heterogeneity among the National Health Insurance database, individual hospitals' HIS systems, and startups' AI platforms. This means that when applying the paper's recommendations, Taiwanese companies will need an additional layer of "distributed data governance" design. The ISO 42001 framework provides the flexibility for this—companies can design appropriate governance controls based on their own data architecture rather than force-fitting a centralized European model.
Third, AI governance is not just a technical issue but an organizational one. The paper stresses the need for healthcare institutions to establish a cross-functional AI governance committee (including legal, clinical, IT, and compliance departments), which is closely aligned with the spirit of ISO 42001 Clause 5.1, which requires top management to demonstrate commitment to governance. When establishing their AI governance structures, Taiwanese companies should incorporate this organizational design into their early planning.
How Winners Consulting Services Can Help Taiwanese Medical AI Companies Build a Dual-Compliance Framework
Winners Consulting Services Co., Ltd. helps Taiwanese companies establish AI management systems that comply with ISO 42001 and the EU AI Act, conduct AI risk classification assessments, and ensure that their artificial intelligence applications align with the regulations of Taiwan's AI Basic Act. In response to the three tiers of compliance pressure in medical AI data governance revealed by this paper, we offer the following concrete action recommendations:
- Initiate a Medical AI Data Governance Gap Analysis: Assess whether your company's medical AI product falls into the high-risk category by referencing the definitions in Annex III of the EU AI Act. Identify gaps in your current data quality controls, data ownership attribution, and documentation management. Winners Consulting Services provides a structured GAP analysis tool to help companies complete this initial diagnosis within 4 to 6 weeks.
- Establish an ISO 42001-Compliant AI Data Governance Mechanism: Based on the paper's key findings, prioritize the establishment of management mechanisms compliant with Clauses 6 to 8 of ISO 42001, focusing on training data quality control, model explainability documentation, and multi-party liability contract design. Our consulting services can help companies complete system implementation and certification preparation within 7 to 12 months.
- Establish a Cross-Functional AI Governance Committee: In accordance with ISO 42001 Clause 5, we assist companies in designing a cross-functional governance structure that includes legal, clinical advisors, information security, and a chief compliance officer. This ensures that AI risk management decisions have sufficient organizational support and can effectively meet the human oversight requirements of the EU AI Act and the accountability requirements of Taiwan's AI Basic Act.
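The Annex III screening step in the first action above can be sketched as a simple decision helper. This is an illustrative sketch only: the use-case list is abbreviated and keyword-matched for demonstration, whereas real classification requires legal review of the full Annex III text and the medical device regulations.

```python
# Hypothetical sketch of an initial high-risk screening step. The use-case
# list is abbreviated; a real assessment needs legal review, not a lookup.

ANNEX_III_MEDICAL_USES = {
    "medical diagnosis",
    "treatment recommendation",
    "patient risk assessment",
}

def screen_high_risk(intended_use, significant_health_risk, medical_device_conformity):
    """Apply three screening questions; return (is_high_risk, reasons)."""
    reasons = []
    if intended_use in ANNEX_III_MEDICAL_USES:
        reasons.append("intended use matches an Annex III category")
    if significant_health_risk:
        reasons.append("significant risk to health or safety")
    if medical_device_conformity:
        reasons.append("third-party conformity assessment as a medical device")
    return bool(reasons), reasons

flagged, why = screen_high_risk(
    "medical diagnosis",
    significant_health_risk=True,
    medical_device_conformity=True,
)
print(flagged, why)
```

A helper like this is useful as a triage questionnaire inside a gap analysis workbook; any product it flags still needs professional legal confirmation before compliance scoping.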
Winners Consulting Services Co., Ltd. offers a free AI governance mechanism diagnosis to help Taiwanese companies establish an ISO 42001-compliant management system in 7 to 12 months.
Learn About AI Governance Services → Apply for a Free Diagnosis Now →

Frequently Asked Questions
- Are medical AI systems always classified as high-risk under the EU AI Act? How can Taiwanese companies confirm their product's classification?
- Under Annex III of the EU AI Act, AI systems intended for medical diagnosis, treatment recommendations, or patient risk assessment are generally classified as high-risk AI systems, subjecting them to the strictest compliance obligations. Taiwanese companies can confirm their classification through a three-step process: first, check against the specific use cases listed in Annex III; second, assess whether the AI system poses a significant risk to health or safety; and third, determine if the system is subject to a third-party conformity assessment as a medical device. If a product falls into the high-risk category, the company must establish technical documentation, data governance mechanisms, explainability statements, and human oversight procedures compliant with Articles 9 to 15 of the Act. It is advisable to engage professional consultants for a risk classification assessment early to avoid discovering compliance gaps after market launch.
- What are the most common compliance challenges for Taiwanese medical AI companies when implementing ISO 42001?
- Taiwanese medical AI companies typically face three main challenges when implementing ISO 42001. First is an incomplete data governance framework: many companies lack formal data ownership policies and data quality control processes, failing to meet the data management requirements of ISO 42001 Clause 8.4 and the legal requirements for training data quality under Article 10 of the EU AI Act. Second is a non-systematic risk assessment process: while many firms conduct technical security tests, they often lack a structured mechanism for evaluating ethical, bias, and compliance risks as required by ISO 42001 Clause 6.1.2. Third is insufficient documentation: the EU AI Act requires that technical documentation for high-risk AI be available for inspection, and Taiwan's AI Basic Act also emphasizes transparency and accountability, yet many Taiwanese companies lack the habit of documenting key decisions during development.
- What are the core requirements of ISO 42001 certification, and how long does it typically take for a Taiwanese company to complete?
- ISO 42001, the world's first international standard for AI management systems, has five core requirements: top management commitment (Clause 5), assessment of AI risks and opportunities (Clause 6.1), AI objectives and planning (Clause 6.2), AI system lifecycle management (Clause 8), and a continual improvement mechanism (Clause 10). For a Taiwanese company, the process from the initial GAP analysis to passing the third-party certification audit typically takes 7 to 12 months. The first 3 months focus on diagnosis and policy design; months 4 to 8 are for mechanism establishment and documentation; and months 9 to 12 are for internal audits and external validation. The timeline can vary based on company size and AI application complexity, with medical AI companies often needing the full 12-month cycle to align with the EU AI Act's technical documentation requirements.
- How should Taiwanese companies evaluate the costs and expected benefits of establishing a medical AI governance framework?
- The initial investment for establishing a medical AI governance framework compliant with ISO 42001 and the EU AI Act primarily includes consulting fees, internal training costs, and labor for system documentation. For a mid-sized med-tech company, implementation costs typically range from NT$1 million to NT$3 million. However, the benefits are significant across at least three dimensions. First, non-compliance with the EU AI Act can result in fines of up to €15 million or 3% of global annual turnover for breaches of high-risk obligations (and up to 7% for prohibited practices), a risk that compliance investment effectively mitigates. Second, ISO 42001 certification serves as a trust credential for entering the EU market, shortening customer due diligence cycles. Third, Taiwan's MOHW now requires medical AI products to demonstrate governance credibility, and certification can help accelerate local market approval. For medical AI companies with EU market ambitions, the return on compliance investment is quite clear.
- Why choose Winners Consulting Services for assistance with AI governance issues?
- Winners Consulting Services Co., Ltd. specializes in AI governance and ISO 42001 compliance consulting, offering cross-disciplinary expertise to address ISO standards, EU AI Act regulations, and Taiwan's AI Basic Act requirements simultaneously. Our consulting team possesses a hybrid background in information security, regulatory compliance, and AI technology, enabling us to help companies establish a third-party verifiable AI management system within 7 to 12 months, rather than just providing document templates. For medical AI companies, we offer additional services tailored to the EU AI Act's high-risk classification, including technical documentation design, data governance framework planning, and alignment with the MOHW's medical AI governance framework. We believe that effective AI governance is not a compliance cost but a strategic asset for building market trust.
Related Services & Further Reading
Risk Glossary
- Trustworthy AI Assessment List: A concrete assessment tool developed by the EU's High-Level Expert Group on AI (HLEG) to put the Ethics Guidelines for Trustworthy AI into practice. Companies can use the checklist to systematically review whether their AI systems meet the seven key requirements, thereby managing compliance risk, strengthening stakeholder trust, and ensuring technical robustness and safety.
- Semantic interoperability: The ability of different systems to exchange data with an unambiguous, shared meaning. In AI governance and cross-border compliance contexts, it ensures data is interpreted correctly during automated processing and analysis, and is foundational to trustworthy AI and to reducing the risk of data misuse.
- Autonomy over Self-Representation: The right of individuals to control how their identity, experiences, and aspirations are presented and interpreted. In automated decision-making contexts such as AI-driven hiring, this right ensures candidates can represent themselves directly rather than being interpreted one-sidedly by an algorithm. For companies, respecting it is a key risk-management measure for reducing discriminatory bias, complying with data protection regulations, and building trustworthy AI.
- Algorithmic Hiring Assessments: The use of AI models to automatically analyze candidate data (such as resumes and test results) to evaluate suitability. Commonly used in large-scale recruiting to improve efficiency, but companies must guard against discriminatory bias and data protection compliance risks to ensure assessments are fair and transparent.
- Comparative gap analysis: A systematic method for assessing the distance between an organization's current state and multiple target standards (such as ISO 42001 and the EU AI Act). It helps companies identify compliance risks when adopting new technologies like AI, prioritize improvements, and build concrete action plans to close management-system gaps.
Want to apply these insights to your enterprise?
Get a Free Assessment