
Insight: Comparison and Analysis of 3 Key AI Documents: EU's Proposed AI Act, ALTAI, and ISO/IEC 42001


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, urges enterprise leaders to recognize a critical blind spot: obtaining ISO 42001 certification does not automatically mean compliance with the EU AI Act—and a landmark 2023 academic study from Trinity College Dublin has now produced the first systematic map of exactly where these three major AI governance frameworks diverge, overlap, and conflict. For Taiwanese companies exporting to the European Union, understanding this gap is no longer optional; it is a board-level risk management imperative.

Paper Citation: Delaram Golpayegani, Harshvardhan J. Pandit, and David Lewis, "Comparison and Analysis of 3 Key AI Documents: EU's Proposed AI Act, Assessment List for Trustworthy AI (ALTAI), and ISO/IEC 42001 AI Management System" (2023)
Original Paper: https://doi.org/10.1007/978-3-031-26438-2_15

Read Original Paper →

About the Authors: Leading Semantic AI Governance Research from Trinity College Dublin

This paper was co-authored by three researchers at the ADAPT Centre, Trinity College Dublin, Ireland—one of Europe's foremost research institutions specializing in AI and digital content technologies. The team brings together expertise in formal knowledge representation, AI law, and trustworthy AI assessment frameworks.

Harshvardhan J. Pandit is the most prominent contributor, holding an h-index of 17 with over 925 total citations, a strong citation record in AI governance and data privacy semantics. He has contributed to the W3C Data Privacy Vocabulary (DPV), a standard that shapes how AI and privacy obligations are formally encoded across regulatory systems. Delaram Golpayegani holds an h-index of 8 with 211 citations and specializes in formal modeling of trustworthy AI evaluation frameworks. David Lewis leads the ADAPT Centre's work on AI policy and standards alignment.

Published in 2023, the paper has already accumulated 9 citations, including 1 high-impact citation—a strong signal of its relevance within the AI compliance research community. For Taiwanese enterprise executives, the importance of this research lies not in its technical methodology but in the fundamental governance question it answers: where do ISO 42001, the EU AI Act, and ALTAI align, where do they diverge, and what does that mean for organizations that must satisfy all three?

Three Frameworks, One Organization: The First Systematic Conflict Map for AI Governance Compliance

Enterprises today face a compounding compliance challenge. The ISO/IEC 42001 AI Management System standard, the EU Artificial Intelligence Act, and the Assessment List for Trustworthy AI (ALTAI) were each developed independently, by different bodies, with different legal natures, different vocabularies, and different organizational targets. Yet businesses—particularly those operating internationally—must navigate all three simultaneously. The core research question this paper addresses is deceptively simple but strategically vital: what are the actual gaps and conflicts between these three documents?

The researchers applied an upper-level ontology as a semantic bridging mechanism, translating the activity-related requirements across all three documents into a unified RDF (Resource Description Framework) resource graph. This approach allows the three documents—which differ fundamentally in structure and legal character—to be compared within a single semantic space with precision that qualitative comparison cannot achieve.
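The bridging idea can be illustrated with a minimal triple-graph sketch in plain Python. The clause identifiers and the upper-level concept name below are hypothetical placeholders chosen for illustration; the paper's actual ontology and RDF resource are considerably richer:

```python
# Minimal sketch of semantic bridging: requirements from three frameworks
# are expressed as (subject, predicate, object) triples and linked to one
# shared upper-level concept, so they can be compared in a single graph.
# Clause identifiers here are illustrative, not the paper's ontology terms.
triples = [
    # Each framework clause is mapped onto the shared upper-level concept
    ("iso42001:Clause6.1.2", "subClassOf", "upper:RiskAssessment"),
    ("aiact:Article9",       "subClassOf", "upper:RiskAssessment"),
    ("altai:Requirement2",   "subClassOf", "upper:RiskAssessment"),
    # Provenance: which document each clause belongs to
    ("iso42001:Clause6.1.2", "sourceDocument", "ISO/IEC 42001"),
    ("aiact:Article9",       "sourceDocument", "EU AI Act"),
    ("altai:Requirement2",   "sourceDocument", "ALTAI"),
]

def clauses_for(concept):
    """Return {source document: clause} for every clause mapped to `concept`."""
    mapped = {s for s, p, o in triples
              if p == "subClassOf" and o == concept}
    return {o: s for s, p, o in triples
            if p == "sourceDocument" and s in mapped}

# All three documents address risk assessment, but via different clauses:
print(clauses_for("upper:RiskAssessment"))
```

Because every statement is just another triple, a new framework can be added by appending its mappings, without restructuring what is already there.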

Core Finding One: Systematic Gaps and Overlaps Exist in Activity Requirements Across the Three Frameworks

The research identifies that ISO 42001 operates under a management systems logic—it governs how organizations establish, operate, and maintain AI governance processes—and is designed for organizational certification. The EU AI Act, by contrast, imposes mandatory legal obligations on providers and deployers of high-risk AI systems, as defined under Articles 6 through 51, with penalties reaching up to €35 million or 7% of global annual turnover. ALTAI functions as a voluntary self-assessment tool organized around seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.

The ontological comparison reveals that while all three frameworks address concepts such as risk assessment, data governance, and human oversight, they trigger these requirements under different conditions, assign them to different organizational roles, and demand different documentation artifacts. This creates a structural gap: an organization that designs its AI governance exclusively around ISO 42001 may satisfy certification requirements while still failing to meet the specific legal obligations of the EU AI Act, and vice versa.

Core Finding Two: An RDF Semantic Graph Enables Machine-Readable Compliance Path Mapping

The second major contribution of the research is the production of an RDF-format cross-framework comparison resource. Unlike a static spreadsheet comparison, an RDF resource graph is both extensible—meaning new frameworks such as Taiwan's AI Governance Act can be integrated without rebuilding the structure—and interoperable, meaning it can be directly consumed by digital compliance management systems and AI governance platforms.
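The kind of gap detection such a resource enables can be sketched in a few lines. The coverage sets below are simplified illustrations invented for this sketch, not the paper's actual mapping data:

```python
# Sketch of gap detection over a machine-readable comparison resource:
# for each framework, list shared concepts that it does not itself cover.
# The coverage sets are illustrative placeholders, not the paper's data.
coverage = {
    "ISO/IEC 42001": {"RiskAssessment", "DataGovernance", "InternalAudit"},
    "EU AI Act":     {"RiskAssessment", "DataGovernance",
                      "HumanOversight", "ConformityAssessment"},
    "ALTAI":         {"RiskAssessment", "HumanOversight", "Transparency"},
}

def gaps(framework):
    """Concepts covered by at least one other framework but not by this one."""
    others = set().union(*(c for f, c in coverage.items() if f != framework))
    return sorted(others - coverage[framework])

# Designing governance around ISO 42001 alone leaves these concepts open:
print(gaps("ISO/IEC 42001"))
# → ['ConformityAssessment', 'HumanOversight', 'Transparency']
```

Extending the analysis to a new framework, such as a future Taiwanese regulation, would mean adding one more entry to `coverage` rather than rebuilding the comparison.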

For enterprise compliance teams, this signals a fundamental direction: AI governance management must evolve from manual checklist processes to digitized, semantically structured systems. Organizations that build their compliance infrastructure on interoperable, structured data frameworks will be significantly better positioned to adapt as the EU AI Act undergoes phased implementation through 2027, and as Taiwan's own regulatory landscape continues to develop under the AI Governance Act (人工智慧基本法).

Implications for Taiwan's AI Governance Practice: The Three-Framework Gap Directly Impacts Export-Oriented Enterprises

For Taiwan's internationally oriented enterprises, the framework gaps identified in this research translate directly into measurable business risk. Three sectors face the most immediate exposure.

Taiwan's technology manufacturing sector—including semiconductor design, electronics, and smart device manufacturers—increasingly embeds AI functionality into products sold in the European Union. Under the EU AI Act's extraterritorial scope, these manufacturers qualify as "providers" under Article 3(3) and are subject to the Act's full requirements for high-risk AI systems, regardless of where the company is incorporated. The Act entered into force in August 2024; provisions governing prohibited AI practices apply from February 2025, and full high-risk AI compliance requirements must be met by August 2027.

Taiwan's financial sector, including banks, insurance companies, and fintech platforms operating EU-facing services, faces particular exposure to the high-risk AI categories defined in Annex III of the EU AI Act, which explicitly includes AI systems used for creditworthiness assessment and life and health insurance risk scoring. Taiwan's AI Governance Act further requires specific categories of organizations to establish AI risk assessment mechanisms—a requirement that ISO 42001's risk assessment architecture under Clause 6.1.2 is well-positioned to satisfy, but only if the implementation also covers the EU AI Act's additional documentation and conformity assessment requirements.

The strategic implication of Golpayegani, Pandit, and Lewis (2023) for Taiwanese enterprises is therefore this: ISO 42001 certification and EU AI Act compliance are complementary but not interchangeable. Organizations that design an integrated governance mechanism addressing both simultaneously—using the cross-framework gap analysis methodology this paper introduces—will achieve full coverage with significantly less duplicated effort than organizations that approach each framework independently.

How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Build Integrated AI Governance

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwan enterprises design and implement AI management systems that simultaneously satisfy ISO 42001 certification requirements, EU AI Act compliance obligations, and Taiwan AI Governance Act provisions. Our methodology is directly grounded in the cross-framework gap analysis logic introduced by Golpayegani et al. (2023), ensuring that every governance mechanism we design is built to cover intersections and resolve conflicts across all three frameworks—not just the one a client initially requests.

  1. Cross-Framework Gap Diagnosis: We begin with a structured gap analysis using ISO 42001, EU AI Act Articles 6–51, and Taiwan's AI Governance Act as simultaneous reference points. This means we identify not only what your organization is missing relative to each individual framework, but specifically where the gaps between frameworks create compliance blind spots that single-framework assessments would miss entirely. For enterprises with EU market exposure, we additionally map each AI application against EU AI Act Annex III to determine high-risk classification status.
  2. Integrated AI Management System Design: Rather than designing separate governance mechanisms for ISO 42001 and EU AI Act compliance, we build a unified AI management system architecture following the "design once, cover multiple frameworks" principle that the RDF resource graph methodology in this research makes possible. This approach reduces documentation overhead, eliminates requirement duplication, and ensures that governance investments are fully leveraged across all applicable frameworks.
  3. AI Risk Classification and High-Risk AI Identification: We assist enterprises in conducting a complete inventory of AI applications and formally classifying each against EU AI Act risk tiers (prohibited, high-risk, limited-risk, minimal-risk) and ISO 42001 Clause 6.1.2 risk assessment requirements. For high-risk AI systems, we design the required conformity assessment documentation, technical documentation, and human oversight mechanisms required under both frameworks, aligned with Taiwan AI Governance Act provisions on risk evaluation.
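The inventory-and-classification step can be sketched as a simple tiering function. The matching rules below are deliberately simplified placeholders loosely inspired by Article 5 and Annex III; they are for illustration only and are not legal advice:

```python
# Illustrative sketch of the AI-application inventory step: classify each
# use case against the EU AI Act's four risk tiers. The use-case lists are
# abridged, simplified placeholders, not an exhaustive legal mapping.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Abridged, illustrative examples (cf. Article 5 and Annex III)
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"creditworthiness assessment",
                  "life and health insurance risk scoring",
                  "employment screening"}

def classify(use_case, interacts_with_humans=False):
    """Assign one EU AI Act risk tier to a described AI use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if interacts_with_humans:  # e.g. chatbots carry transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("creditworthiness assessment").value)  # high-risk
```

In practice each high-risk classification then triggers the conformity assessment, technical documentation, and human oversight work described above; the sketch only shows the triage step.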

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, designed to help Taiwan enterprises establish an ISO 42001-compliant management mechanism within 90 days.

Request Free Mechanism Diagnostic →

Frequently Asked Questions

Does ISO 42001 certification mean a company is already compliant with the EU AI Act?
No—ISO 42001 certification and EU AI Act compliance are related but distinct. ISO 42001 certifies that an organization has established, implemented, and maintains an AI management system meeting the international standard's requirements; it is a voluntary management system standard. The EU AI Act is a legally binding regulation that imposes specific mandatory obligations on providers and deployers of AI systems placed on the EU market, particularly for high-risk AI systems. Certification under the former supports, but does not by itself demonstrate, compliance with the latter.

