Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, presents a landmark 2023 study that every Taiwanese enterprise leader should read before 2026. This rigorous academic analysis reveals that the EU AI Act's high-risk AI classification is not a single checklist item but a combination of contextual concepts, and that current international standards, including those underpinning ISO 42001, have a critical gap in AI risk knowledge infrastructure that leaves companies exposed. With 43 citations and 4 high-impact references, this research provides both the diagnostic framework and the practical vocabulary Taiwanese companies need to navigate EU AI Act compliance, ISO 42001 certification, and Taiwan's own AI Governance Law simultaneously.
Paper Citation: To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act's High-Risk AI Applications and Harmonised Standards (Delaram Golpayegani, Harshvardhan J. Pandit, David Lewis; ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023)
Original Paper: https://doi.org/10.1145/3593013.3594050
About the Authors: Leading Voices in AI Regulatory Semantics
This paper emerges from the ADAPT Centre at Trinity College Dublin, one of Europe's foremost research institutions in AI and language technology. Lead author Delaram Golpayegani specialises in semantic modelling for AI regulation, with an h-index of 8 and more than 200 citations, a profile that marks her as a rising authority in machine-readable AI compliance frameworks. Co-author Harshvardhan J. Pandit is widely recognised for his contributions to GDPR and AI Act semantic compliance tools, making complex regulatory requirements accessible for automated processing. David Lewis, a long-standing leader at the ADAPT Centre, has shaped European policy on AI ethics and natural language technology for over two decades.
Published at the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT)—arguably the most prestigious venue in AI ethics and governance research—the paper has accumulated 43 citations with 4 high-impact references, confirming its status as a foundational text for practitioners building AI governance frameworks. For Taiwanese executives, what matters most is that this research bridges the gap between legal text and operational compliance: it does not merely analyse what the EU AI Act says, but constructs tools that help organisations actually act on it.
The Core Discovery: High-Risk AI Is a Concept Combination, Not a Category Label
The central breakthrough of this research is deceptively simple but profoundly consequential: whether an AI system qualifies as "high-risk" under the EU AI Act is not determined by its technology type or industry sector alone, but by a specific combination of contextual concepts defined in Annex III. The research team systematically decomposed Annex III's eight application domains—biometric identification, critical infrastructure, education, employment, essential public services (including credit scoring), law enforcement, migration management, and administration of justice—to identify the "core concepts" whose intersection triggers high-risk classification.
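The concept-combination logic can be sketched as a small rule check. This is a minimal illustration under our own assumptions: the field names and the three-part rule are a simplification for clarity, not the paper's formal model and not legal advice.

```python
from dataclasses import dataclass

# The eight Annex III application domains named in the paper's analysis.
ANNEX_III_DOMAINS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_public_services", "law_enforcement",
    "migration_management", "administration_of_justice",
}

@dataclass
class AISystemContext:
    domain: str                # deployment context (Annex III domain, if any)
    affects_individuals: bool  # are natural persons the affected subjects?
    decision_role: str         # "decides", "assists", or "none"

def is_high_risk_candidate(ctx: AISystemContext) -> bool:
    """High-risk status arises from a combination of contextual concepts,
    not from the domain label alone."""
    return (
        ctx.domain in ANNEX_III_DOMAINS
        and ctx.affects_individuals
        and ctx.decision_role in {"decides", "assists"}
    )

# A resume-screening tool: employment domain, affects individuals,
# assists hiring decisions -> flagged as a high-risk candidate.
screening = AISystemContext("employment", True, "assists")
print(is_high_risk_candidate(screening))  # True
```

The point of the sketch is that removing any one contextual element (say, a system in the employment domain that makes no decisions about individuals) changes the outcome, which is exactly why a single category label is insufficient.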
Finding One: The VAIR Vocabulary Enables Automated Risk Identification
To operationalise their analysis, the research team developed VAIR (Vocabulary for AI Risks), an open, machine-readable vocabulary designed to represent and automate AI risk assessments. VAIR captures the key dimensions that determine high-risk status—deployment context, affected subjects, decision type, and connection to fundamental rights—in a structured format that can be integrated into audit systems, compliance workflows, and documentation tools. For Taiwanese companies building AI governance infrastructure, VAIR represents a practical blueprint: a shared language that enables legal teams, technical teams, and auditors to communicate about AI risks with precision, reducing the ambiguity that currently makes EU AI Act compliance so challenging. The vocabulary is explicitly designed for interoperability across the AI value chain, meaning it can align provider-side and deployer-side documentation—a critical requirement under EU AI Act Articles 9 through 17 for high-risk systems.
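A machine-readable risk record in the spirit of VAIR might look like the following sketch. The property names below are illustrative assumptions of ours, not VAIR's actual terms; the idea is only to show how the four dimensions become structured, serialisable data.

```python
import json

# Hypothetical record capturing the four dimensions the paper identifies:
# deployment context, affected subjects, decision type, fundamental rights.
risk_record = {
    "@type": "AISystemRiskProfile",                 # assumed type name
    "system": "resume-screening-service",           # assumed internal name
    "deploymentContext": "employment",              # Annex III domain
    "affectedSubjects": ["job applicants"],
    "decisionType": "ranking of candidates",
    "fundamentalRights": ["non-discrimination", "access to employment"],
    "riskClassification": "high-risk candidate (Annex III, point 4)",
}

# Serialising to JSON makes the same record usable by audit tooling,
# compliance dashboards, and provider- and deployer-side documentation.
print(json.dumps(risk_record, indent=2))
```

Because the record is plain structured data, legal, technical, and audit teams can all read and validate the same artefact, which is the interoperability property the paper emphasises.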
Finding Two: ISO Standards Have a Risk Knowledge Gap That Undermines Compliance Credibility
Perhaps the most urgent finding for enterprises pursuing ISO 42001 certification is the paper's assessment of current international standardisation activities. The research concludes that while ISO and IEC standards provide valuable management system frameworks, they currently lack the depth of AI risk knowledge infrastructure and impact assessment bases needed to support robust EU AI Act compliance. Since the EU AI Act relies heavily on harmonised standards to give providers of high-risk AI a presumption of conformity, a compliance programme built solely on procedural ISO checklists—without an underlying risk knowledge base—risks being technically compliant on paper but inadequate in substance. This finding means Taiwanese companies should treat ISO 42001 certification not as a destination but as a foundation: the certification provides the governance architecture, but enterprises must additionally invest in building substantive AI risk knowledge, aligned with frameworks like VAIR, to achieve genuine and defensible compliance.
Implications for Taiwanese Enterprises: Three Converging Regulatory Pressures
Taiwanese enterprises now face an unprecedented convergence of AI governance requirements from three directions, each with distinct timelines and enforcement mechanisms.
EU AI Act—the extraterritorial imperative. The EU AI Act entered into force on 1 August 2024. Prohibitions on unacceptable-risk AI systems apply from 2 February 2025. High-risk AI system obligations under Annex III apply from 2 August 2026. Any Taiwanese company whose products, services, or AI outputs reach EU markets—whether directly or through supply chains—must comply. Penalties for violations of high-risk AI obligations reach 3% of global annual turnover or €15 million (whichever is higher); violations involving prohibited AI practices carry penalties up to 7% of global turnover or €35 million. The VAIR-based analytical approach in this paper gives Taiwanese enterprises a structured methodology to determine, with defensible documentation, which of their AI systems trigger these obligations.
ISO 42001—the certification competitive advantage. Published in December 2023, ISO 42001 (Artificial Intelligence Management Systems) has rapidly become the primary credential for demonstrating AI governance maturity to global customers, partners, and regulators. The paper's finding about ISO's risk knowledge gap is a call to action: Taiwanese companies pursuing ISO 42001 should supplement the management system framework with a substantive AI risk vocabulary and impact assessment process. This dual approach—governance architecture plus risk knowledge infrastructure—positions enterprises for both ISO 42001 certification and EU AI Act compliance simultaneously, maximising the return on governance investment.
Taiwan AI Governance Law—the domestic alignment. Taiwan's draft AI Governance Law (人工智慧基本法) adopts a risk-tiered management approach substantially aligned with the EU AI Act's philosophy. The core concept analysis methodology developed in this paper—decomposing AI applications into contextual dimensions to determine risk level—applies equally to interpreting Taiwan's "high-impact AI application" standards. Taiwanese companies that build their risk assessment capability on this foundation will be positioned to satisfy both domestic and international regulatory requirements without duplicating effort.
Winners Consulting Services Co. Ltd.: Translating Research Insights into Taiwan AI Governance Action
積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) helps Taiwanese enterprises build AI management systems that satisfy ISO 42001, EU AI Act, and Taiwan AI Governance Law requirements simultaneously. Drawing directly from the research insights in this paper, we recommend three priority actions:
- Conduct a structured AI system inventory using Annex III concept combination analysis: Map every AI application against the eight high-risk domains in EU AI Act Annex III and apply the core concept combination framework from this paper—evaluating deployment context, affected subjects, decision type, and fundamental rights connections—to produce a defensible, documented risk classification for each system. This exercise simultaneously fulfils ISO 42001's risk assessment requirements and prepares the foundation for EU AI Act compliance documentation.
- Establish an enterprise AI risk vocabulary aligned with VAIR principles: Implement a shared internal vocabulary for AI risk description that enables legal, technical, and audit functions to communicate with precision. This vocabulary should be embedded in your ISO 42001 documentation system and designed for machine readability to support future automation of compliance monitoring—addressing directly the interoperability gaps identified in this research.
- Build a dual-layer compliance architecture: governance framework plus risk knowledge base: Do not treat ISO 42001 certification as the final destination. Complement the management system with a substantive AI risk knowledge infrastructure—including impact assessment templates, audit integration protocols, and cross-framework mapping tables for EU AI Act and Taiwan AI Governance Law—to achieve the depth of compliance that regulators, customers, and partners will increasingly demand from 2026 onward.
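The cross-framework mapping table recommended above can be sketched as a simple data structure that links each internal risk-assessment activity to the frameworks it evidences. The clause references here are indicative examples we have chosen for illustration, not a complete or authoritative mapping.

```python
# Illustrative dual-layer mapping: each internal activity is linked to the
# framework requirements it produces evidence for. References are examples.
CROSS_FRAMEWORK_MAP = {
    "ai_system_inventory": {
        "iso_42001": "Clause 6 risk assessment planning",
        "eu_ai_act": "Annex III high-risk classification",
        "taiwan_ai_law": "high-impact AI application screening",
    },
    "impact_assessment": {
        "iso_42001": "Annex A impact assessment controls",
        "eu_ai_act": "Article 9 risk management system",
        "taiwan_ai_law": "risk-tiered management obligations",
    },
}

def evidence_for(framework: str) -> list[str]:
    """List the internal activities that produce evidence for a framework."""
    return [step for step, refs in CROSS_FRAMEWORK_MAP.items()
            if framework in refs]

print(evidence_for("eu_ai_act"))  # both activities map to the EU AI Act
```

Maintaining the mapping as data rather than prose means one inventory or assessment exercise can be reported against all three frameworks without duplicating the underlying work.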
Winners Consulting Services Co. Ltd. offers a free AI governance mechanism diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant management system within 90 days.
Apply for Free AI Governance Diagnostic →
Frequently Asked Questions
- How do we determine whether our AI systems qualify as high-risk under the EU AI Act?
- Start by mapping each AI application against the eight domains in EU AI Act Annex III. This paper's key contribution is showing that high-risk classification is triggered by a combination of core concepts—deployment context, affected subject types, decision-making role, and connection to fundamental rights—not by technology type alone. For example, an AI-based resume screening tool touches the "employment and workers management" domain (Annex III, point 4) and involves decisions affecting individuals' access to employment, making it a strong candidate for high-risk classification. Winners Consulting Services Co. Ltd. provides structured risk classification workshops that apply this concept combination methodology to each enterprise's specific AI portfolio, producing documented risk determinations that satisfy both EU AI Act Article 9 requirements and ISO 42001 risk assessment procedures.
- What are the most common compliance challenges Taiwanese companies face when implementing ISO 42001?
- The three most common challenges are: incomplete AI system inventories (many organisations undercount their AI applications, particularly those embedded in third-party tools or SaaS platforms), fragmented risk vocabulary (technical teams and legal teams use incompatible terminology, producing documentation that fails audit scrutiny), and difficulty demonstrating meaningful compliance rather than merely procedural compliance. This paper directly addresses the second challenge by showing that a shared, structured AI risk vocabulary is a prerequisite for effective governance, not an optional sophistication. ISO 42001's Annex A provides 38 AI-specific controls, but without an underlying risk language that connects these controls to specific AI deployment contexts (as VAIR does), the controls remain disconnected from actual risk realities. Winners Consulting Services Co. Ltd. integrates ISO 42001, EU AI Act, and Taiwan AI Governance Law requirements into a unified compliance architecture to address all three challenges simultaneously.
- What are the core requirements of ISO 42001, and how long does implementation take for a Taiwanese company?
- ISO 42001 requires organisations to establish, implement, maintain, and continually improve an Artificial Intelligence Management System (AIMS). Core requirements include: leadership commitment and governance structure, AI risk assessment and treatment processes, AI system lifecycle management, supply chain responsibility management, personnel competence development, and ongoing monitoring and improvement mechanisms. The standard's Annex A specifies 38 AI-specific controls covering responsible AI design, data governance, transparency, and human oversight. Implementation timelines vary: small-to-medium enterprises typically require 6 to 9 months for initial system establishment; larger organisations, or those with more complex AI portfolios and supply chains, should plan for a longer timeline.
Related Services & Further Reading
Want to apply these insights to your enterprise?
Get a Free Assessment