Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, alerts enterprise leaders to a critical compliance blind spot: the EU AI Act's definition of "high-risk AI" is far more nuanced than a simple checklist of application types. A landmark 2023 academic study—cited 43 times and co-authored by researchers at Trinity College Dublin—reveals that determining high-risk status requires analyzing a structured combination of core semantic concepts, and proposes an open AI risk vocabulary (VAIR) that can be integrated with ISO 42001 frameworks and automated compliance workflows. For Taiwanese enterprises exporting to EU markets or building AI governance systems under Taiwan's emerging AI Basic Law, this research provides the most operationally rigorous foundation available.
Paper Citation: To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act's High-Risk AI Applications and Harmonised Standards (Delaram Golpayegani, Harshvardhan J. Pandit, David Lewis; ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023)
Original Paper: https://doi.org/10.1145/3593013.3594050
About the Authors and Their Research Impact
This paper was presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT)—one of the most prestigious venues for AI governance research globally. It has accumulated 43 citations since publication, with 4 classified as high-impact citations, reflecting its significance in the rapidly evolving field of AI regulatory compliance.
Delaram Golpayegani is a researcher at the ADAPT Centre, Trinity College Dublin, specializing in semantic technologies for AI regulatory compliance. With an h-index of 8 and 211 cumulative citations, her work bridges the gap between legal text and machine-executable compliance specifications—a capability that is increasingly essential as AI regulations grow more complex across jurisdictions.
Harshvardhan J. Pandit is one of Europe's most influential researchers at the intersection of AI governance, data privacy, and semantic web technologies. With an h-index of 17 and 925 cumulative citations, Pandit leads development of the Data Privacy Vocabulary (DPV), which has been adopted as a W3C Community Standard. His involvement in this research lends the VAIR framework significant credibility as a potential building block for international AI governance standards.
David Lewis is Professor of Computer Science and Director of the ADAPT Centre at Trinity College Dublin. He has led numerous European research programs on AI and natural language processing, and serves as an advisor to Irish AI policy development. His leadership brings institutional authority to the research's policy recommendations.
The Core Problem: High-Risk AI Classification Is Not a Checklist
The EU AI Act's Annex III identifies eight domains where AI applications are likely to be classified as high-risk: biometric identification, critical infrastructure, education, employment, access to essential services, law enforcement, migration management, and administration of justice. However, the authors demonstrate that simply matching an application to one of these domains is insufficient—and potentially misleading—for accurate compliance assessment.
The research introduces a systematic semantic analysis of Annex III's definitional clauses, extracting the underlying "core concepts" whose specific combinations trigger high-risk classification. These concepts include the nature of the use context, the category of fundamental rights potentially affected, and the degree to which the AI system influences or automates consequential decisions. This multi-dimensional approach reveals that many enterprises may be misclassifying their AI systems—either over-identifying low-risk applications as high-risk (incurring unnecessary compliance costs) or, more dangerously, failing to recognize genuinely high-risk systems that require mandatory conformity assessments, technical documentation, and post-market monitoring.
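To make this concrete, consider a minimal Python sketch of what concept-combination analysis looks like in practice. This is an illustration under simplified assumptions, not the authors' implementation: the concept names, the single employment-domain rule, and the example application below are all hypothetical.

```python
# A minimal sketch of concept-combination analysis, not the authors'
# implementation. Concept names, the employment rule, and the example
# application are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    domain: str                # Annex III domain, e.g. "employment"
    purpose: str               # intended use within that domain
    affected_rights: set[str]  # fundamental rights the system could affect
    decision_role: str         # "informs", "recommends", or "decides"

def is_high_risk(app: AIApplication) -> bool:
    """Illustrative rule: domain membership alone never decides the outcome;
    purpose, affected rights, and decision role must combine as well."""
    if app.domain != "employment":
        return False  # other domains would need their own concept combinations
    return (
        app.purpose in {"recruitment", "promotion", "termination"}
        and "non-discrimination" in app.affected_rights
        and app.decision_role in {"recommends", "decides"}
    )

cv_screener = AIApplication(
    name="CV ranking tool",
    domain="employment",
    purpose="recruitment",
    affected_rights={"non-discrimination", "data-protection"},
    decision_role="recommends",
)
print(is_high_risk(cv_screener))  # True under this illustrative rule
```

Note how changing any single concept changes the verdict: the same tool used only to produce aggregate hiring statistics, with no role in decisions about individuals, would fall outside this illustrative rule. That is the practical difference between domain matching and combination analysis.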
Finding 1: Semantic Decomposition Enables Systematic Risk Identification
By decomposing the abstract legal language of Annex III into discrete, combinable semantic concepts, the authors create a structured pathway for organizations to assess their AI applications systematically. This approach transforms what was previously a judgment-dependent process into one that can be standardized, documented, and—critically—audited. For enterprises subject to EU AI Act obligations, this is precisely the kind of structured methodology that regulators will expect to see evidence of in technical documentation and conformity assessments.
Finding 2: VAIR Bridges the Gap Between Regulation and ISO 42001 Implementation
The Vocabulary for AI Risks (VAIR) proposed in this research is designed as an open, machine-readable knowledge base that can be integrated into existing AI audit workflows, risk assessment processes, and documentation systems. Critically, the authors explicitly connect VAIR's design to the harmonised standards framework that the EU AI Act relies upon for compliance and enforcement. They identify a significant gap in current ISO standardization activities—including the development of ISO 42001—in terms of providing sufficiently specific AI risk and impact knowledge bases. VAIR is positioned as the type of supplementary knowledge infrastructure needed to make ISO 42001-based AI management systems genuinely fit for purpose under EU AI Act compliance requirements.
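Because VAIR is designed as an open, machine-readable vocabulary, it can in principle be loaded and queried like any other semantic-web resource. The sketch below shows the general pattern using Python's rdflib library; the namespace, classes, and properties are illustrative placeholders, not VAIR's actual published terms.

```python
# A sketch of querying a machine-readable risk vocabulary with rdflib.
# The ex: namespace and all term names are illustrative placeholders,
# not the published VAIR vocabulary's actual IRIs.
from rdflib import Graph

# Hypothetical Turtle excerpt in the style of a risk vocabulary.
TTL = """
@prefix ex: <https://example.org/vair#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:CVScreening a ex:AICapability ;
    rdfs:label "CV screening" ;
    ex:usedInDomain ex:Employment ;
    ex:affectsRight ex:NonDiscrimination .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# The kind of lookup an automated compliance workflow could run:
# find every capability declared for the Employment domain and the
# fundamental rights it is annotated as affecting.
QUERY = """
PREFIX ex: <https://example.org/vair#>
SELECT ?capability ?right WHERE {
    ?capability ex:usedInDomain ex:Employment ;
                ex:affectsRight ?right .
}
"""
for capability, right in g.query(QUERY):
    print(capability, right)
```

The design choice that matters here is machine readability: once risk concepts are expressed as structured data rather than prose, the same vocabulary can drive screening questionnaires, documentation templates, and audit queries without manual re-interpretation.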
Finding 3: Harmonised Standards Alone Cannot Deliver EU AI Act Compliance
Perhaps the most important finding for enterprise risk officers is the authors' assessment that current harmonised standards—the ISO and IEC standards that the EU AI Act designates as presumptive compliance tools—do not yet provide sufficient specificity for high-risk AI applications. This means that enterprises cannot assume that ISO 42001 certification alone will satisfy EU AI Act requirements for high-risk AI systems. Additional layers of domain-specific risk vocabulary, impact assessment methodology, and technical documentation infrastructure will be necessary. This finding directly informs Winners Consulting's service design for Taiwanese enterprises.
Implications for Taiwanese Enterprises: Act Now, Not After 2026
The EU AI Act entered into force in August 2024, with provisions for high-risk AI systems set to apply from August 2026. Taiwanese enterprises—particularly in technology manufacturing, financial services, medical devices, and human resources management—face a compressed window to establish compliant AI governance frameworks before enforcement begins.
The implications extend beyond EU market access. Taiwan's draft AI Basic Law (人工智慧基本法), promoted by the Executive Yuan in 2024, adopts a risk-tiered governance approach that closely parallels the EU AI Act's architecture. Organizations that build ISO 42001-aligned AI management systems now will find themselves well-positioned for compliance under both frameworks simultaneously. The semantic risk classification methodology proposed in this research is directly applicable to the risk assessment requirements that Taiwan's AI Basic Law is expected to mandate for high-risk AI applications.
Three immediate action priorities emerge from this research for Taiwanese enterprise leaders. First, conduct an AI application inventory and map each system against the EU AI Act's Annex III domains—but do not stop at domain matching; apply the multi-concept combination analysis the authors propose. Second, assess your current documentation infrastructure: does it support the technical documentation, logging, and conformity assessment obligations that high-risk AI systems require under the EU AI Act? Third, evaluate your ISO 42001 implementation roadmap with awareness that ISO compliance alone may not satisfy EU AI Act requirements for high-risk applications—supplementary risk vocabulary and impact assessment processes will be needed.
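For the first priority, even a very simple structured inventory makes the two-step logic explicit: match against Annex III domains first, then route matches into the deeper concept-combination analysis. The Python sketch below is a toy illustration; the inventory entries and the shortened domain labels are hypothetical.

```python
# A toy illustration of the first action priority: inventory AI systems
# and flag Annex III domain matches for deeper multi-concept analysis.
# Inventory entries and shortened domain labels are hypothetical.
ANNEX_III_DOMAINS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration management", "administration of justice",
}

inventory = [
    {"system": "CV ranking tool", "domain": "employment"},
    {"system": "Internal meeting summarizer", "domain": "productivity"},
    {"system": "Credit pre-screening model", "domain": "essential services"},
]

# A domain match triggers further analysis; it is not the final verdict.
for entry in inventory:
    if entry["domain"] in ANNEX_III_DOMAINS:
        print(f'{entry["system"]}: route to multi-concept combination analysis')
    else:
        print(f'{entry["system"]}: outside Annex III domains (document rationale)')
```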
How Winners Consulting Helps Taiwanese Enterprises Build Audit-Ready AI Governance
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) assists Taiwanese enterprises in building AI management systems that satisfy the requirements of ISO 42001, the EU AI Act, and Taiwan's AI Basic Law simultaneously. Our approach is grounded in the kind of structured, semantically rigorous risk methodology that this research exemplifies—not checkbox compliance, but genuinely defensible AI governance.
- High-Risk AI Rapid Screening (aligned with VAIR core concept methodology): Using the multi-dimensional concept combination framework proposed in this research, we systematically assess your existing and planned AI applications against EU AI Act Annex III criteria, producing a written risk classification report within 2 to 4 weeks. This gives your legal, compliance, and technology teams a shared, documented basis for governance decisions.
- ISO 42001 AI Management System Implementation (addressing the harmonised standards gap): Recognizing that ISO 42001 certification alone does not fully satisfy EU AI Act high-risk AI requirements, we design supplementary risk vocabulary, impact assessment workflows, and technical documentation systems that close the gap the authors identify. Our implementation methodology ensures that your AI governance documentation will withstand regulatory scrutiny, not just third-party audits.
- Dual-Track Compliance Planning for EU AI Act and Taiwan AI Basic Law (90-day launch program): We provide a comparative analysis of Taiwan's draft AI Basic Law and the EU AI Act, identifying shared requirements and jurisdiction-specific obligations. Within 90 days, we complete a current-state assessment, gap analysis, and initial governance framework design—positioning your organization ahead of both regulatory timelines.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish ISO 42001-aligned AI management systems within 90 days.
Request Your Free Diagnostic →
Frequently Asked Questions
- How do I know if my company's AI applications qualify as "high-risk" under the EU AI Act?
- High-risk classification under the EU AI Act is not determined by a single factor. This research demonstrates that classification requires analyzing a structured combination of core concepts: the use context, the fundamental rights at stake, and the degree of automated decision-making authority the system exercises. Start by mapping your AI applications to the eight domains listed in Annex III, then apply a secondary analysis of which fundamental rights each system could affect and to what degree decisions are automated or consequential. Winners Consulting's rapid screening service completes this analysis with a written report in 2 to 4 weeks, giving your team a documented, audit-ready basis for compliance decisions.
- Does the EU AI Act apply to Taiwanese companies that don't have offices in Europe?
- Yes. Like the GDPR, the EU AI Act applies on an effects basis: if your AI system's outputs are used by individuals or organizations within the EU, or if your products or services incorporating AI are sold in EU markets, you are within scope. This applies to Taiwanese technology manufacturers whose AI-enabled products enter EU supply chains, SaaS providers with European customers, and medical device companies exporting to EU member states. High-risk AI obligations—including conformity assessment, technical documentation, and post-market monitoring—will apply from August 2026, making immediate preparation essential.
- Is ISO 42001 certification sufficient to demonstrate EU AI Act compliance?
- ISO 42001 provides an excellent governance framework foundation, but this research explicitly identifies that current ISO standardization activities leave significant gaps in AI risk and impact knowledge bases when measured against EU AI Act high-risk AI requirements. ISO 42001 certification demonstrates systematic AI management capability, but does not automatically satisfy the specific technical documentation, conformity assessment, and risk vocabulary requirements for high-risk AI systems under the EU AI Act. The correct approach is to use ISO 42001 as the architectural backbone, supplemented by EU AI Act-specific risk methodology and Taiwan AI Basic Law-aligned governance elements. Winners Consulting designs exactly this three-layer integrated framework for Taiwanese enterprises.
- How long does it take to build a compliant AI governance framework? What are the steps?
- Based on Winners Consulting's implementation experience, a structured four-phase approach typically completes in 90 to 180 days depending on organizational size and AI application complexity. Phase one (days 1 to 30): current-state assessment and AI application inventory. Phase two (days 31 to 60): high-risk classification analysis and gap assessment against ISO 42001, EU AI Act, and Taiwan AI Basic Law. Phase three (days 61 to 120): governance framework design and implementation, including policies, risk assessment processes, and technical documentation systems. Phase four (days 121 to 180): internal validation, staff training, and third-party audit preparation. Most mid-sized Taiwanese enterprises can have core mechanisms operational within 120 days.
- Why engage Winners Consulting Services for AI governance?
- Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of Taiwan's few consulting firms combining ISO management system certification expertise, hands-on AI governance implementation experience, and active regulatory tracking across EU AI Act, ISO 42001, and Taiwan's AI Basic Law. We monitor legislative developments in real time—including the EU AI Act's delegated acts and harmonised standards development—ensuring our clients' governance frameworks remain current. We do not deliver documentation-only compliance; we build governance mechanisms that actually function and can withstand regulatory examination. From initial risk classification through management system certification, we support your organization at every stage of the compliance journey.
Want to apply these insights to your enterprise?
Get a Free Assessment