Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical insight for enterprise executives: when the EU AI Act classifies your AI system as "high-risk," passing functional tests is not enough — you must systematically demonstrate trustworthiness across the entire AI lifecycle. A landmark 2024 study published in IEEE Access, already cited 38 times, proposes a seven-dimensional trustworthiness assurance framework that directly maps onto ISO 42001 and EU AI Act compliance requirements, offering Taiwanese enterprises the clearest academic roadmap yet for building robust AI governance systems.
Paper Citation: Trustworthiness Assurance Assessment for High-Risk AI-Based Systems (Georg Stettinger, Patrick Weissensteiner, Siddartha Khastgir, IEEE Access, 2024)
Original Paper: https://doi.org/10.1109/access.2024.3364387
About the Authors and Their Research
The lead author, Georg Stettinger, is affiliated with the Graz University of Technology in Austria, where he specializes in safety engineering for automated and AI-based systems. With an h-index of 12 and 448 cumulative citations, Stettinger is a recognized voice in AI trustworthiness and assurance methodology. Co-author Siddartha Khastgir brings complementary expertise from the WMG research center at the University of Warwick (UK), where he focuses on verification, validation, and certification of autonomous and intelligent systems — with significant international influence in AI testing policy circles. Patrick Weissensteiner, also from Graz University of Technology, rounds out the team with focused expertise in AI compliance methodology.
Published in IEEE Access in 2024 — one of IEEE's flagship peer-reviewed open-access journals — this paper has already accumulated 38 citations, an exceptionally rapid uptake that signals high relevance in the AI governance research community. The authors' combined background spanning automotive safety engineering and AI regulatory compliance gives this work both rigorous technical grounding and direct applicability to the EU AI Act's legislative architecture.
The Seven-Dimensional Trustworthiness Framework: The Most Complete Compliance Roadmap for High-Risk AI
The central research question this paper addresses is deceptively practical: when the EU AI Act demands that high-risk AI systems be "trustworthy," what exactly does that mean, and how should organizations operationalize it throughout the AI lifecycle? The authors' answer is a structured assurance framework built on seven interconnected sub-goals, anchored by a pioneering methodology that adapts the Operational Design Domain (ODD) and Behavior Competency (BC) concepts from the mature field of automated driving to the broader challenge of high-risk AI risk quantification.
Core Finding One: Seven Requirements That Are Mutually Reinforcing, Not a Checklist
The research establishes that the EU AI Act's seven trustworthiness requirements for high-risk AI systems — encompassing use restriction formulation, trustworthiness argumentation, dysfunctional case identification, scenario database utilization, evaluation metric application, lifecycle-wide implementation, and human factors consideration — are not independent compliance boxes to be ticked. They are mutually supportive pillars that must be implemented and evaluated together across the full AI lifecycle. An enterprise that performs rigorous model testing but neglects human factors analysis, or builds comprehensive datasets while failing to define operational boundaries, will not achieve genuine compliance. For Taiwanese enterprises pursuing ISO 42001 certification or EU AI Act conformity, this finding carries a critical practical implication: AI governance is not a one-time documentation exercise but an ongoing management discipline embedded into every phase of AI development and deployment.
Core Finding Two: ODD and BC Concepts Provide Quantifiable Risk Boundaries
The paper's most technically innovative contribution is the systematic transfer of the Operational Design Domain (ODD) framework — originally developed to define the conditions under which an autonomous vehicle can safely operate — into a general-purpose tool for defining and quantifying residual risks in high-risk AI systems. In practice, this means every high-risk AI system must have a clearly defined "operational envelope": the specific contexts, data conditions, user populations, and environmental parameters within which the system is permitted to function. Deployment beyond this ODD constitutes unacceptable residual risk. The companion Behavior Competency (BC) framework then quantifies the system's demonstrated capability within those boundaries. Together, ODD and BC provide auditable, measurable definitions of AI risk that directly serve the risk identification and assessment requirements of ISO 42001 Clauses 6 through 8, and the technical documentation requirements of the EU AI Act's Annex IV.
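To make the ODD/BC pairing concrete, here is a minimal sketch of how an enterprise might encode an operational envelope and gate requests against it. All names here (`OperationalDesignDomain`, `within_odd`, the credit-scoring-style example values) are illustrative assumptions for this article, not an API or notation from the paper:

```python
from dataclasses import dataclass

# Illustrative ODD: the envelope within which a high-risk AI system
# is permitted to operate. Field names are hypothetical examples.
@dataclass
class OperationalDesignDomain:
    allowed_contexts: set      # business processes the system may serve
    allowed_data_types: set    # input modalities it was validated on
    allowed_user_groups: set   # populations it was tested against

    def within_odd(self, context: str, data_type: str, user_group: str) -> bool:
        """A request outside any boundary is unacceptable residual risk."""
        return (context in self.allowed_contexts
                and data_type in self.allowed_data_types
                and user_group in self.allowed_user_groups)

# Behavior Competency: demonstrated capability *inside* the ODD,
# e.g. a validated accuracy metric per permitted context (hypothetical value).
odd = OperationalDesignDomain(
    allowed_contexts={"cv_screening"},
    allowed_data_types={"structured_form"},
    allowed_user_groups={"eu_adult_applicants"},
)
behavior_competency = {"cv_screening": 0.94}

def handle_request(context: str, data_type: str, user_group: str) -> str:
    # Requests outside the ODD are refused rather than answered unreliably.
    if not odd.within_odd(context, data_type, user_group):
        return "refuse: outside ODD (escalate to human review)"
    return f"proceed: validated competency {behavior_competency[context]:.2f}"

print(handle_request("cv_screening", "structured_form", "eu_adult_applicants"))
print(handle_request("cv_screening", "free_text_pdf", "eu_adult_applicants"))
```

The design choice to illustrate is the refusal path: the auditable artifact is not only the model's accuracy inside the envelope, but the documented behavior when a request falls outside it.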
What This Research Means for Taiwan's AI Governance Landscape
For Taiwanese enterprise executives, this research is not merely an academic reference — it reveals an urgent capability gap against a tightening regulatory timeline.
The EU AI Act entered into force in August 2024, with full compliance requirements for high-risk AI systems becoming mandatory in August 2026. Any Taiwanese company with exposure to EU markets — whether exporting AI-enabled products, delivering AI-related services, or developing AI system components for multinational clients — faces real compliance obligations. At the same time, Taiwan's own AI Basic Law (人工智慧基本法) draft legislation is advancing through the legislative process, with a risk classification and governance framework that closely mirrors the EU AI Act's structure. This regulatory convergence signals that even enterprises focused purely on the Taiwan domestic market will face tightening AI governance requirements in the near term.
ISO 42001, the International Standard for Artificial Intelligence Management Systems published in 2023, represents the most important governance certification framework available to enterprises today. Its structural requirements — systematic risk identification, lifecycle management, human oversight provisions, and continuous monitoring — align closely with the seven-dimensional framework proposed in this paper. Achieving ISO 42001 certification not only demonstrates AI governance maturity to customers, partners, and regulators, but also provides concrete evidence of systematic management processes that can substantially reduce the compliance burden under EU AI Act audits.
Crucially, the paper identifies remaining gaps in existing standards, including ISO/IEC 42001, particularly for high-risk AI trustworthiness assurance. This finding underscores that enterprises must build cross-standard, lifecycle-spanning governance capabilities — not simply satisfy the minimum requirements of a single certification. This integrative capacity is precisely what most Taiwanese enterprises currently lack.
How Winners Consulting Services Helps Taiwanese Enterprises Build World-Class AI Governance
積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) helps Taiwanese enterprises build AI management systems that satisfy both ISO 42001 and EU AI Act requirements, conducts systematic AI risk classification assessments, and ensures AI deployments comply with Taiwan's emerging AI Basic Law framework. Based on this paper's research findings, we recommend the following concrete actions for Taiwanese enterprise executives:
- Conduct an immediate AI inventory and risk classification: Map all current and planned AI applications against the EU AI Act's high-risk categories (Annex III). For each potentially high-risk system, define its Operational Design Domain — specifying precisely which business contexts, data conditions, and user populations the system is permitted to serve, and what happens when requests fall outside those boundaries. This ODD definition is the foundation of the entire trustworthiness assurance framework and the starting point for ISO 42001 risk assessment under Clause 6.
- Conduct an ISO 42001 gap analysis against current AI development practices: ISO 42001 Clauses 6 through 10 require enterprises to establish systematic risk identification, assessment, treatment, and monitoring processes across the full AI lifecycle. A structured gap analysis against existing development and deployment workflows will identify priority improvement areas and enable targeted remediation planning. Winners Consulting provides standardized gap analysis tools and advisory support to help enterprises complete an initial assessment within 90 days.
- Build a lifecycle-spanning trustworthiness documentation architecture: The paper's most actionable practical insight is that AI compliance documentation cannot be prepared retroactively before product launch — it must be generated continuously from the system design phase onward. Enterprises should establish a documentation framework covering requirements definition, data governance, model training, testing and validation, deployment monitoring, and retirement procedures, ensuring auditable trustworthiness argumentation records at every lifecycle stage. This is simultaneously a prerequisite for ISO 42001 certification and the core preparation needed for EU AI Act conformity assessment.
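The lifecycle-spanning documentation idea above can be sketched as a simple append-only assurance log that also exposes a gap-analysis view. This is a hypothetical illustration of continuous record-keeping under the assumptions stated in the comments, not a format prescribed by ISO 42001, the EU AI Act, or the paper:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative lifecycle stages, mirroring the documentation framework
# described above (names are this article's own, not a standard's).
LIFECYCLE_STAGES = (
    "requirements", "data_governance", "training",
    "validation", "deployment_monitoring", "retirement",
)

@dataclass
class TrustworthinessRecord:
    stage: str       # one of LIFECYCLE_STAGES
    claim: str       # the trustworthiness argument being made
    evidence: str    # pointer to the artifact backing the claim
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AssuranceLog:
    """Append-only log: records are generated as work happens, never retrofitted."""

    def __init__(self):
        self.records = []

    def record(self, stage: str, claim: str, evidence: str) -> None:
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.records.append(TrustworthinessRecord(stage, claim, evidence))

    def coverage_gaps(self) -> list:
        """Stages with no auditable record yet -- a quick gap-analysis view."""
        covered = {r.stage for r in self.records}
        return [s for s in LIFECYCLE_STAGES if s not in covered]

log = AssuranceLog()
log.record("requirements", "ODD defined for screening use case", "doc/odd-v1.md")
log.record("validation", "BC metrics meet acceptance thresholds", "reports/val.pdf")
print(log.coverage_gaps())  # stages still missing auditable evidence
```

In practice the `coverage_gaps` view is the point: an auditor (or an ISO 42001 assessor) asks not "do you have documentation?" but "which lifecycle stages have no evidence behind them?".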
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant management system within 90 days.
Request Your Free Diagnostic →
Frequently Asked Questions
- How do I know if my company's AI systems are classified as "high-risk" under the EU AI Act?
- High-risk AI systems are those listed in Annex III of the EU AI Act, including AI used for recruitment and HR screening, credit scoring, medical diagnosis, educational assessment, critical infrastructure management, and law enforcement applications. The most important point Taiwanese enterprises often miss is that even if the system is developed in Taiwan, if the end user or deployment context involves the EU market, EU AI Act requirements apply. The ODD concept from this paper provides a useful self-assessment heuristic: if you cannot clearly define your system's safe operational boundaries — the specific contexts, data types, and user populations where it is designed to work reliably — your system likely lacks the foundational documentation needed for high-risk AI compliance. We recommend an immediate AI risk classification exercise to determine your compliance obligations.
- Does Taiwan's AI Basic Law impose the same requirements as the EU AI Act?
- Taiwan's AI Basic Law (人工智慧基本法) is currently in the legislative drafting stage as of 2024-2025, and its framework incorporates a risk-tiered governance approach that is structurally aligned with the EU AI Act. While the specific compliance obligations and enforcement mechanisms will be determined by subsequent implementing regulations, enterprises that build governance mechanisms to ISO 42001 and EU AI Act standards will be well-positioned to meet Taiwan's emerging domestic requirements with minimal additional adaptation. Given the regulatory convergence trend, building to international standards now is the most efficient long-term compliance strategy for Taiwanese enterprises serving both domestic and international markets.
- What specific benefits does ISO 42001 certification provide for EU AI Act compliance?
- ISO 42001, published in 2023, is the first international standard for AI Management Systems and is structurally aligned with both the EU AI Act's governance requirements and the seven-dimensional trustworthiness framework proposed in this paper. Certification delivers three concrete compliance benefits: first, it provides auditable evidence of systematic risk management processes that substantially satisfies the EU AI Act's requirements for quality management systems (Article 17) and risk management systems (Article 9); second, it demonstrates AI governance maturity to enterprise customers, supply chain partners, and investors; third, through its systematic risk identification and control processes, it reduces the operational risk of AI system failures or misuse. Winners Consulting can support enterprises through a complete ISO 42001 certification preparation process in 6 to 12 months.
- How long does it take to build an AI governance system, and what are the key steps?
- Based on Winners Consulting's advisory experience with Taiwanese medium and large enterprises, the work typically unfolds in three phases: an AI inventory and risk classification, an ISO 42001 gap analysis (an initial assessment can be completed within 90 days), and the build-out of a lifecycle-spanning trustworthiness documentation architecture. Full ISO 42001 certification preparation generally takes 6 to 12 months, depending on the maturity of existing development and deployment processes.
Want to apply these insights to your enterprise?
Get a Free Assessment