Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical insight from cutting-edge 2025 practitioner research: enterprises deploying AI systems cannot achieve genuine compliance with ISO 42001:2023 or the EU AI Act solely through technical model testing — they must simultaneously audit the governance structures sitting above those models, or risk leaving their most consequential vulnerabilities entirely undetected.
Paper Citation: Weimer, Daniel; Gensch, Andreas; Koller, Kilian. "Scaling of End-To-End Governance Risk Assessments for AI Systems" (Practitioner Track). OpenAlex — AI Governance, 2025.
Original Paper: https://doi.org/10.4230/oasics.saia.2024.4
About the Authors and This Research
This paper is authored by Daniel Weimer, Andreas Gensch, and Kilian Koller, published in 2025 within the AI Governance domain on OpenAlex. Crucially, it appears under the "Practitioner Track" designation — a classification that signals the research is explicitly designed to bridge academic theory and real-world organizational implementation, making it directly relevant to enterprise decision-makers rather than only academic audiences.
Lead author Daniel Weimer holds an h-index of 2 with 9 cumulative citations, placing him among the emerging generation of AI governance researchers whose work is timed precisely to meet the regulatory wave triggered by the EU AI Act's formal entry into force in 2024. Andreas Gensch and Kilian Koller bring complementary expertise in AI governance engineering and risk management framework design, ensuring the paper reflects both conceptual rigor and operational feasibility. Together, the three authors represent an end-to-end perspective from research to deployment — the exact perspective enterprise leaders need when evaluating whether their AI governance posture is truly audit-ready.
The timing of this publication is itself significant: it arrives at the intersection of the EU AI Act's operational rollout, the global diffusion of ISO 42001:2023, and Taiwan's passage of its Artificial Intelligence Fundamental Act (人工智慧基本法) in 2024 — making its findings immediately actionable for compliance teams worldwide.
The Dual-Dimension Risk Framework: Why Bottom-Up and Top-Down Must Both Be Assessed
The paper's central contribution is a conceptually sharp and practically deployable framework that separates AI system risks into two dimensions that must be evaluated in parallel, not in sequence. Most enterprises today assess only one of these dimensions, leaving them with a systematically incomplete picture of their true AI risk exposure.
Core Finding One: Governance Risks Are Structurally Invisible to Technical Audits
The researchers formally define "bottom-up risks" as those originating from the technical properties of AI models themselves — bias, opacity, adversarial vulnerabilities, and data quality failures. "Top-down risks," by contrast, originate from the organizational and governance environment in which AI systems are developed and deployed: internal decision-making processes, leadership accountability structures, security configuration governance, documentation management practices, and supply chain oversight. The paper demonstrates that the vast majority of existing AI auditing tools are calibrated exclusively for bottom-up technical risks, meaning that governance-layer failures — the kind most likely to trigger regulatory sanctions under EU AI Act Article 9's risk management system requirements — remain systematically undetected. This finding directly aligns with ISO 42001:2023 Clause 6.1, which requires organizations to identify all risks and opportunities related to AI, explicitly including organizational and contextual factors, not merely technical model properties.
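The dual-dimension taxonomy can be made concrete with a minimal risk-register sketch. This is an illustrative structure only, not the authors' tooling; the entry names and the `dimension_coverage` helper are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    BOTTOM_UP = "bottom-up"   # technical model properties (bias, opacity, data quality)
    TOP_DOWN = "top-down"     # organizational and governance environment

@dataclass
class RiskItem:
    name: str
    dimension: Dimension
    severity: str  # e.g. "high" on an internal scale

# Illustrative entries only; real registers come from structured assessment.
register = [
    RiskItem("training-data bias", Dimension.BOTTOM_UP, "high"),
    RiskItem("adversarial vulnerability", Dimension.BOTTOM_UP, "medium"),
    RiskItem("undocumented model-release accountability", Dimension.TOP_DOWN, "high"),
    RiskItem("supply-chain oversight gap", Dimension.TOP_DOWN, "medium"),
]

def dimension_coverage(items):
    """Report which dimensions the register actually assesses; a register
    covering only one dimension is the blind spot the paper warns about."""
    present = {item.dimension for item in items}
    return {dim.value: dim in present for dim in Dimension}
```

A register built solely from model-testing outputs would report `{"bottom-up": True, "top-down": False}` here, which is exactly the incomplete picture described above.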
Core Finding Two: Scalable AI Auditing Requires Five Capabilities and Two Non-Negotiable Infrastructure Properties
The researchers argue that an AI governance auditing technology stack capable of scaling across complex organizational environments must implement five sequential capabilities:
- Identify: systematically surface all governance-related risk points.
- Collect: gather structured evidence and audit artifacts.
- Assess: classify risks by severity and regulatory relevance, mapping to the EU AI Act's risk classification tiers.
- Comply: formally map assessed risks against ISO 42001:2023, the EU AI Act, and applicable national regulations.
- Monitor: sustain continuous compliance surveillance rather than treating compliance as a one-time milestone.

Beyond these five capabilities, the paper identifies two infrastructure properties without which the entire auditing system fails under external scrutiny: audit-proof record-keeping (tamper-evident logs that maintain integrity over time, satisfying EU AI Act Article 17 quality management documentation requirements) and role-based access control (differentiated audit pathways for developers, deployers, and supervisors, reflecting the multi-stakeholder governance reality of complex AI systems).
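The first infrastructure property, audit-proof record-keeping, is commonly implemented as a hash chain: each log entry commits to the hash of the entry before it, so any retroactive edit breaks verification. The sketch below is a minimal illustration of that general technique, not the paper's implementation:

```python
import hashlib
import json

def _entry_hash(prev_hash: str, record: dict) -> str:
    # Canonical JSON serialization so the same record always hashes identically.
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

class AuditLog:
    """Append-only log in which every entry commits to the hash of the
    previous entry, so any retroactive edit invalidates the chain."""

    def __init__(self):
        self._entries = []  # list of (record, hash) tuples

    def append(self, record: dict) -> None:
        prev = self._entries[-1][1] if self._entries else "genesis"
        self._entries.append((record, _entry_hash(prev, record)))

    def verify(self) -> bool:
        prev = "genesis"
        for record, digest in self._entries:
            if _entry_hash(prev, record) != digest:
                return False
            prev = digest
        return True
```

If any stored record is altered after the fact, `verify()` returns `False` for the whole chain, giving auditors the tamper evidence Article 17-style documentation requirements call for; production systems would additionally anchor the chain head in external or write-once storage.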
Core Finding Three: Integration Into Existing Risk Infrastructure Is the Key Adoption Barrier
A frequently overlooked insight in the paper is its emphasis on integrability as a first-class design requirement for AI governance tools. The researchers observe that enterprises already maintain risk management and governance infrastructures — often aligned to ISO 27001, ISO 9001, or internal enterprise risk frameworks — and that AI governance auditing tools that cannot integrate with these existing systems will face adoption resistance regardless of their technical quality. This finding has direct implications for how Taiwan enterprises should evaluate AI governance platform vendors: standalone AI risk tools that cannot connect to existing GRC (Governance, Risk, and Compliance) ecosystems create siloed compliance postures that are neither efficient nor scalable.
What This Research Means for Taiwan Enterprises Navigating AI Compliance
Taiwan's AI governance landscape in 2025 is characterized by converging regulatory pressures from three directions simultaneously. Domestically, the Artificial Intelligence Fundamental Act (人工智慧基本法), passed in 2024, establishes foundational obligations for risk management in high-risk AI applications and signals the imminent development of sector-specific implementing regulations. Internationally, the EU AI Act's extraterritorial scope means that any Taiwan enterprise whose AI systems affect EU-based users, customers, or business partners must evaluate their EU AI Act compliance exposure — a consideration relevant to a substantial proportion of Taiwan's export-oriented technology sector. Globally, ISO 42001:2023 is rapidly becoming the de facto international benchmark for AI management systems, with procurement and partnership agreements increasingly requiring certification as a baseline condition.
Against this backdrop, the research by Weimer, Gensch, and Koller carries three specific implications for Taiwan enterprise leaders:
First, most Taiwan enterprises are currently assessing AI risk along only one of these two dimensions. Based on our consulting experience at Winners Consulting Services Co. Ltd., the predominant pattern among Taiwan technology, financial services, and manufacturing firms is to treat AI risk assessment as a technical IT security exercise — evaluating model accuracy, cybersecurity posture, and data privacy compliance — while leaving governance-layer risks (decision accountability, documentation traceability, leadership oversight capacity) structurally unaddressed. The research confirms this is not merely a best-practice gap but a compliance gap: ISO 42001:2023's Clause 5 leadership requirements and EU AI Act Article 9's risk management system obligations both explicitly mandate governance-level accountability structures.
Second, the five-step end-to-end framework provides Taiwan enterprises with a directly applicable implementation blueprint. The Identify→Collect→Assess→Comply→Monitor sequence maps naturally onto the phased implementation approach recommended for ISO 42001:2023 adoption, and aligns with the continuous improvement cycle required by the Artificial Intelligence Fundamental Act's dynamic risk management obligations. Enterprises that internalize this framework will be better positioned when Taiwan's implementing regulations under the AI Fundamental Act are finalized.
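The Identify→Collect→Assess→Comply→Monitor sequence can be sketched as a simple stage pipeline over a shared assessment state. Stage names follow the paper's kernel; everything inside each stage (the risk entries, evidence items, and clause mappings) is illustrative, not prescribed by the paper:

```python
from functools import reduce

# Each stage enriches a shared assessment state and passes it onward.
def identify(state):
    state["risks"] = [{"id": "R1", "dimension": "top-down",
                       "description": "no documented decision accountability"}]
    return state

def collect(state):
    for risk in state["risks"]:
        risk["evidence"] = ["org chart", "approval workflow export"]
    return state

def assess(state):
    for risk in state["risks"]:
        risk["severity"] = "high"
    return state

def comply(state):
    for risk in state["risks"]:
        risk["mappings"] = ["ISO 42001:2023 Clause 6.1", "EU AI Act Article 9"]
    return state

def monitor(state):
    state["review_cadence"] = "quarterly"  # compliance as a cycle, not a milestone
    return state

PIPELINE = [identify, collect, assess, comply, monitor]

def run_assessment(system_name: str) -> dict:
    return reduce(lambda state, stage: stage(state), PIPELINE, {"system": system_name})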
Third, role-based governance design is particularly relevant for Taiwan's dual-role AI enterprises. A significant portion of Taiwan's technology sector simultaneously develops AI systems for sale and deploys AI systems for internal operations — occupying the roles of both "provider" and "deployer" under the EU AI Act's classification scheme. The paper's emphasis on role-based auditing pathways provides a concrete design principle for these enterprises to structure their governance responsibilities without conflating or duplicating compliance obligations across these distinct roles.
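For dual-role enterprises, the role-based pathway idea reduces to a deny-by-default mapping from roles to auditable artifacts. The role and artifact names below are hypothetical, chosen to mirror the EU AI Act's provider/deployer split plus a supervisory function:

```python
# Hypothetical role-to-artifact mapping; a real deployment would load this
# from the enterprise's GRC platform rather than hard-coding it.
ROLE_VIEWS = {
    "provider_developer": {"model_cards", "training_logs", "risk_register"},
    "deployer_operator": {"usage_logs", "incident_reports", "risk_register"},
    "supervisor": {"model_cards", "training_logs", "usage_logs",
                   "incident_reports", "risk_register", "audit_trail"},
}

def can_view(role: str, artifact: str) -> bool:
    """Deny by default: unknown roles see nothing."""
    return artifact in ROLE_VIEWS.get(role, set())
```

A dual-role enterprise would assign a team both `provider_developer` and `deployer_operator` roles rather than merging the role definitions, which keeps provider and deployer compliance obligations separable in exactly the way described above.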
How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Build Scalable AI Governance
積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) provides Taiwan enterprises with end-to-end AI governance consulting services directly informed by research frameworks such as the one analyzed here. Our methodology maps precisely onto the five-step risk management kernel identified in the paper, adapted to Taiwan's regulatory context and organizational realities.
- Dual-Dimension Risk Assessment Implementation: We conduct structured assessments covering both technical model risks and governance-layer risks, using diagnostic frameworks aligned to ISO 42001:2023 Clause 6.1 and EU AI Act Article 9. This ensures enterprises surface the governance blind spots that technical-only audits systematically miss — including decision accountability gaps, documentation deficiencies, and leadership oversight weaknesses that regulators are increasingly scrutinizing.
- Audit-Proof Governance Infrastructure Design: We design and implement tamper-evident record-keeping systems and role-based access architectures that satisfy both ISO 42001:2023's management system documentation requirements and EU AI Act Article 17's quality management system obligations. These systems are built for integrability with existing GRC platforms, minimizing adoption friction and maximizing return on existing compliance investments.
- Role-Based AI Governance Training and Capability Building: We deliver differentiated training programs for AI product owners, technical development teams, legal and compliance officers, and senior management — ensuring that ISO 42001 governance requirements are operationalized as daily behaviors rather than remaining as paper policies. Our standard engagement delivers initial mechanism establishment within 90 days, with full certification readiness achievable within 6 months for most Taiwan mid-to-large enterprises.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwan enterprises establish ISO 42001-aligned management systems within 90 days.
Apply for Free Governance Diagnostic →

Frequently Asked Questions
- How should an enterprise begin identifying "top-down" governance risks in its AI systems?
- Top-down governance risk identification begins with a structured assessment of your organization's AI decision-making processes, accountability structures, and documentation practices — not with technical model testing. Practically, this means mapping every AI system against three questions: Who is accountable for this system's outputs and errors? Is that accountability formally documented and auditable? Does leadership have the information and capacity to exercise meaningful oversight? ISO 42001:2023 Clause 5 provides a concrete checklist for leadership and governance requirements. Winners Consulting Services Co. Ltd. recommends starting with a 2-day diagnostic workshop combining structured questionnaires and stakeholder interviews, which typically surfaces 5–8 material governance gaps in enterprises with no prior AI management system implementation.
- Does the EU AI Act apply to Taiwan enterprises that do not have a European legal entity?
- Yes, in many circumstances. The EU AI Act applies to any provider that places an AI system on the EU market or puts it into service in the EU, and to any deployer of AI systems located in the EU — regardless of where the provider is headquartered. For Taiwan enterprises, this extraterritorial scope is triggered whenever their AI-enabled products or services are accessed by EU-based customers, or when they supply AI components to European supply chain partners who deploy those components within the EU. Additionally, many Taiwan enterprises' European customers are beginning to contractually require EU AI Act compliance documentation as a procurement condition, creating de facto compliance obligations even for systems that might fall below the Act's direct regulatory threshold. Winners Consulting Services Co. Ltd. recommends a scoping assessment to map your specific EU AI Act exposure profile before designing a compliance response.
- What does ISO 42001:2023 certification actually require, and how does it relate to EU AI Act compliance?
- ISO 42001:2023 is the world's first international standard for AI Management Systems (AIMS), requiring organizations to establish a structured management framework covering the full AI lifecycle. Key requirements include: risk and opportunity identification (Clause 6.1), AI policy establishment (Clause 5.2), operational planning and control (Clause 8), AI supply chain management (Clause 8.4), and performance evaluation and continuous improvement (Clauses 9–10). The EU AI Act's high-risk AI system requirements under Article 9 (risk management system), Article 11 (technical documentation), and Article 17 (quality management system) are substantially aligned with ISO 42001:2023's management system requirements. Organizations that achieve ISO 42001 certification will therefore have a strong foundation for EU AI Act compliance.
Want to apply these insights to your enterprise?
Get a Free Assessment