Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, draws urgent attention to a landmark 2025 academic review that exposes a structural paradox at the heart of global AI compliance: the very frameworks designed to make AI trustworthy—including the EU AI Act (Regulation (EU) 2024/1689), ISO/IEC 42001, the NIST AI Risk Management Framework, and the OECD AI Principles—systematically disadvantage the organizations that most need governance support. Small and medium enterprises, municipalities, and public authorities face disproportionate compliance burdens that were never calibrated to their actual resource capacities.
Paper Citation: Gaps in AI-Compliant Complementary Governance Frameworks' Suitability for Low-Capacity Actors, and Structural Asymmetries in the Compliance Ecosystem: A Review (W. Holmes Finch & Marya Butt, Preprints.org, 2025; indexed in OpenAlex under AI Governance)
Original Paper: https://doi.org/10.20944/preprints202509.1979.v1
About the Authors and This Research
This paper is co-authored by W. Holmes Finch and Marya Butt, published as a 2025 preprint and indexed in OpenAlex under AI Governance. Marya Butt carries an h-index of 2 with 28 cumulative citations, specializing in AI regulatory policy and institutional feasibility analysis, particularly the power asymmetries embedded within compliance ecosystems. While these citation metrics reflect an emerging scholarly profile rather than an established heavyweight, the paper's value lies precisely in its critical realism. Rather than endorsing prevailing frameworks, the authors conduct a structured literature review spanning regulatory, ethical, and governance sources to systematically identify how four of the world's most widely cited AI governance frameworks fail their intended beneficiaries when applied to organizations without the technical and financial infrastructure that the frameworks implicitly presuppose.
The research covers the EU AI Act (adopted by the European Parliament on 13 March 2024 and in force since 1 August 2024 as Regulation (EU) 2024/1689), the Assessment List for Trustworthy AI (ALTAI) as a soft-law instrument, ISO/IEC 42001 as an auditable management system standard, the NIST AI Risk Management Framework, and the OECD AI Principles. By mapping role-specific obligations (Provider, Deployer, Importer, Distributor) against the structural capacities of low-resource actors, the paper contributes a structural critique that Taiwanese enterprise leaders urgently need to internalize.
Core Finding: Four Frameworks, One Shared Blind Spot Toward Low-Capacity Actors
The central argument of this paper is one that should concern every Taiwanese enterprise leader planning AI compliance investment: current AI governance frameworks are architecturally biased toward technologically advanced, resource-rich providers, and this bias is not accidental—it is structural. The compliance obligations, documentation requirements, and auditability standards embedded in these frameworks were designed with large AI developers in mind, and the proportionality provisions intended to accommodate smaller actors are insufficient to bridge the gap.
Finding 1: The EU AI Act's Risk-Tiered Architecture Creates Auditability Gaps for Deployers
The EU AI Act establishes four risk tiers: unacceptable risk (prohibited), high risk (Article 9 risk management, Article 11 technical documentation, Article 13 transparency, Article 17 quality management system), limited risk (transparency obligations), and minimal risk (voluntary codes of conduct). The Act distributes obligations across the AI value chain, but the research reveals that high-risk deployers—organizations using AI systems built by third-party providers—face documentation and auditability requirements that assume internal technical capacity their organizations typically do not possess. A municipality deploying an AI-based permit evaluation system, for instance, must maintain risk management documentation and ensure human oversight mechanisms, yet has no dedicated AI compliance team, no legal department specialized in AI regulation, and no budget for third-party audit preparation. This is the auditability gap: the obligation exists in law, but the institutional infrastructure to fulfill it does not exist in practice for the majority of deploying organizations.
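The tiered structure described above can be sketched as a simple lookup from risk tier to headline obligations. This is an illustrative summary for orientation only, not legal advice; the article references for limited-risk transparency (Article 50) and prohibited practices (Article 5) are added here from the Act itself, while the high-risk obligations are those named in the paragraph above.

```python
# Illustrative sketch (not legal advice): the EU AI Act's four risk tiers
# and the headline obligations attached to each.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. Annex III use cases
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited (Article 5)"],
    RiskTier.HIGH: [
        "risk management system (Article 9)",
        "technical documentation (Article 11)",
        "transparency and information to deployers (Article 13)",
        "quality management system (Article 17)",
    ],
    RiskTier.LIMITED: ["transparency obligations (Article 50)"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

A deployer-side readiness review can start from exactly this kind of checklist: for each system classified as high risk, each listed obligation must map to a named owner and a documented artifact.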
Finding 2: ALTAI's Normative Value Lacks an Operational Bridge to ISO 42001
ALTAI—the Assessment List for Trustworthy AI developed by the EU High-Level Expert Group on AI—represents the most sophisticated soft-law ethical framework currently available. Its seven dimensions (Human Agency and Oversight; Technical Robustness and Safety; Privacy and Data Governance; Transparency; Diversity, Non-discrimination and Fairness; Societal and Environmental Well-being; and Accountability) provide a comprehensive normative scaffold. However, the paper identifies a critical methodological gap: ALTAI was designed as a self-assessment tool, not as an audit-ready compliance protocol. When organizations attempt to translate ALTAI principles into the documented controls, procedures, and evidence trails required for ISO 42001 certification, they encounter a translation gap that no existing guidance document adequately bridges. This means that organizations which diligently complete ALTAI self-assessments are still not producing the auditable artifacts that ISO 42001 third-party certification requires. Winners Consulting Services Co. Ltd. has developed a proprietary mapping methodology that systematically connects ALTAI's seven dimensions to ISO 42001 control requirements, directly addressing this gap.
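To make the translation gap concrete, a crosswalk from ALTAI dimensions to auditable artifact types can be sketched as below. The artifact names and pairings are illustrative assumptions for this sketch only; they are not the firm's proprietary mapping and not an authoritative reading of ISO/IEC 42001.

```python
# Hypothetical crosswalk: ALTAI's seven self-assessment dimensions mapped
# to the kinds of documented artifacts a management-system auditor would
# expect. Pairings are illustrative assumptions, not an official mapping.
ALTAI_TO_ARTIFACTS = {
    "Human Agency and Oversight": ["human-oversight procedure", "use-of-AI policy"],
    "Technical Robustness and Safety": ["life-cycle controls", "performance monitoring records"],
    "Privacy and Data Governance": ["data governance procedure", "data quality records"],
    "Transparency": ["system documentation", "information for interested parties"],
    "Diversity, Non-discrimination and Fairness": ["AI impact assessment", "bias evaluation records"],
    "Societal and Environmental Well-being": ["AI impact assessment", "impact monitoring records"],
    "Accountability": ["internal audit records", "management review minutes"],
}


def evidence_gaps(existing_artifacts: set[str]) -> dict[str, list[str]]:
    """For each ALTAI dimension, list the mapped artifacts an organization
    has not yet produced; dimensions with no gaps are omitted."""
    return {
        dimension: [a for a in artifacts if a not in existing_artifacts]
        for dimension, artifacts in ALTAI_TO_ARTIFACTS.items()
        if any(a not in existing_artifacts for a in artifacts)
    }
```

The point of the sketch is the shape of the work: a completed ALTAI questionnaire answers "have we thought about this dimension?", while certification asks "where is the document that proves it?", and the crosswalk is what connects the two.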
Finding 3: Compliance Ecosystem Asymmetries Reinforce Market Concentration
Perhaps the most consequential finding for Taiwanese businesses is the paper's structural observation about market dynamics. When compliance costs—technical documentation, conformity assessment, third-party audit, post-market monitoring—are calibrated to the operational capacity of large AI providers, the compliance ecosystem naturally consolidates around organizations that can absorb these costs. Smaller organizations face a binary choice: over-invest in compliance infrastructure relative to their size, or accept market exclusion from regulated domains. For Taiwan, where over 97% of enterprises are classified as SMEs according to the Small and Medium Enterprise Administration, this structural asymmetry is not a theoretical concern—it is an imminent competitive threat. Taiwanese AI solution providers exporting to European markets, or deploying systems affecting EU-based users, face the full weight of EU AI Act obligations with a fraction of the compliance infrastructure that their European counterparts possess.
Implications for Taiwan's AI Governance Practice: Three Levels of Urgency
The research findings translate into three concrete levels of urgency for Taiwanese enterprise leaders, each requiring different timelines and investment priorities.
Level 1: Dual-Track Compliance Exposure. Taiwan's AI Basic Law (人工智慧基本法) is advancing through legislative development with a risk-based regulatory architecture explicitly modeled on EU AI Act principles. Taiwanese enterprises therefore face simultaneous compliance obligations under two converging frameworks. The structural asymmetries identified in this paper will replicate in Taiwan's domestic compliance ecosystem just as they have in Europe. ISO/IEC 42001 certification serves as the most efficient bridging mechanism—it provides a single auditable management system framework that simultaneously satisfies EU AI Act Article 40's harmonized standard presumption and the risk management requirements anticipated in Taiwan's AI Basic Law.
Level 2: Risk Classification Readiness. EU AI Act Annex III enumerates eight high-risk AI application domains: biometric identification, critical infrastructure management, educational and vocational training, employment and worker management, access to essential services (including credit assessment), law enforcement, migration and border control, and administration of justice. Taiwanese enterprises in financial technology, human resources technology, logistics, and manufacturing should conduct immediate AI system inventories to determine whether their core AI applications fall within these high-risk classifications. The paper's finding that compliance obligations are heaviest precisely where organizational capacity is most limited makes proactive risk classification—rather than reactive compliance—the economically rational strategy.
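The inventory step described above can begin with a crude keyword screen of each system's description against the Annex III domain labels. This is a first-pass triage sketch only; the keyword lists are illustrative assumptions, and any hit (or near-miss) still requires proper legal classification.

```python
# First-pass inventory triage (not a legal classification): flag AI system
# descriptions that mention Annex III high-risk domains. Keyword lists are
# illustrative assumptions chosen for this sketch.
ANNEX_III_DOMAINS = {
    "biometric identification": ["biometric", "face recognition"],
    "critical infrastructure management": ["power grid", "water supply", "traffic control"],
    "education and vocational training": ["admission scoring", "exam proctoring"],
    "employment and worker management": ["cv screening", "hiring", "performance rating"],
    "access to essential services": ["credit scoring", "insurance pricing"],
    "law enforcement": ["predictive policing", "evidence analysis"],
    "migration and border control": ["visa triage", "asylum assessment"],
    "administration of justice": ["sentencing support", "case outcome prediction"],
}


def screen_system(description: str) -> list[str]:
    """Return the Annex III domains whose keywords appear in a
    system description (a prompt for legal review, not a verdict)."""
    text = description.lower()
    return [domain for domain, keywords in ANNEX_III_DOMAINS.items()
            if any(kw in text for kw in keywords)]
```

Running every system in the inventory through such a screen produces the shortlist that a dual-track (EU and Taiwan) risk register is then built from.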
Level 3: Auditability Infrastructure Development. The gap between governance intent and auditable evidence is Taiwan's most pervasive AI governance challenge. Many Taiwanese enterprises have developed meaningful AI governance practices—AI ethics principles, informal oversight mechanisms, responsible use guidelines—but have not translated these practices into the documented procedures, decision records, risk assessment trails, and performance monitoring data that ISO 42001 certification and EU AI Act compliance require. Building auditability infrastructure is not about replacing existing practices; it is about making those practices visible, verifiable, and defensible to external scrutiny.
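The shift from informal practice to auditable evidence described above is, at bottom, a record-keeping discipline. A minimal sketch of one such artifact, a documented governance decision, might look like the following; the field names are illustrative assumptions, not a prescribed schema from the Act or the standard.

```python
# Minimal sketch of an auditable decision record: the kind of artifact that
# turns an informal governance practice into verifiable evidence.
# Field names are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIDecisionRecord:
    system_name: str          # which AI system the decision concerns
    decision: str             # what was decided (e.g. "approved for pilot")
    rationale: str            # why, referencing the underlying risk assessment
    decided_by: str           # the accountable role, not just a person's name
    risk_assessment_ref: str  # pointer to the risk assessment document
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def as_evidence(self) -> dict:
        """Serialize into the evidence trail an auditor would review."""
        return asdict(self)
```

What matters is not the schema but the habit: every oversight decision leaves a dated, attributable, retrievable record, which is exactly the property that distinguishes "we have governance practices" from "we can pass an audit."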
How Winners Consulting Services Co. Ltd. Helps Taiwanese Enterprises Overcome Structural Compliance Barriers
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) provides end-to-end AI governance consulting services specifically designed to address the structural compliance challenges identified in this research. Our approach translates academic findings into executable enterprise governance programs.
- Lightweight Compliance Architecture Design: We design ISO 42001-compliant AI management systems scaled to your organization's actual size and resource capacity—not to the implicit large-enterprise assumptions embedded in the standard's generic requirements. By systematically mapping ALTAI's seven normative dimensions to ISO 42001 control requirements, we bridge the methodological gap identified in this paper and produce documentation that satisfies both soft-law ethics assessments and hard-law audit requirements.
- Dual-Track Risk Classification and Gap Analysis: We conduct structured AI system inventories against EU AI Act Annex III high-risk classifications and Taiwan AI Basic Law risk management requirements, producing a single integrated risk register that serves both compliance frameworks. This eliminates the redundant documentation burden that compounds compliance costs for resource-limited organizations.
- 90-Day Audit Readiness Acceleration: Our flagship program compresses ISO 42001 certification preparation into 90 days through phased execution: Month 1 covers current-state assessment and gap analysis; Month 2 focuses on documentation architecture and control implementation; Month 3 delivers internal audit execution, management review, and mock certification audit. For enterprises with existing ISO 27001 or ISO 9001 foundations, the integration pathway reduces preparation time by approximately 30%.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish ISO 42001-compliant management systems within 90 days.
Apply for Free Governance Diagnostic →

Frequently Asked Questions
- Does the EU AI Act actually apply to Taiwanese companies that don't have a European subsidiary?
- Yes. The Act's territorial scope (Article 2) reaches providers that place AI systems on the EU market regardless of where they are established, and extends to providers and deployers in third countries whenever the system's output is used in the EU.
Want to apply these insights to your enterprise?
Get a Free Assessment