Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical finding from cutting-edge 2025 research: the ethical quality of corporate AI is not a function of technical sophistication—it is a direct product of legal framework design. For Taiwan enterprises navigating simultaneous pressure from the EU AI Act, ISO 42001 certification requirements, and Taiwan's forthcoming AI Basic Act, this research delivers a precise and actionable diagnosis of where corporate AI governance systematically fails and what it takes to fix it.
Paper Citation: The Role of Legal Frameworks in Shaping Ethical Artificial Intelligence Use in Corporate Governance (Shahmar Mirishli, arXiv — AI Governance & Ethics, 2025)
Original Paper: http://arxiv.org/abs/2503.14540v1
About the Author and This Research
Shahmar Mirishli is an emerging scholar working at the intersection of AI governance and legal frameworks, publishing in the arXiv AI Governance & Ethics category. With 16 cumulative citations and an h-index of 2, Mirishli represents the new generation of cross-disciplinary researchers whose work bridges legal theory, technology policy, and corporate governance practice. While these citation metrics reflect an early-career researcher, the timing of this 2025 paper is strategically significant: it arrives just as the EU AI Act enters enforcement (in force since August 1, 2024), ISO 42001 (released December 2023) gains adoption, and Taiwan's AI Basic Act draft moves into legislative deliberation.
Mirishli's methodology draws on legal analysis, industry standards review, and multi-disciplinary academic synthesis—a combination that produces the kind of integrated governance framework that senior executives and board directors can directly apply to strategic decision-making. The value of this research to Taiwan enterprises lies not in citation volume but in the systematic clarity with which it maps the current global regulatory landscape onto actionable corporate governance imperatives.
Three Core Findings: How Legal Frameworks Determine Corporate AI Governance Success
The paper's central thesis is both elegant and urgent: the governance quality of AI applications within enterprises is structurally determined by the quality of legal frameworks surrounding them. Through analysis of recent legislative initiatives, industry standards including ISO 42001, and academic perspectives, the research arrives at three findings with direct board-level implications.
Core Finding 1: Principle-Based Regulation Outperforms Rule-Based Regulation—But Only When Paired with Sector-Specific Guidance
The research demonstrates that highly prescriptive, rule-based regulatory approaches consistently fail to keep pace with AI's rapid capability evolution. By the time detailed rules complete the legislative process, the technology has often advanced beyond the regulatory scope. Principle-based regulation—which focuses on outcomes such as transparency, accountability, and fairness rather than specific technical requirements—proves more durable and adaptive. However, Mirishli's analysis makes a critical qualification: principles without sector-specific implementation guidance leave enterprises in a state of "correct principles, uncertain execution." This finding directly validates the design philosophy of ISO 42001, which provides a principles-based management system framework that enterprises must contextualize to their specific industry and risk profile. It also explains why the EU AI Act's risk-tier architecture (unacceptable risk, high risk, limited risk, and minimal risk) was structured the way it was—to combine principled risk governance with practical operational requirements.
Core Finding 2: Transparency, Accountability, and Fairness Are the Three Structural Failure Points of Corporate AI Governance
The paper's systematic evaluation of current enterprise AI governance reveals that transparency, accountability, and fairness are not abstract ethical ideals—they are the three specific dimensions where corporate AI governance most consistently and most consequentially breaks down. Transparency failures prevent stakeholders from understanding how AI decisions are made, undermining both regulatory compliance and stakeholder trust. Accountability gaps mean that when AI systems produce errors—in hiring, credit assessment, medical diagnostics, or supply chain decisions—no clear mechanism exists to identify responsibility and drive correction. Fairness failures, driven by training data bias, produce systemic discrimination in precisely the high-stakes applications where AI is being deployed most aggressively. Each of these three dimensions maps directly onto specific articles and requirements within the EU AI Act (Article 13 on transparency obligations and Article 17 on quality management systems) and ISO 42001 Clause 6 on planning for risks and opportunities.
Core Finding 3: Current Frameworks Are Limited by Adaptability Gaps and Cross-Border Coordination Deficits
Even the most advanced current AI legal frameworks face two systemic limitations that enterprises must anticipate rather than merely react to. First, regulatory update cycles lag behind AI capability advancement, creating persistent regulatory gaps where enterprise AI applications operate in zones of legal ambiguity. Second, multinational enterprises—and Taiwan's export-oriented enterprises engaging with EU, US, and regional markets simultaneously—face conflicting compliance requirements across jurisdictions with no effective international coordination mechanism in place. This finding is acutely relevant to Taiwan, where enterprises must simultaneously navigate Taiwan's AI Basic Act principles, EU AI Act requirements for EU market access, and supply chain partners' own compliance expectations.
What This Research Means for Taiwan AI Governance Practice
Taiwan enterprises are at a governance inflection point that this research illuminates with unusual precision. Three regulatory frameworks are converging simultaneously in 2025: the EU AI Act, which entered into force on August 1, 2024 and applies extraterritorially to Taiwan enterprises serving EU markets; ISO 42001, released December 2023, which is increasingly appearing as a supplier qualification requirement in global procurement; and Taiwan's AI Basic Act draft, which establishes risk-tiered management and "human-centered AI" as foundational governance principles.
First Implication: AI Risk Classification Is Now a Legal Obligation, Not a Strategic Option. The EU AI Act's four-tier risk classification—unacceptable, high, limited, and minimal risk—creates binding compliance obligations for high-risk AI applications including those used in employment, credit, critical infrastructure, and law enforcement. Taiwan's AI Basic Act adopts the same risk-tiered management principle. Every Taiwan enterprise must immediately conduct an AI application inventory to determine which applications fall into regulated risk categories. This is the non-negotiable starting point for all subsequent compliance action.
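The inventory-and-classify step described above can be sketched as a simple triage helper. The four tiers follow the EU AI Act's risk categories; the example applications, their tier assignments, and the prioritization logic are illustrative assumptions only—actual classification requires legal review against the Act's Annex III categories.

```python
# Illustrative AI application risk inventory sketch.
# Tier names follow the EU AI Act; the example applications and
# their assigned tiers are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIApplication:
    name: str
    business_function: str
    tier: RiskTier

# Hypothetical inventory entries for a mid-size enterprise.
inventory = [
    AIApplication("resume-screening", "HR / employment", RiskTier.HIGH),
    AIApplication("credit-scoring", "finance", RiskTier.HIGH),
    AIApplication("support-chatbot", "customer service", RiskTier.LIMITED),
    AIApplication("spam-filter", "IT", RiskTier.MINIMAL),
]

def compliance_queue(apps):
    """Order applications by regulatory urgency (highest risk first)."""
    order = [RiskTier.UNACCEPTABLE, RiskTier.HIGH,
             RiskTier.LIMITED, RiskTier.MINIMAL]
    return sorted(apps, key=lambda a: order.index(a.tier))

for app in compliance_queue(inventory):
    print(f"{app.tier.value:>12}  {app.name} ({app.business_function})")
```

The sorted output gives compliance teams a work queue: employment and credit applications surface first, matching the high-risk categories the Act regulates most strictly.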
Second Implication: ISO 42001 Is the Most Operationally Viable Bridge Between Enterprise Governance and Legal Framework Compliance. Mirishli's finding that principle-based regulation works best when paired with sector-specific guidance perfectly describes what ISO 42001 provides: a principles-based AI management system framework that enterprises adapt to their specific context. For Taiwan's SMEs that cannot maintain dedicated EU AI Act legal teams, ISO 42001 certification provides the most accessible and internationally recognized path to demonstrable compliance. The standard is architecturally compatible with ISO 27001 (information security) and ISO 9001 (quality management), reducing implementation burden for enterprises with existing ISO foundations.
Third Implication: Transparency and Accountability Must Be Board-Level Governance Commitments. The research's identification of transparency and accountability as structural failure points—not technical shortcomings—means that AI governance cannot be delegated entirely to IT or compliance departments. Board-level risk oversight must include AI governance as a standing agenda item. Enterprises need cross-functional AI governance committees, documented AI decision logs, and clear accountability matrices that specify responsibility ownership for each high-risk AI application. ISO 42001 Clause 5 explicitly requires top management to take ownership of AI governance responsibility—a requirement that aligns with the direction Taiwan's AI Basic Act is heading.
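An accountability matrix of the kind mentioned above can be as simple as a RACI-style mapping from each high-risk application to a single accountable owner. The role titles and application names below are illustrative assumptions, not a prescribed ISO 42001 artifact; the point is that a missing owner should be detectable as a governance gap, not discovered after an incident.

```python
# Minimal RACI-style accountability matrix sketch for high-risk AI
# applications. All names and roles are hypothetical examples.
accountability_matrix = {
    "resume-screening": {
        "accountable": "Chief Risk Officer",
        "responsible": "HR Analytics Lead",
        "consulted": ["Legal Counsel", "Data Protection Officer"],
        "informed": ["Board Risk Committee"],
    },
    "credit-scoring": {
        "accountable": "Chief Risk Officer",
        "responsible": "Credit Model Owner",
        "consulted": ["Model Validation Team"],
        "informed": ["Board Risk Committee"],
    },
}

def owner_of(application: str) -> str:
    """Return the single accountable owner, or flag a governance gap."""
    entry = accountability_matrix.get(application)
    if entry is None or not entry.get("accountable"):
        return f"GOVERNANCE GAP: no accountable owner for '{application}'"
    return entry["accountable"]

print(owner_of("credit-scoring"))      # Chief Risk Officer
print(owner_of("demand-forecasting"))  # flags a missing owner
```

Treating "no accountable owner" as an explicit, queryable state is what turns the accountability deficit the research describes into something a board committee can review on a standing agenda.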
How Winners Consulting Services Helps Taiwan Enterprises Build Compliant AI Governance
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) provides end-to-end AI governance consulting that directly addresses the structural governance gaps identified in Mirishli's research. Our service framework guides Taiwan enterprises from initial diagnosis through ISO 42001 certification readiness, with specific expertise in EU AI Act cross-border compliance strategy and Taiwan AI Basic Act alignment.
- AI Application Risk Inventory and Classification: Using the EU AI Act's four-tier risk framework and Taiwan AI Basic Act risk management principles as dual reference points, we conduct comprehensive audits of existing and planned AI applications. The output is a prioritized AI risk register with clear compliance action mapping—the essential foundation that Mirishli's research identifies as the starting point for any effective AI governance regime.
- ISO 42001 AI Management System Design and Implementation: We design AI management systems built on ISO 42001's principles-based architecture, specifically engineered to close the transparency, accountability, and fairness gaps that the research identifies as the three structural failure points of corporate AI governance. Our implementation methodology includes AI transparency policy development, accountability matrix construction, fairness review processes, and continuous monitoring mechanisms.
- Cross-Border AI Compliance Strategy: For Taiwan's export-oriented enterprises simultaneously subject to Taiwan's AI Basic Act, EU AI Act requirements for EU market access, and supply chain compliance expectations, we develop integrated multi-framework compliance strategies that eliminate redundancy, minimize cost, and build governance architectures that can scale with evolving regulatory requirements across jurisdictions.
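The redundancy-elimination idea in the cross-border strategy above can be illustrated with a control-to-framework mapping: one implemented control satisfies obligations in several regimes at once. The framework names are real; the control IDs and their mappings are hypothetical examples for illustration.

```python
# Illustrative control-to-framework mapping used to deduplicate
# overlapping obligations across regimes. Control IDs and the
# specific mappings are hypothetical examples.
controls = {
    "CTL-01 transparency notice": {"EU AI Act Art. 13", "ISO 42001"},
    "CTL-02 accountability matrix": {"EU AI Act Art. 17", "ISO 42001",
                                     "TW AI Basic Act"},
    "CTL-03 bias testing": {"EU AI Act", "TW AI Basic Act"},
}

def coverage(framework: str):
    """Controls that already satisfy an obligation in `framework`."""
    return sorted(c for c, frameworks in controls.items()
                  if any(framework in f for f in frameworks))

print(coverage("ISO 42001"))
print(coverage("TW AI Basic Act"))
```

Queried per framework, the same three controls answer for all three regimes—each control is implemented once and evidenced against every framework that requires it, which is where the cost savings in a multi-framework strategy come from.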
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Diagnostic, helping Taiwan enterprises establish an ISO 42001-compliant management system within 90 days.
Apply for Free Governance Diagnostic →
Frequently Asked Questions
- What are the most commonly overlooked legal risks in corporate AI governance?
- The most consistently overlooked risk is accountability chain failure—when an AI system produces an erroneous decision, most enterprises cannot clearly identify who bears responsibility for that decision. Mirishli's research identifies accountability deficit as the single most prevalent structural problem in corporate AI governance. EU AI Act Article 17 requires providers of high-risk AI systems to establish quality management systems with explicit documentation of decision processes and responsibility ownership. ISO 42001 Clause 5 requires top management to formally own AI governance accountability. Taiwan's AI Basic Act draft similarly emphasizes explicit responsibility attribution. Enterprises should immediately establish AI decision logs and accountability matrices that map clear responsibility ownership to each high-risk AI application, ensuring that when errors occur, correction mechanisms are already in place.
- How do Taiwan enterprises determine whether the EU AI Act applies to them?
- The EU AI Act applies extraterritorially based on market impact rather than geographic location of the enterprise. Taiwan enterprises are subject to EU AI Act requirements if any of the following apply: they provide AI-powered services or products to EU-based customers; they serve as suppliers to EU enterprises providing AI-containing components or services used in EU markets; they sell equipment or software with AI functionality in EU markets. The practical test is whether EU residents are affected by the AI system's outputs, regardless of where the AI is developed or operated. Taiwan enterprises with any EU market exposure should conduct an EU AI Act applicability self-assessment as an immediate priority, classifying each AI application by risk tier and establishing compliance timelines accordingly.
- What is the practical value of ISO 42001 certification for Taiwan enterprises?
- ISO 42001, published in December 2023, is the world's first international standard specifically addressing AI management systems. For Taiwan enterprises, certification delivers value at three levels. First, ISO 42001's framework is architecturally aligned with EU AI Act technical requirements, meaning certification establishes the governance foundation for EU AI Act compliance—addressing the principle-based regulation plus sector-specific guidance combination that Mirishli's research identifies as the most effective regulatory approach. Second, certification provides supply chain stakeholders and customers with internationally recognized evidence that the enterprise manages AI risks systematically, strengthening commercial competitiveness in global markets where AI governance scrutiny is intensifying. Third, ISO 42001 aligns with the responsible AI principles established in Taiwan's AI Basic Act, building regulatory goodwill in an environment of tightening domestic AI oversight.
- How long does it take to build an ISO 42001-compliant AI governance system?
- Based on Winners Consulting Services' implementation experience, the typical timeline follows four phases, beginning with Phase 1 (Weeks 1–4): current-state diagnosis and gap analysis against ISO 42001 requirements.