Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI governance, highlights a critical warning for Taiwanese business leaders: as the global AI regulatory landscape fractures into three incompatible blocs (the United States, the European Union, and China), enterprises that fail to build interoperable, standards-aligned AI governance frameworks by 2025 risk being locked out of several major markets at once. A landmark 2025 research paper by Satyadhar Joshi, already cited 5 times since publication, delivers the most comprehensive policy roadmap yet for navigating this fragmentation. It synthesizes ISO/IEC 42001, the EU AI Act, the NIST AI RMF, and emerging interoperability standards into a single actionable framework that directly informs what Taiwanese companies must do right now.
Paper Citation: Securing U.S. AI Leadership: A policy guide for regulation, standards and interoperability frameworks (Satyadhar Joshi, OpenAlex — AI Governance, 2025)
Original Paper: https://doi.org/10.30574/ijsra.2025.16.3.2519
About the Author and This Research
Satyadhar Joshi is an emerging voice in AI governance and technology policy research, with a specialization in the intersection of technical interoperability standards and cross-border regulatory frameworks. Published in 2025, this paper has accumulated 5 academic citations within months of publication—a strong indicator of its immediate relevance to the rapidly evolving global AI governance debate. The research draws on an unusually broad evidence base: industry whitepapers, government policy documents, and peer-reviewed academic literature are integrated to construct a panoramic view of where AI governance is heading globally.
What distinguishes Joshi's work is its refusal to treat technical and regulatory challenges as separate domains. The paper explicitly examines how organizations such as ISO/IEC JTC 1/SC 42, IEEE, and NIST are converging on shared standards—most notably ISO/IEC 42001 for AI management systems and the NIST AI Risk Management Framework (AI RMF)—while simultaneously tracking how the EU's risk-based AI Act, America's sector-by-sector strategy, and China's state-led standardization approach are diverging in ways that create structural barriers for globally operating enterprises. For Taiwanese companies that serve both American and European clients, this dual-track analysis is invaluable.
Global AI Regulatory Fragmentation Is Creating Market Access Barriers: Here Is What the Research Found
The central thesis of Joshi's research is that the global AI development ecosystem is undergoing severe fragmentation, and this fragmentation is not merely a matter of different legal texts—it reflects fundamentally divergent governance philosophies. The practical consequence for any enterprise deploying AI systems across borders is that compliance costs are rising steeply, and there is no single universal framework that satisfies all major regulatory environments simultaneously. The research then proceeds to identify the specific mechanisms driving this fragmentation and proposes concrete pathways to navigate it.
Core Finding 1: The EU AI Act's Risk-Based Classification Is Becoming a De Facto Global Market Access Requirement
Joshi's analysis places the EU AI Act's four-tier risk classification system—Unacceptable Risk (prohibited), High Risk (mandatory compliance), Limited Risk (transparency obligations), and Minimal Risk (no specific requirements)—at the center of the emerging global AI compliance landscape. High-risk categories include AI applications in healthcare, critical infrastructure, recruitment, and credit scoring, all of which require extensive documentation, conformity assessments, and ongoing monitoring before deployment in EU markets. The research makes clear that ISO/IEC 42001 is emerging as the primary management framework that bridges internal corporate governance with the external requirements of the EU AI Act. For Taiwanese technology exporters and service providers with European clients, this means that ISO 42001 alignment is no longer a voluntary best practice—it is rapidly becoming a procurement prerequisite.
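As a rough first-pass triage of where an application might land in the four-tier scheme, the lookup can be sketched in Python. This is illustrative only, not legal advice: the domain keywords below are simplifications we have assumed for the example, not the EU AI Act's formal Annex III definitions, and a formal conformity assessment must confirm any result.

```python
# Illustrative first-pass EU AI Act tier triage; the domain keyword
# sets are simplified assumptions, not the legal Annex III text.
UNACCEPTABLE = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"healthcare", "recruitment", "credit scoring",
             "critical infrastructure"}
LIMITED_RISK = {"chatbot", "content generation"}

def triage_risk_tier(application_domain: str) -> str:
    """Map an application-domain keyword to a provisional risk tier.
    A formal conformity assessment must confirm the result."""
    domain = application_domain.strip().lower()
    if domain in UNACCEPTABLE:
        return "Unacceptable Risk"
    if domain in HIGH_RISK:
        return "High Risk"
    if domain in LIMITED_RISK:
        return "Limited Risk"
    return "Minimal Risk (verify manually)"

print(triage_risk_tier("recruitment"))  # High Risk
print(triage_risk_tier("chatbot"))      # Limited Risk
```

Even a crude lookup like this makes the key operational point visible: the tier assignment, not the technology itself, determines the compliance workload.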
Core Finding 2: ISO/IEC 42001 and NIST AI RMF Are Converging as the Common Language of Global AI Governance
At the technical standards level, the research synthesizes the latest developments from ISO/IEC JTC 1/SC 42, IEEE, and NIST to argue that ISO/IEC 42001 and the NIST AI RMF are achieving de facto standard status as the baseline governance frameworks for AI management systems. The paper specifically highlights Model Cards (structured documentation of AI model characteristics, limitations, and intended use) and standardized data specifications as practical tools for achieving technical interoperability across organizational and national boundaries. Critically, Joshi projects that international cooperation on AI standardization protocols will accelerate significantly over the next five years, meaning enterprises that invest now in standards-compatible governance documentation will face far lower adaptation costs than those that delay.
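In practice, a Model Card can start as a simple structured record versioned alongside each deployed model. A minimal sketch in Python follows; the field names and the sample credit-scoring model are our own illustrative assumptions, not a schema mandated by ISO/IEC 42001 or the NIST AI RMF.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Structured documentation of an AI model's characteristics,
    limitations, and intended use (field names are illustrative)."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_metrics: dict = field(default_factory=dict)

    def to_json(self) -> str:
        # Serialize to JSON so the card can be versioned with the model
        return json.dumps(asdict(self), indent=2, ensure_ascii=False)

# Hypothetical example record
card = ModelCard(
    model_name="credit-scoring-v2",
    version="2.1.0",
    intended_use="Pre-screening of consumer credit applications",
    out_of_scope_uses=["automated final lending decisions"],
    known_limitations=["trained only on Taiwan-market data"],
    training_data_summary="2019-2023 anonymized loan applications",
    evaluation_metrics={"auc": 0.87},
)
print(card.to_json())
```

Because the card serializes to plain JSON, the same record can feed internal ISO 42001 documentation and external client or regulator disclosures without duplication.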
Core Finding 3: Interoperability Barriers Operate Simultaneously at Technical and Regulatory Levels, Compounding Each Other
One of the most strategically important findings of the research is that AI interoperability challenges are not purely technical. Data format inconsistencies, divergent model architectures, workflow orchestration complexity, and multi-agent framework integration difficulties constitute the technical barrier layer. Meanwhile, the fundamental philosophical divergence among the EU (preventive regulation), the United States (sector-specific strategies), and China (state-led standards) constitutes the regulatory barrier layer. These two barrier layers reinforce each other: technical fragmentation makes regulatory harmonization harder, and regulatory divergence discourages investment in common technical standards. Enterprises that address only one layer while ignoring the other will find their AI governance frameworks brittle and expensive to maintain.
Core Finding 4: A Forward-Looking Five-Year Scenario Points to Accelerating Standardization—And Growing Penalties for Late Movers
Joshi's research includes scenario projections for the next five years that carry direct strategic implications for Taiwanese business leaders. The research anticipates that international cooperation on standardization will deepen substantially, with ISO/IEC 42001 and aligned frameworks becoming baseline requirements in an expanding set of industries and jurisdictions. Enterprises that establish standards-compatible AI governance architectures now will benefit from compounding advantages—lower future compliance costs, stronger client trust, and faster market entry. Conversely, organizations that defer AI governance investment face a scenario where retrofit costs in three to five years will substantially exceed today's proactive investment costs.
Implications for Taiwan AI Governance Practice: The Compliance Window Is Closing
The most urgent message from Joshi's research for Taiwanese enterprises is that the window for proactive AI governance investment is narrowing rapidly. Taiwan introduced its draft AI Basic Act (人工智慧基本法) in 2024, setting out foundational principles covering accountability, risk management, and fundamental rights protection in AI applications. However, enterprise-level implementation across Taiwanese industry remains broadly underdeveloped. Meanwhile, the EU AI Act entered into force in 2024 with its obligations being phased in, and ISO 42001 certification is increasingly appearing as a vendor qualification requirement in European and North American procurement processes.
Taiwanese companies face three simultaneous compliance pressures that Joshi's framework helps to systematize. First, Taiwan's AI Basic Act requires AI systems to incorporate accountability and risk management mechanisms that align closely with ISO 42001's requirements—establishing a domestic regulatory foundation that rewards early ISO 42001 adopters. Second, the EU AI Act's extraterritorial reach means that any Taiwanese enterprise whose AI systems are used by EU-based clients or affect EU residents must comply regardless of where the company is headquartered—making EU AI Act risk classification an immediate operational concern, not a distant regulatory risk. Third, the five-year standardization acceleration scenario Joshi projects means that AI governance architectures built without cross-framework compatibility will require expensive redesign within the same planning horizon that most Taiwanese enterprises use for technology investment decisions.
The research's emphasis on risk-based classification aligns directly with the spirit of Taiwan's AI Basic Act, which similarly distinguishes between different levels of AI application risk requiring different governance intensities. This creates a clear integration opportunity: Taiwanese enterprises can build a single ISO 42001-centered management architecture that simultaneously satisfies domestic AI Basic Act requirements, EU AI Act high-risk compliance obligations, and international standard expectations—eliminating the resource waste of maintaining three separate compliance tracks.
How Winners Consulting Services Co. Ltd. Helps Taiwanese Enterprises Build Future-Ready AI Governance
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) assists Taiwanese enterprises in building AI management systems that comply with ISO 42001 and EU AI Act requirements, conducting AI risk classification assessments, and ensuring AI applications meet Taiwan's AI Basic Act standards while maintaining the cross-border interoperability that Joshi's research identifies as the defining competitive differentiator of the next five years.
- Conduct an AI Risk Classification Audit: Drawing directly on EU AI Act's four-tier risk framework and Taiwan's AI Basic Act accountability requirements, systematically inventory all existing AI applications, identify high-risk deployment scenarios, and establish prioritized compliance documentation and control measures. This audit is the foundation that prevents enterprises from unknowingly deploying EU-regulated high-risk AI without proper conformity documentation, which could result in market access disqualification or regulatory penalties.
- Implement ISO 42001 as the Unified Governance Architecture: Using ISO/IEC 42001 as the core management framework, establish AI policy documentation, role and responsibility matrices, risk assessment procedures, Model Card documentation templates, and NIST AI RMF-compatible monitoring metrics. This single-architecture approach simultaneously addresses Taiwan's AI Basic Act accountability requirements, EU AI Act high-risk compliance documentation needs, and international supply chain qualification standards—eliminating the redundancy and inconsistency of parallel compliance systems.
- Build Interoperability-Ready Governance Documentation: Based on Joshi's identified trajectory toward accelerating standardization through ISO/IEC JTC 1/SC 42 and IEEE, establish standards-compatible AI data specification documents, structured Model Cards, and cross-border deployment compliance review procedures. This positions the enterprise to absorb the next five years of standardization evolution at minimal additional cost, converting a governance investment made today into a compounding competitive advantage.
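The inventory-and-prioritize pass described in the first step above can be sketched as a short script. The system names, tiers, and the `conformity_docs_ready` flag are hypothetical placeholders for illustration only.

```python
from dataclasses import dataclass

# Illustrative inventory records; names and tiers are hypothetical.
@dataclass
class AISystem:
    name: str
    eu_risk_tier: str           # per a prior classification assessment
    conformity_docs_ready: bool

inventory = [
    AISystem("resume-screener", "High Risk", False),
    AISystem("support-chatbot", "Limited Risk", True),
    AISystem("demand-forecaster", "Minimal Risk", True),
]

# Prioritize systems that are EU high-risk but lack conformity
# documentation: these block EU market access first.
gaps = [s.name for s in inventory
        if s.eu_risk_tier == "High Risk" and not s.conformity_docs_ready]
print(gaps)  # ['resume-screener']
```

The point of the exercise is the ordering it produces: remediation budget goes first to high-risk systems with missing documentation, since those carry both regulatory penalty exposure and market-access risk.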
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant management framework within 90 days.
Apply for Free Mechanism Diagnostic →

Frequently Asked Questions
- How should a Taiwanese enterprise determine which EU AI Act risk tier its AI applications fall into?
- The EU AI Act classifies AI applications into four tiers: Unacceptable Risk (prohibited outright, e.g., social scoring systems), High Risk (mandatory compliance including healthcare AI, recruitment tools, credit scoring systems, and critical infrastructure management), Limited Risk (transparency obligations such as informing users they are interacting with a chatbot), and Minimal Risk (general applications with no specific requirements). Taiwanese enterprises should prioritize auditing healthcare AI, HR recruitment systems, credit assessment tools, and any AI systems used in critical operational infrastructure, as these most commonly fall into the High Risk category. A systematic risk classification assessment conducted by a consultant with EU AI Act expertise is the most reliable first step, as the classification determination directly determines the compliance investment required for EU market access.
- What is the relationship between Taiwan's AI Basic Act, ISO 42001, and the EU AI Act? Does a company need to comply with all three separately?
- The three frameworks share substantially overlapping core principles but differ in legal force and geographic scope. Taiwan's AI Basic Act is a domestic principles-based framework establishing accountability, risk management, and fundamental rights protection as governing principles for AI use in Taiwan. The EU AI Act is a regulation with extraterritorial effect—any AI system used within the EU or affecting EU residents must comply, regardless of where the developer or deployer is headquartered. ISO 42001 is a voluntary international standard that is rapidly becoming a de facto supply chain procurement requirement. Winners Consulting Services Co. Ltd. recommends using ISO 42001 as a unified integration framework that satisfies all three simultaneously, avoiding the resource cost and management complexity of maintaining three independent compliance tracks.
Related Services & Further Reading
Want to apply these insights to your enterprise?
Get a Free Assessment