Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a landmark 2025 study that should reshape how every Taiwanese enterprise approaches AI compliance: responsible AI cannot be achieved through piecemeal checklists. A new integrative framework published on arXiv demonstrates that organizations must weave together five interdependent dimensions—domain definition, trustworthy AI design, auditability, accountability, and governance—into a continuously iterating loop in order to meet the simultaneous demands of ISO 42001 certification, EU AI Act enforcement, and Taiwan's emerging AI Basic Law.
Paper Citation: A Framework for Responsible AI Systems: Building Societal Trust through Domain Definition, Trustworthy AI Design, Auditability, Accountability, and Governance (Andrés Herrera-Poyatos, Javier Del Ser, Marcos López de Prado, arXiv — AI Governance & Ethics, 2025)
Original Paper: http://arxiv.org/abs/2503.04739v2
About the Authors and This Research
This paper brings together three researchers whose combined expertise spans mathematics, machine learning, and quantitative finance—an unusual yet powerful combination for tackling AI governance at the institutional level.
Andrés Herrera-Poyatos holds an h-index of 8 with 231 cumulative citations, focusing on algorithm design and explainable AI. His co-author Javier Del Ser is a senior researcher at TECNALIA, Spain's largest private applied research center, with an h-index of 20 and 1,754 cumulative citations. Del Ser is internationally recognized for his work on trustworthy AI and federated learning, two areas directly critical to the framework presented in this paper. The third author, Marcos López de Prado, is a globally renowned expert in quantitative investment and AI applications, serving as an adjunct professor at Cornell University and founder of True Positive Technologies.
The cross-disciplinary nature of this authorship team gives the resulting framework a rare quality: it is technically rigorous while simultaneously grounded in institutional realities. For Taiwanese executives evaluating AI governance frameworks, this means the paper's recommendations are not purely academic—they reflect the operational constraints and legal exposure that real organizations face.
The Five-Dimensional RAIS Framework: Why Isolated Compliance Is No Longer Enough
The paper's central contribution is a comprehensive design blueprint for a Responsible AI System (RAIS) that explicitly rejects the fragmented, checklist-based approach that dominates current AI governance practice. The researchers systematically review global AI governance developments and identify three fundamental flaws in existing principles-based approaches: fragmentation across regulatory domains, an implementation gap between stated principles and actual deployment practice, and the absence of meaningful participatory governance mechanisms.
Core Finding 1: The Five Dimensions Must Form an Iterative Loop, Not a Linear Sequence
The RAIS framework requires that domain definition, trustworthy AI design, auditability, accountability, and governance operate as a continuously self-correcting system rather than sequential steps. Domain definition establishes the operational boundaries and risk context of an AI system—without this, subsequent design choices lack grounding. Trustworthy AI design ensures that the system embodies fairness, transparency, and robustness at the technical level. Auditability provides the structural basis for third-party verification, a requirement explicitly mandated for high-risk AI systems under the EU AI Act. Accountability assigns clear responsibility to individuals and institutions across the AI lifecycle, and governance coordinates oversight mechanisms across all stages from development through post-deployment.
The critical insight is interdependence: when any one of these dimensions changes—for example, when a model is retrained or deployed in a new market context—all other dimensions must be reassessed. This continuous recalibration is precisely what ISO 42001's Plan-Do-Check-Act (PDCA) cycle is designed to institutionalize, and it is what most current AI governance programs systematically fail to achieve.
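The recalibration logic described above can be sketched in a few lines of code. This is an illustrative model only, assuming one simple rule drawn from the paper's interdependence argument: a change in any one dimension places all five dimensions back under review until each is explicitly signed off. The class and method names are hypothetical; neither the paper nor ISO 42001 prescribes an implementation.

```python
from dataclasses import dataclass, field

# The five RAIS dimensions named in the paper; all other identifiers
# in this sketch are hypothetical.
DIMENSIONS = [
    "domain_definition",
    "trustworthy_design",
    "auditability",
    "accountability",
    "governance",
]

@dataclass
class RAISLoop:
    """Tracks which dimensions still need reassessment after a change event."""
    pending_review: set = field(default_factory=set)

    def register_change(self, changed_dimension: str) -> None:
        # Interdependence: a change in any one dimension puts every
        # dimension (including the changed one) back under review.
        if changed_dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {changed_dimension}")
        self.pending_review.update(DIMENSIONS)

    def sign_off(self, dimension: str) -> None:
        # A dimension leaves the review queue only after explicit sign-off,
        # mirroring the Check/Act steps of a PDCA cycle.
        self.pending_review.discard(dimension)

    def is_compliant(self) -> bool:
        return not self.pending_review

loop = RAISLoop()
loop.register_change("trustworthy_design")  # e.g. a model retrain
assert not loop.is_compliant()              # all five dimensions now pending
for dim in DIMENSIONS:
    loop.sign_off(dim)
assert loop.is_compliant()
```

The point of the sketch is the shape of the loop, not the bookkeeping: compliance is a state the system re-enters after every change, never a box checked once.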
Core Finding 2: The Implementation Gap Is the Most Dangerous Blind Spot in Current AI Governance
The researchers introduce the concept of the "implementation gap"—the chasm between an organization's stated AI governance principles and what actually happens when an AI system is deployed and operating in the real world. This gap is not a minor operational detail; it represents the primary vector through which AI risk materializes into legal liability, reputational damage, and societal harm.
To close this gap, the paper argues that organizations must establish two capabilities that most current governance programs lack: post-deployment monitoring (systematic observation of AI system behavior after launch) and risk-based auditing (ongoing verification that AI systems continue to meet their defined risk criteria over time). These are not optional enhancements—under the EU AI Act's requirements for high-risk AI systems, they are mandatory compliance obligations. For Taiwanese companies supplying AI-enabled products or services to European markets, this has immediate practical implications beginning in 2025.
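What post-deployment monitoring looks like in practice can be made concrete with one common check. The sketch below uses the population stability index (PSI), a widely used drift measure, to compare a model's live score distribution against its validation baseline; the thresholds and bin values are illustrative assumptions, as neither the paper nor the EU AI Act prescribes a specific metric.

```python
import math

def psi(baseline: list[float], live: list[float]) -> float:
    """Population stability index over pre-binned proportions
    (each list should sum to roughly 1.0)."""
    eps = 1e-6  # avoid log(0) for empty bins
    return sum(
        (l - b) * math.log((l + eps) / (b + eps))
        for b, l in zip(baseline, live)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # score-bin proportions at validation time
live     = [0.20, 0.22, 0.28, 0.30]  # proportions observed in production

score = psi(baseline, live)
# Common rule of thumb (illustrative, not regulatory):
#   < 0.1 stable, 0.1-0.25 investigate, > 0.25 escalate
status = (
    "stable" if score < 0.1
    else "investigate" if score < 0.25
    else "escalate"
)
```

A governance program then wires checks like this into the risk-based auditing loop: an "escalate" result triggers reassessment of the affected RAIS dimensions rather than merely logging an alert.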
Core Finding 3: Sector-Specific Adaptation Is Essential for Operationalization
The paper explicitly identifies sector-specific adaptation as one of the critical challenges for making the RAIS framework actionable. A framework designed for a financial services AI application will have fundamentally different domain definition parameters, risk tolerances, and accountability structures than one designed for a healthcare diagnostics system or a manufacturing quality control application. The researchers argue that generic governance frameworks, applied without sector-specific calibration, produce compliance theater rather than genuine risk reduction. This finding has direct implications for Taiwanese enterprises in semiconductor manufacturing, financial services, healthcare technology, and export-oriented software development—each sector requires a tailored instantiation of the five-dimensional framework.
Implications for Taiwan's AI Governance Practice: Navigating Three Regulatory Frameworks Simultaneously
Taiwanese enterprises in 2025 face an unprecedented convergence of regulatory pressure from three distinct frameworks, and this paper's findings speak directly to each.
ISO 42001, published in 2023 as the world's first international standard for AI management systems, requires organizations to establish systematic AI risk management, accountability mechanisms, and continuous improvement processes. Its PDCA structure maps directly onto the iterative loop described in the RAIS framework. Achieving ISO 42001 certification is no longer merely a competitive differentiator—it is increasingly a prerequisite for enterprise AI procurement in regulated industries and government-adjacent markets.
The EU AI Act entered into force in 2024, with its obligations phasing in on a staged timeline from 2025 onward, including requirements for high-risk AI systems. The extraterritorial application principle means that any Taiwanese company whose AI outputs are used by EU-based users, or that has business relationships with European partners, must comply. Penalties for non-compliance can reach 7% of global annual turnover, a risk that cannot be managed through benign neglect.
Taiwan's own AI Basic Law is advancing through the legislative process and is expected to establish a domestic AI risk classification system and accountability framework aligned with international standards. Organizations that build ISO 42001-compliant AI governance structures now will be substantially better positioned when domestic legislation creates mandatory compliance obligations.
The paper's emphasis on participatory governance carries a particularly important message for Taiwanese executives: organizations cannot afford to wait for regulators to define the rules before beginning to build governance capacity. The regulatory environment is evolving, and organizations that engage proactively—contributing to shaping governance norms rather than merely reacting to them—will achieve better outcomes on both compliance and innovation dimensions.
How Winners Consulting Services Helps Taiwanese Enterprises Implement the RAIS Framework
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) provides end-to-end AI governance consulting services designed to help Taiwanese enterprises bridge the implementation gap identified in this research. Our services are structured to address each of the five RAIS dimensions systematically, with practical deliverables at every stage.
- Five-Dimensional Governance Gap Assessment: We evaluate your existing AI applications against the RAIS framework's five dimensions and ISO 42001 clause requirements, producing a quantified gap report with prioritized remediation recommendations. This assessment identifies which AI systems present the highest risk exposure under both EU AI Act and Taiwan AI Basic Law classification criteria.
- Risk Classification System Design and Post-Deployment Monitoring Architecture: We design AI risk assessment matrices calibrated to your specific industry sector, incorporating EU AI Act high-risk category definitions and anticipated Taiwan AI Basic Law classification criteria. We then architect post-deployment monitoring indicator systems to ensure that AI systems continue to operate within defined risk parameters after launch—directly addressing the implementation gap identified in the research.
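A sector-calibrated risk matrix of the kind described in these services can be represented very simply. The sketch below is a hypothetical example: the tier labels loosely echo the EU AI Act's risk categories, but the likelihood and severity scales and the cell assignments are illustrative placeholders that a real engagement would calibrate per sector, not regulatory text.

```python
# Hypothetical likelihood x severity matrix mapping to risk tiers.
LIKELIHOOD = ["rare", "possible", "likely"]        # rows
SEVERITY   = ["minor", "significant", "critical"]  # columns

MATRIX = {
    ("rare", "minor"): "minimal",
    ("rare", "significant"): "limited",
    ("rare", "critical"): "high",
    ("possible", "minor"): "limited",
    ("possible", "significant"): "high",
    ("possible", "critical"): "high",
    ("likely", "minor"): "limited",
    ("likely", "significant"): "high",
    ("likely", "critical"): "unacceptable",
}

def classify(likelihood: str, severity: str) -> str:
    """Return the risk tier for a given likelihood/severity pair."""
    if likelihood not in LIKELIHOOD or severity not in SEVERITY:
        raise ValueError("unknown likelihood or severity level")
    return MATRIX[(likelihood, severity)]

assert classify("possible", "critical") == "high"
```

Sector calibration means the same structure carries different contents: a healthcare diagnostics system and a manufacturing quality-control system would populate the severity scale, and the cell assignments, quite differently.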
- ISO 42001 Certification Preparation and Audit Readiness: We provide comprehensive ISO 42001 certification support, from document system design and personnel training through internal audit simulation and certification body coordination. Our accelerated 90-day implementation pathway is designed to help Taiwanese enterprises achieve certification efficiently while simultaneously building genuine governance capability rather than compliance theater.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic to help Taiwanese enterprises identify their compliance gaps and build ISO 42001-aligned management systems within 90 days.
Apply for Free Mechanism Diagnostic →

Frequently Asked Questions
- How does the RAIS five-dimensional framework differ from our existing AI risk management process?
- The critical difference lies in dynamic interdependence versus static sequencing. Most organizations treat AI governance as a one-time exercise: complete a risk assessment, build documentation, obtain certification, and stop. The RAIS framework requires that all five dimensions continuously recalibrate in response to each other. When an AI model is retrained, domain definition must be re-verified, auditability mechanisms must be updated, and accountability assignments must be confirmed. This continuous recalibration is the operational expression of ISO 42001's PDCA cycle. Winners Consulting Services assesses your organization's capacity for dynamic recalibration and identifies where static compliance creates hidden risk exposure.
- What is the most common mistake Taiwanese companies make regarding EU AI Act compliance?
- The most dangerous misconception is that EU AI Act obligations only apply to companies physically located in the European Union. The Act applies extraterritorially: any AI system whose outputs affect EU-based users triggers compliance obligations, regardless of where the system's developer or operator is headquartered. Taiwanese SaaS companies serving European clients, manufacturers exchanging data with European supply chain partners, and Taiwanese subsidiaries of European groups all face direct EU AI Act exposure. The Act's obligations began phasing in from 2025, with high-risk AI requirements following on a staged timeline and penalties of up to 7% of global annual turnover for serious violations. Early compliance assessment is strongly advised.
- What is the practical significance of ISO 42001 certification, and how does it relate to EU AI Act compliance?
- ISO 42001, published in 2023, is the world's first international AI management system standard. It requires organizations to establish systematic AI risk identification, accountability structures, and continuous improvement mechanisms—precisely the institutional infrastructure described in the RAIS framework's governance and accountability dimensions. ISO 42001 certification provides documented evidence of governance maturity that is directly relevant to demonstrating EU AI Act compliance in areas including risk management documentation, transparency obligations, and accountability mechanisms. Taiwan's AI Basic Law is expected to establish similar requirements. Achieving ISO 42001 certification therefore represents a strategic investment that simultaneously addresses three regulatory frameworks, maximizing return on compliance investment.
- How long does it take to build an ISO 42001-compliant AI governance framework, and what are the specific steps?
- Based on Winners Consulting Services' implementation experience with Taiwanese enterprises, building an ISO 42001-compliant framework from baseline typically requires 90 to 180 days, depending on organizational size and AI application complexity. The process follows four stages: Stage 1 (Days 1–30): current-state diagnostic and gap analysis against ISO 42001 clause requirements; Stage 2 (Days 31–60): governance mechanism design, including AI risk assessment matrix, accountability procedure documentation, and audit trail architecture; Stage 3 (Days 61–90): implementation and training, including personnel competency development and internal audit simulation; Stage 4 (Days 91–180): external certification preparation, ongoing monitoring indicator establishment, and continuous improvement mechanism activation. Organizations with complex AI portfolios or multiple high-risk AI applications should anticipate the longer end of this range and begin immediately.
- Why should Taiwanese enterprises choose Winners Consulting Services for AI governance advisory?
- Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of Taiwan's few consulting organizations with demonstrated capability across ISO 42001 implementation, EU AI Act compliance analysis, and Taiwan AI Basic Law policy interpretation. Our team continuously tracks international AI governance research and regulatory development to ensure that client recommendations reflect current global best practice. Unlike generalist consultancies, our AI governance practice is structured around the operational dimensions—domain definition, trustworthy design, auditability, accountability, and governance—that research demonstrates are essential for genuine risk reduction rather than compliance theater. We offer a complimentary AI governance mechanism diagnostic as a no-risk entry point, enabling Taiwanese enterprises to quickly understand their compliance gaps and prioritize remediation investments with confidence.
Want to apply these insights to your enterprise?
Get a Free Assessment