Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a pivotal 2024 research finding: AI guardrails must be customizable, transparent, and conflict-aware to be genuinely effective, and organizations that deploy AI without such mechanisms expose themselves to serious compliance gaps under ISO 42001, the EU AI Act, and Taiwan's emerging AI Basic Act. This analysis examines why this academic work matters for every Taiwanese business deploying AI today.
Paper Citation: AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development (Kristina Šekrst, Jeremy McHugh, Jonathan Rodriguez Cefalu, arXiv preprint arXiv:2411.14442, 2024)
Original Paper: http://arxiv.org/abs/2411.14442v1
About the Authors and This Research
Kristina Šekrst is a philosopher and AI ethics researcher whose work bridges analytic philosophy, linguistics, and machine cognition. Her interdisciplinary perspective brings rare conceptual rigor to applied AI governance, a field often dominated by either pure technical discourse or vague corporate policy statements. Jeremy McHugh is an applied AI safety researcher with an h-index of 3 and 96 cumulative citations, specializing in the practical design and evaluation of responsible AI mechanisms — a background that grounds this paper's theoretical claims in implementable solutions. Jonathan Rodriguez Cefalu contributes engineering and system-design expertise, ensuring the proposed framework is not merely academically interesting but architecturally deployable in real enterprise environments.
Posted to arXiv in 2024, this paper arrives at a critical inflection point: ISO 42001 was officially published in late 2023, the EU AI Act entered into force in 2024, and Taiwan's AI Basic Act framework is actively advancing through legislative review. The paper's findings directly inform the most pressing governance questions that boards, compliance officers, and CIOs across Taiwan are confronting right now.
The Core Insight: One-Size Guardrails Are a Governance Liability
The central argument of this research is both simple and profound: the current generation of AI guardrails — rules hard-coded by AI vendors with limited transparency or configurability — creates a false sense of ethical compliance while leaving organizations exposed to legal and reputational risk. The researchers propose a three-layer framework integrating Rules, Policies, and AI Assistants as a dynamic, customizable guardrail system. They benchmark this framework against existing state-of-the-art guardrail approaches, demonstrating superior performance across flexibility, transparency, and user autonomy dimensions.
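The three-layer architecture can be illustrated with a minimal sketch. The class names, predicate-based checks, and priority field below are our own illustrative assumptions about how such a system might be structured, not the paper's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a Rules / Policies / AI Assistant guardrail stack.
# The names and mechanics here are illustrative assumptions, not the
# paper's reference implementation.

@dataclass
class Rule:
    """Rules layer: absolute, non-negotiable constraints."""
    name: str
    violates: Callable[[str], bool]  # True if the output violates the rule

@dataclass
class Policy:
    """Policies layer: context-specific guidance with an explicit priority."""
    name: str
    context: str                     # e.g. "medical", "finance", or "*" for all
    priority: int                    # higher number = evaluated first
    violates: Callable[[str], bool]

class GuardrailStack:
    def __init__(self, rules: list[Rule], policies: list[Policy]):
        self.rules = rules
        self.policies = policies

    def check(self, output: str, context: str) -> list[str]:
        """Return the names of violated rules and policies for this context."""
        violations = [r.name for r in self.rules if r.violates(output)]
        applicable = sorted(
            (p for p in self.policies if p.context in (context, "*")),
            key=lambda p: -p.priority,
        )
        violations += [p.name for p in applicable if p.violates(output)]
        return violations
```

In this sketch, the Rules layer is checked unconditionally, while the Policies layer is filtered by deployment context, which is what makes the same governance architecture reusable across a medical deployment in Taiwan and a financial one in Germany. An AI Assistant layer would sit on top, monitoring `check` results over time.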
Core Finding 1: Ethical Pluralism Is Not a Bug — It Is the Requirement
The research establishes that no single ethical framework can universally govern all AI use cases. Medical AI in Taiwan operates under different normative expectations than financial AI in Germany or manufacturing AI in Vietnam. The EU AI Act's Article 9 requires high-risk AI systems to implement risk management systems that account for the "intended purpose" and "foreseeable misuse" in specific deployment contexts — a requirement that is impossible to fulfill with a one-size guardrail. The proposed framework's configurability directly addresses this gap: organizations can establish context-specific ethical priority hierarchies without abandoning a consistent underlying governance architecture. For Taiwanese enterprises, this means a single enterprise-wide AI governance framework can simultaneously satisfy the transparency requirements of Taiwan's AI Basic Act, the risk documentation requirements of ISO 42001, and the accountability requirements of the EU AI Act — but only if the underlying guardrail system is designed for multi-context configurability from the outset.
Core Finding 2: Conflict Resolution Mechanisms Are the Missing Link in Enterprise AI Governance
The paper identifies what may be the most underappreciated gap in enterprise AI governance today: the absence of explicit, documented conflict resolution logic when ethical directives collide. In practice, conflicts occur constantly — between privacy protection and personalization, between efficiency optimization and human oversight, between commercial objectives and user wellbeing. When an AI system encounters such conflicts without pre-defined resolution logic, it either defaults to unpredictable behavior or, more commonly, silently resolves the conflict in ways that favor the system's optimization objective rather than the organization's stated values. ISO 42001's Section 6.1 requires organizations to identify AI-related risks and establish treatment plans — but without explicit conflict resolution mechanisms, even a well-documented risk register will fail to prevent in-production ethical failures. The research demonstrates that context-aware conflict resolution, embedded at the policy layer of the guardrail framework, is the mechanism that transforms a compliance document into an operational governance system.
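To make the idea of documented conflict resolution concrete, here is a minimal sketch in which the priority hierarchy itself is the explicit, auditable governance artifact. The directive names and their ranking are hypothetical examples, not values taken from the paper:

```python
# Illustrative conflict resolution between colliding ethical directives.
# The directive names and ordering below are hypothetical; each
# organization would document its own hierarchy per deployment context.

DIRECTIVE_PRIORITY = {
    # Lower rank = higher precedence. Writing this ordering down is
    # what turns a values statement into operational governance.
    "user_safety": 0,
    "privacy_protection": 1,
    "human_oversight": 2,
    "personalization": 3,
    "efficiency": 4,
}

def resolve_conflict(directives: list[str]) -> tuple[str, list[str]]:
    """Return the winning directive plus the overridden ones as an audit trail.

    Instead of letting the system silently favor its optimization
    objective, the resolution is deterministic and logged.
    """
    ranked = sorted(directives, key=DIRECTIVE_PRIORITY.__getitem__)
    return ranked[0], ranked[1:]

winner, overridden = resolve_conflict(["personalization", "privacy_protection"])
# privacy_protection outranks personalization under this hierarchy
```

The point of the sketch is the audit trail: every resolved conflict produces a record of which directive won and which were overridden, exactly the kind of evidence an ISO 42001 Section 6.1 risk treatment plan can reference.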
Implications for Taiwan's AI Governance Landscape
Taiwan's AI governance environment is evolving rapidly across three simultaneous dimensions, and the findings of this research are directly actionable across all three.
ISO 42001 Certification Is Accelerating in Taiwan. ISO 42001, published in December 2023, is the world's first international standard for AI Management Systems. It requires organizations to establish, implement, maintain, and continually improve an AI management system covering the entire AI system lifecycle. Taiwanese technology manufacturers, financial institutions, and healthcare providers are increasingly pursuing ISO 42001 certification as a market differentiator and procurement prerequisite. The three-layer guardrail framework proposed in this paper maps directly to ISO 42001's requirements: the Rules layer corresponds to Section 8.4's AI system operational controls; the Policies layer implements Section 6.1's risk treatment processes; and the AI Assistant layer supports Section 9.1's monitoring and measurement requirements. Organizations pursuing ISO 42001 certification without an explicit guardrail architecture will encounter significant gaps during the certification audit.
EU AI Act Compliance Is a Business Reality for Taiwan's Export Sector. The EU AI Act, which entered into force in August 2024, applies extraterritorially to any AI system whose outputs are used within the EU — a provision that directly affects Taiwan's technology exporters, semiconductor designers, software-as-a-service providers, and manufacturers supplying European customers. The Act's four-tier risk classification (unacceptable, high, limited, and minimal risk) requires high-risk AI systems to implement conformity assessments, technical documentation, and human oversight mechanisms. The conflict resolution mechanisms highlighted in the 2024 arXiv paper are precisely the kind of "technical robustness and safety" measures that Article 15 of the EU AI Act requires for high-risk AI systems. Taiwanese enterprises that delay building these mechanisms will face market access restrictions as EU enforcement ramps up through 2025 and 2026.
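As a rough illustration of how the four-tier classification might be operationalized inside an internal AI inventory, the following sketch maps use cases to tiers. The category lists are simplified assumptions for illustration only, not a legal determination:

```python
# Hypothetical helper mapping an AI use case to the EU AI Act's four
# risk tiers. The membership lists are simplified illustrations; real
# scoping requires legal review against the Act's annexes.

UNACCEPTABLE = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"hiring", "credit_scoring", "medical_diagnosis",
             "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "content_generation"}  # transparency duties apply

def risk_tier(use_case: str) -> str:
    if use_case in UNACCEPTABLE:
        return "unacceptable"  # prohibited outright
    if use_case in HIGH_RISK:
        return "high"          # conformity assessment, documentation, oversight
    if use_case in LIMITED_RISK:
        return "limited"       # disclosure/transparency obligations
    return "minimal"           # no mandatory obligations under the Act
```

Even a simple mapping like this, applied across an AI inventory, tells a Taiwanese exporter which systems trigger Article 9 and Article 15 obligations the moment their outputs reach EU territory.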
Taiwan's AI Basic Act Framework Demands Proactive Governance Architecture. Taiwan's Executive Yuan has been developing an AI Basic Act framework that emphasizes human-centric design, transparency, inclusivity, and sustainable development as foundational principles. The paper's emphasis on user autonomy — ensuring that AI systems respect users' rights to understand and challenge AI decisions — directly aligns with the transparency and accountability principles embedded in Taiwan's legislative direction. Enterprises that establish guardrail frameworks now, before the AI Basic Act formally takes effect, will be positioned to demonstrate proactive governance rather than reactive compliance.
How Winners Consulting Services Co. Ltd. Translates Research Into Enterprise Action
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) bridges the gap between academic AI governance research and enterprise implementation, helping Taiwanese organizations build AI management systems that are simultaneously compliant with ISO 42001, aligned with EU AI Act requirements, and consistent with Taiwan's AI Basic Act principles.
- AI Guardrail Architecture Assessment: Using the three-layer framework (Rules / Policies / AI Assistants) from this research as an evaluation lens, Winners conducts a systematic audit of your organization's existing AI deployments, documenting current ethical control mechanisms, identifying architectural gaps against ISO 42001 Section 6.1 requirements, and producing a prioritized remediation roadmap. This assessment typically identifies 3 to 7 high-priority governance gaps that require immediate attention before any certification audit or regulatory inquiry.
- Customizable Ethics Framework Design: Applying the paper's ethical pluralism principle, Winners designs context-specific guardrail configurations for each of your organization's AI use cases, establishing explicit priority hierarchies for ethical directives, documenting conflict resolution logic, and aligning the framework architecture with EU AI Act high-risk AI system requirements and Taiwan's AI Basic Act transparency expectations. The output is a governance document set that is immediately usable as ISO 42001 certification evidence.
- ISO 42001 Certification Preparation and AI Risk Classification: Winners provides end-to-end support for ISO 42001 certification, including AI risk assessment and classification (directly mapping to EU AI Act risk tiers), management system documentation, staff training on operational guardrail procedures, and pre-audit mock reviews. Our typical 90-day engagement model is designed to bring mid-sized Taiwanese enterprises from zero documentation to audit-ready status within a single fiscal quarter.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant management system within 90 days.
Apply for Free Mechanism Diagnostic →

Frequently Asked Questions
- What is the practical first step for a Taiwanese enterprise that has no AI guardrail framework in place?
- The most effective first step is an AI inventory and risk classification exercise. Before designing guardrails, organizations need to know what AI systems they are running, what decisions those systems influence, and which of those applications fall into EU AI Act high-risk categories or require enhanced transparency under Taiwan's AI Basic Act. Once the inventory is complete, the three-layer guardrail framework proposed in this research provides a clear design template: start by documenting absolute prohibitions at the Rules layer, then build out policy-level guidance for context-specific scenarios, and finally integrate AI assistant monitoring at the operational layer. Winners Consulting typically completes this initial classification and framework design phase within 30 days.
- Does the EU AI Act apply to Taiwanese companies that are not registered in Europe?
- Yes. The EU AI Act applies extraterritorially to any provider or deployer of AI systems whose outputs are used within the European Union, regardless of where the provider is incorporated. This means that Taiwanese manufacturers using AI in production processes that supply European customers, Taiwanese software companies offering SaaS products to European users, and Taiwanese enterprises deploying AI in HR or financial decisions affecting EU-based employees or customers are all within scope. The Act's high-risk provisions, including Article 9's risk management system requirements and Article 15's technical robustness standards, apply from the moment a high-risk AI system's outputs reach EU territory. Taiwanese enterprises should conduct an EU AI Act scope assessment as an immediate priority.
- How does ISO 42001 certification relate to the guardrail framework described in this paper?
- ISO 42001 is the international management system standard for AI, published in December 2023. It provides an organizational governance framework requiring documented processes, risk management, and continuous improvement for AI systems throughout their lifecycle. The guardrail framework in this paper — specifically its Rules, Policies, and AI Assistant layers — provides the technical implementation architecture that ISO 42001's management system requirements demand but do not prescribe in detail. ISO 42001 Section 6.1 requires organizations to identify AI-related risks and establish treatment plans; the guardrail framework supplies the operational mechanism through which those treatment plans are actually enforced at runtime. In short, ISO 42001 defines what the management system must achieve, while the guardrail architecture demonstrates one concrete way to achieve it.
Want to apply these insights to your enterprise?
Get a Free Assessment