
Insight: Building the ethical AI framework of the future: from philosophy to practice


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, sees a defining inflection point in enterprise AI risk management: a landmark 2026 study published in AI and Ethics (Springer) demonstrates that the most consequential gap in today's AI governance landscape is not the absence of ethical principles—it is the absence of enforceable, lifecycle-stage-specific operational controls that can actually stop a risky AI system from moving forward. As Taiwan enterprises accelerate AI adoption and face mounting pressure from ISO 42001 certification requirements, EU AI Act obligations, and the emerging implementation rules of Taiwan's AI Basic Act, this research offers the most actionable blueprint yet for translating ethical commitments into auditable engineering practice.

Paper Citation: Building the ethical AI framework of the future: from philosophy to practice (Jasper Kyle Catapang, AI and Ethics, Springer, 2026)
Original Paper: https://doi.org/10.1007/s43681-026-01003-8


About the Author and This Research

Jasper Kyle Catapang is an emerging scholar at the intersection of AI ethics and operational governance, with an h-index of 4 and 39 cumulative academic citations. What distinguishes Catapang's contribution is not volume but precision: he occupies a rare intellectual position that bridges normative ethical philosophy—consequentialism, deontology, and virtue ethics—with the engineering control structures that AI development teams actually use, specifically MLOps pipelines and CI/CD systems. Most AI ethics researchers operate exclusively at one level or the other. Catapang's 2026 paper, published in the Springer journal AI and Ethics, is a systematic attempt to close this translation gap once and for all.

The paper's methodological credibility is bolstered by a pre-registered evaluation protocol: rather than proposing a framework and leaving its effectiveness untested, Catapang defines ex ante success criteria—specified before the research is conducted—enabling falsifiable, reproducible evaluation of whether each gate actually works. This design choice reflects a level of scientific rigor that distinguishes the paper from the large body of AI ethics literature that remains aspirational rather than empirically testable.

The Triple-Gate Architecture: Turning Ethical Philosophy Into Enforceable Engineering Controls Across the AI Lifecycle

The central insight of this research is deceptively simple but operationally transformative: every stage of the AI lifecycle—data collection, model training, deployment, and post-deployment monitoring—concentrates specific ethical risks, and each stage therefore needs specific, quantifiable gate controls that must be passed before the system can advance. Catapang calls this the Triple-Gate structure, and it represents the most concrete operationalization of ethics-by-design principles published to date.

Core Finding 1: Three Gates, Three Dimensions of Ethical Risk—All Mandatory at Every Lifecycle Stage

The three gates operate in parallel, not in sequence, and all three must be cleared before a system can progress:

Metric Gates enforce quantitative performance and safety thresholds. These include statistical fairness metrics (such as demographic parity and equalized odds), hallucination rate ceilings for large language models, adversarial robustness scores, and output toxicity limits. If a model's bias ratio exceeds a pre-specified threshold during training evaluation, the Metric Gate triggers an escalation path—the system does not proceed to deployment regardless of other considerations.
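
To make the mechanics concrete, the following minimal sketch (not the paper's reference implementation) shows how a Metric Gate check on demographic parity might be wired into an evaluation step; the 0.8 threshold, the group labels, and the escalation behaviour are illustrative assumptions.

```python
# Minimal sketch of a Metric Gate: demographic parity ratio check.
# Threshold, group names, and escalation behaviour are illustrative assumptions.

def demographic_parity_ratio(positive_rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest positive-prediction rate across groups."""
    rates = list(positive_rates.values())
    return min(rates) / max(rates)

def metric_gate(positive_rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Return True if the gate clears; False triggers the escalation path."""
    ratio = demographic_parity_ratio(positive_rates)
    if ratio < threshold:
        # Gate trigger: record the violation and notify the escalation owner.
        print(f"METRIC GATE TRIGGERED: parity ratio {ratio:.2f} < {threshold}")
        return False
    return True

# Example evaluation-time check: positive prediction rates per demographic group.
rates = {"group_a": 0.46, "group_b": 0.31}
if not metric_gate(rates):
    raise SystemExit("Model blocked from advancing to deployment.")
```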

Governance Gates enforce legal, rights-based, and procedural compliance. This gate maps directly onto EU AI Act Article 9 risk management obligations, ISO 42001 documentation and control requirements, and the NIST AI Risk Management Framework's Govern and Map functions. The Governance Gate ensures that every AI decision node in the pipeline has a documented legal basis, has undergone rights-impact assessment, and has completed required notification or consent procedures. For Taiwan enterprises, this gate is the structural home for compliance with the transparency obligations and human oversight requirements of both the EU AI Act and Taiwan's AI Basic Act.
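
A Governance Gate check can be expressed in the same way. The sketch below assumes a simple compliance record per AI decision node; the field names and the required-evidence list are illustrative, not taken from the paper.

```python
# Sketch of a Governance Gate check for a single AI decision node.
# Field names and the required-evidence list are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionNodeRecord:
    node_id: str
    legal_basis_doc: Optional[str]   # reference to the documented legal basis, if any
    rights_impact_assessed: bool     # rights-impact assessment completed
    consent_procedure_done: bool     # notification/consent procedure completed

def governance_gate(node: DecisionNodeRecord) -> list:
    """Return a list of missing compliance items; an empty list means the gate clears."""
    gaps = []
    if not node.legal_basis_doc:
        gaps.append("missing documented legal basis")
    if not node.rights_impact_assessed:
        gaps.append("rights-impact assessment not completed")
    if not node.consent_procedure_done:
        gaps.append("notification/consent procedure not completed")
    return gaps

node = DecisionNodeRecord("credit-scoring-v2", legal_basis_doc=None,
                          rights_impact_assessed=True, consent_procedure_done=True)
if gaps := governance_gate(node):
    raise SystemExit(f"GOVERNANCE GATE TRIGGERED: {gaps}")
```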

Eco Gates enforce carbon and water budget constraints, a dimension that no major existing governance framework, including ISO 42001 and the EU AI Act, currently mandates as a lifecycle control. The Eco Gate sets a maximum training carbon budget (expressed in kgCO₂e) and a water usage ceiling for cooling infrastructure. If a training run is projected to exceed the budget, the gate triggers before resources are committed. For Taiwan's listed companies operating under TCFD disclosure requirements and voluntary ESG commitments, this gate provides the first systematic mechanism for making accountability for AI sustainability operational rather than aspirational.
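
As a rough sketch of how an Eco Gate might project a training run against its budget before compute is committed, the following assumes a simple energy-times-grid-intensity estimate; the power figures, grid intensity, and budget values are placeholders, not figures from the paper.

```python
# Sketch of an Eco Gate: compare a projected training run against carbon and water budgets.
# The estimation formula, grid intensity, and budget figures are illustrative assumptions.

def projected_emissions_kgco2e(gpu_hours: float, gpu_power_kw: float,
                               grid_intensity_kgco2e_per_kwh: float) -> float:
    """Rough projection: energy (kWh) times grid carbon intensity."""
    return gpu_hours * gpu_power_kw * grid_intensity_kgco2e_per_kwh

def eco_gate(gpu_hours: float, carbon_budget_kgco2e: float,
             water_litres_projected: float, water_budget_litres: float) -> bool:
    carbon = projected_emissions_kgco2e(gpu_hours, gpu_power_kw=0.7,
                                        grid_intensity_kgco2e_per_kwh=0.5)
    if carbon > carbon_budget_kgco2e or water_litres_projected > water_budget_litres:
        print(f"ECO GATE TRIGGERED: projected {carbon:.0f} kgCO2e and "
              f"{water_litres_projected:.0f} L exceed the approved budget")
        return False
    return True

# Check before committing cluster resources to the training run.
if not eco_gate(gpu_hours=20_000, carbon_budget_kgco2e=5_000,
                water_litres_projected=120_000, water_budget_litres=100_000):
    raise SystemExit("Training run blocked pending a revised compute plan.")
```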

Each gate is accompanied by explicit trigger conditions, escalation paths (specifying who is notified and what decisions must be made at each escalation level), and audit artefacts—structured records that can be produced during regulatory inspection, third-party certification audit, or internal governance review.
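
One plausible shape for such an audit artefact, capturing the trigger condition, the observed value, and the escalation path that was invoked, is sketched below; all field names are assumptions rather than the paper's schema.

```python
# Sketch of the audit artefact a gate might emit: trigger condition, outcome,
# and the escalation path that was invoked. All field names are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class GateAuditRecord:
    gate: str                      # "metric" | "governance" | "eco"
    lifecycle_stage: str           # e.g. "training"
    trigger_condition: str         # the pre-specified threshold or rule
    observed_value: str            # what was actually measured
    passed: bool
    escalation_path: list = field(default_factory=list)  # roles notified, in order
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = GateAuditRecord(
    gate="metric",
    lifecycle_stage="training",
    trigger_condition="demographic parity ratio >= 0.8",
    observed_value="0.67",
    passed=False,
    escalation_path=["ML lead", "AI governance officer", "risk committee"],
)
# Structured record ready for certification audit or regulatory inspection.
print(json.dumps(asdict(record), indent=2))
```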

Core Finding 2: LLM Pipeline Examples Show That Pre-Release Gate Controls Catch Risks That Post-Release Audits Cannot

The paper illustrates the framework through detailed large language model (LLM) pipeline examples, demonstrating how gate-based controls surface risks that traditional post-release auditing consistently misses. In the data collection stage, a Governance Gate check on data provenance and consent status can block training on datasets that would later expose the enterprise to intellectual property liability or GDPR-equivalent violations. In the model training stage, a Metric Gate on output distribution across demographic groups can surface algorithmic bias before it becomes embedded in a production system. In the deployment stage, an Eco Gate on inference infrastructure energy consumption can prevent a model from going live until a lower-carbon serving architecture is certified.
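
As one possible rendering of the data-collection example above, the sketch below blocks any dataset lacking a documented licence or a granted consent status from entering the training corpus; the metadata fields and dataset names are hypothetical.

```python
# Sketch of a data-collection-stage Governance Gate: block training on datasets
# whose provenance or consent status is unresolved. Metadata fields are assumptions.

datasets = [
    {"name": "support_tickets_2024", "licence": "internal", "consent_status": "granted"},
    {"name": "scraped_forum_dump",   "licence": None,       "consent_status": "unknown"},
]

def provenance_gate(datasets: list) -> list:
    """Return the names of datasets that may not enter the training corpus."""
    blocked = []
    for ds in datasets:
        if not ds["licence"] or ds["consent_status"] != "granted":
            blocked.append(ds["name"])
    return blocked

if blocked := provenance_gate(datasets):
    raise SystemExit(f"GOVERNANCE GATE TRIGGERED at data collection: {blocked}")
```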

Critically, the gate controls integrate directly with MLOps platforms and CI/CD pipelines—they are not separate compliance workflows requiring human intervention at each step. This integration design means that for organizations operating at scale, gate enforcement can be largely automated, with human escalation reserved for cases where quantitative thresholds signal genuine ethical risk rather than routine process variation.
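
A minimal sketch of what that automation could look like as a CI/CD step, assuming each gate is exposed as a callable and that a non-zero exit code is what halts the pipeline, is shown below; the gate registry and the stub pass/fail results are placeholders.

```python
# Sketch of CI/CD integration: a pipeline step that runs every registered gate and
# fails the build (non-zero exit) if any gate triggers, so human escalation is
# reserved for genuine threshold breaches. Gate names and stubs are assumptions.
import sys
from typing import Callable

def run_gates(gates: dict) -> int:
    failures = []
    for name, check in gates.items():
        if not check():
            failures.append(name)
    if failures:
        print(f"Pipeline blocked by gates: {', '.join(failures)}")
        return 1          # non-zero exit fails the CI job and halts promotion
    print("All gates cleared; promotion may proceed.")
    return 0

if __name__ == "__main__":
    gates: dict[str, Callable[[], bool]] = {
        "metric":     lambda: True,   # e.g. fairness / hallucination / robustness checks
        "governance": lambda: True,   # e.g. legal basis and consent documentation checks
        "eco":        lambda: False,  # e.g. projected carbon budget exceeded
    }
    sys.exit(run_gates(gates))
```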

The paper also provides direct mapping tables between gate triggers and EU AI Act obligations (particularly for high-risk AI systems under Annex III) and NIST AI RMF functions (Govern, Map, Measure, Manage), enabling enterprises to use the Triple-Gate architecture as a single implementation structure that satisfies multiple regulatory reporting requirements simultaneously.
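
In the spirit of those mapping tables, a cross-reference can be kept alongside the gate definitions so that a triggered gate's audit artefact cites the relevant obligations automatically; the entries below are illustrative assumptions, not the paper's actual cross-references.

```python
# Illustrative (not the paper's) mapping from gate triggers to regulatory anchor points.
GATE_REGULATORY_MAP = {
    "metric": {
        "eu_ai_act": ["Art. 15 accuracy, robustness and cybersecurity"],
        "nist_ai_rmf": ["Measure"],
    },
    "governance": {
        "eu_ai_act": ["Art. 9 risk management system", "Art. 10 data and data governance"],
        "nist_ai_rmf": ["Govern", "Map"],
    },
    "eco": {
        "eu_ai_act": [],          # no lifecycle sustainability control currently mandated
        "nist_ai_rmf": ["Manage"],
    },
}

def regulatory_references(gate: str) -> dict:
    """Look up which obligations a triggered gate's audit artefact should cite."""
    return GATE_REGULATORY_MAP.get(gate, {})

print(regulatory_references("governance"))
```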

What This Means for Taiwan's AI Governance Practice: Three Urgent Implications for Enterprise Leaders

For Taiwan enterprise executives navigating the convergence of ISO 42001, EU AI Act, and Taiwan's AI Basic Act, this research surfaces three implications that demand immediate strategic attention.

First: ISO 42001 certification requires operational controls, not just policy documents. ISO 42001, published in 2023 as the world's first AI management system standard, explicitly requires organizations to establish AI risk classification mechanisms, documented risk treatment procedures, and verifiable control measures. The Triple-Gate architecture provides the operational blueprint that bridges ISO 42001 clause requirements and engineering implementation. Enterprises preparing for ISO 42001 certification can map each gate directly onto the standard's risk management and control requirements, producing the audit artefacts that certification bodies will expect to review.

Second: EU AI Act compliance timelines are already running for Taiwan exporters. The EU AI Act entered into force in August 2024, with prohibited practice provisions applying from February 2025 and high-risk AI system requirements applying from August 2026. Taiwan enterprises supplying AI-enabled products or services to EU markets, or whose supply chains include EU customers, face direct compliance obligations including transparency requirements, technical documentation, conformity assessments, and human oversight mechanisms. The Triple-Gate architecture already maps to EU AI Act obligations by article, making it one of the most efficient compliance pathways available for Taiwan enterprises seeking to demonstrate conformity without building redundant parallel systems.

Third: Taiwan's AI Basic Act is generating subsidiary regulations—proactive governance now prevents reactive scrambling later. Taiwan's Artificial Intelligence Basic Act passed in 2025 and subsidiary regulations are currently being developed by sector-specific regulatory authorities. Organizations that establish robust, internationally-aligned AI governance mechanisms now—particularly mechanisms that can produce the kind of audit artefacts, escalation records, and quantitative safety evidence that regulators worldwide are converging on—will face dramatically lower compliance costs and reputational risk when those subsidiary regulations take effect. The Triple-Gate framework's emphasis on measurable trigger conditions and documented escalation paths positions early adopters for regulatory confidence rather than reactive remediation.

How Winners Consulting Services Co. Ltd. Translates the Triple-Gate Framework Into Certified AI Management Systems for Taiwan Enterprises

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwan enterprises establish AI management systems that meet ISO 42001 and EU AI Act requirements, conducts AI risk classification assessments, and ensures that AI applications comply with Taiwan's AI Basic Act. Drawing directly from the research findings of Catapang's 2026 framework, we recommend the following three-phase action plan for Taiwan enterprise leaders:

  1. AI Lifecycle Gate Diagnostic (Days 1–30): Conduct a systematic inventory of all AI systems in production or development, mapping each system to its lifecycle stage and assessing whether quantitative Metric Gate thresholds, Governance Gate compliance checkpoints, and Eco Gate sustainability limits currently exist in any form. Produce a prioritized gap report cross-referenced against ISO 42001 clause requirements and EU AI Act risk classification (prohibited, high-risk, limited-risk, minimal-risk). This diagnostic becomes the foundation for all subsequent governance investment decisions.
  2. Triple-Gate Design and MLOps Integration (Days 31–60): Design stage-specific gate specifications tailored to each enterprise AI system's risk profile and regulatory classification. For each gate, define trigger conditions (quantitative thresholds with explicit measurement methodology), escalation paths (named roles, decision authorities, and time limits at each escalation level), and audit artefact templates (structured records ready for ISO 42001 certification audit and EU AI Act conformity assessment). Integrate gate enforcement into existing MLOps and CI/CD infrastructure to ensure governance controls operate at the engineering level, not just the policy level.



Want to apply these insights to your enterprise?

Get a Free Assessment