
Insight: Crossing the principle-practice gap in AI ethics with ethical problem-solving


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, draws your attention to a critical and persistent challenge in enterprise AI adoption: organizations worldwide can articulate AI ethics principles with confidence, yet consistently fail to translate those principles into actionable engineering and operational decisions. A 2024 peer-reviewed study, already cited 12 times in the academic community, introduces a breakthrough methodology called Ethical Problem-Solving (EPS) that directly addresses this gap—offering Taiwanese enterprises a practical, structured pathway toward ISO 42001 certification, EU AI Act compliance, and alignment with Taiwan's emerging AI Basic Law.

Paper Citation: Crossing the principle-practice gap in AI ethics with ethical problem-solving (Nicholas Kluge Corrêa, James William Santos, Camila Galvão, AI and Ethics, Springer, 2024)
Original Paper: https://doi.org/10.1007/s43681-024-00469-8

Read Original Paper →

About the Authors and This Research

This paper was co-authored by Nicholas Kluge Corrêa, James William Santos, and Camila Galvão—a trio of Brazilian AI ethics researchers who represent a rising generation of scholars committed to making responsible AI development practically achievable rather than aspirationally decorative.

Nicholas Kluge Corrêa is particularly notable in the global AI ethics community. His research spans AI safety, large language model evaluation, and the operationalization of responsible AI principles—areas at the exact intersection of technical development and governance frameworks. His work has been recognized for its dual commitment to rigorous academic inquiry and open-source tool development, a combination that makes his research unusually accessible for enterprise practitioners.

The paper has been cited 12 times since its 2024 publication—a significant figure given the recency of publication and the relatively specialized nature of the principle-to-practice gap problem. More tellingly, the authors did not stop at theory: they built and released a fully functional Ethics as a Service Platform, with all framework components openly available on GitHub under a permissive license. This commitment to practical implementation elevates this research well above the level of academic abstraction and into direct enterprise utility.

The Core Problem: Why AI Ethics Principles Keep Failing in Practice

The central question driving this research is deceptively simple but enormously consequential: why do organizations that genuinely care about ethical AI so consistently fail to embed those values into their actual AI systems?

The authors argue that the failure is structural, not attitudinal. The problem is not that companies lack commitment to AI ethics—most large enterprises have published AI ethics principles, established ethics review boards, or adopted voluntary frameworks. The problem is that these principles exist in a different register than the technical decisions made by developers, product managers, and procurement officers every day. There is no operational bridge between the two worlds.

Core Finding 1: The Principle-Practice Gap Requires a Methodological Bridge, Not Just Better Intentions

The EPS framework provides that methodological bridge through two core instruments. First, Impact Assessment Surveys provide a systematic diagnostic tool for identifying where a given AI application creates ethical risk—covering dimensions including data privacy, algorithmic bias, transparency, human oversight, and safety. These surveys are designed to be usable by non-specialists, meaning they can be embedded into standard product development workflows without requiring every team member to hold an AI ethics PhD.
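To make the idea concrete, here is a minimal sketch of what a survey-driven impact assessment could look like in code. The dimension names echo those listed above, but the scoring scale, threshold, and class design are illustrative assumptions, not the actual EPS instrument.

```python
# Illustrative sketch only: a minimal impact-assessment record with
# hypothetical 0-4 risk scores and a flagging threshold.
from dataclasses import dataclass, field

DIMENSIONS = ["data_privacy", "algorithmic_bias", "transparency",
              "human_oversight", "safety"]

@dataclass
class ImpactAssessment:
    system_name: str
    # Each answer is a risk score from 0 (no concern) to 4 (severe concern).
    scores: dict = field(default_factory=dict)

    def flagged_dimensions(self, threshold: int = 3) -> list[str]:
        """Return the dimensions whose score meets or exceeds the threshold."""
        return [d for d in DIMENSIONS if self.scores.get(d, 0) >= threshold]

assessment = ImpactAssessment(
    system_name="resume-screening-model",
    scores={"data_privacy": 2, "algorithmic_bias": 4,
            "transparency": 3, "human_oversight": 1, "safety": 0},
)
print(assessment.flagged_dimensions())  # ['algorithmic_bias', 'transparency']
```

Because the survey reduces to structured answers like these, it can be filled in by a product manager or compliance officer and still produce machine-readable output for downstream tooling.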

Second, the Differential Recommendation Methodology ensures that the output of the impact assessment is not a generic checklist but a context-specific set of recommendations calibrated to the risk profile and application context of each AI system. This differential approach mirrors the risk-tiered logic of both ISO 42001's risk management requirements and the EU AI Act's four-tier risk classification system—making EPS a natural methodological companion to both frameworks.
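The differential logic can be sketched as a simple lookup from risk tier to tier-specific controls. The tier names follow the EU AI Act categories cited in this article, but the specific recommendations below are hypothetical examples, not the EPS methodology's actual output.

```python
# Illustrative sketch: differential recommendations keyed to a four-tier
# risk classification mirroring the EU AI Act's categories. The control
# text is a hypothetical example, not the EPS framework's own content.
RECOMMENDATIONS = {
    "unacceptable": ["Do not deploy; the use case is prohibited."],
    "high": ["Establish a documented risk management system",
             "Enable human oversight for consequential decisions",
             "Maintain technical documentation for conformity assessment"],
    "limited": ["Disclose AI involvement to affected users"],
    "minimal": ["Apply baseline engineering and privacy good practice"],
}

def recommend(tier: str) -> list[str]:
    """Return tier-specific recommendations rather than a generic checklist."""
    if tier not in RECOMMENDATIONS:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return RECOMMENDATIONS[tier]

print(recommend("limited"))
```

The point of the structure is that a chatbot and a hiring algorithm do not receive the same checklist; each system's obligations scale with its classified risk.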

Core Finding 2: "Ethics as a Service" Transforms AI Governance from Policy to Infrastructure

Perhaps the most practically significant contribution of this research is its demonstration that AI ethics governance can be architected as a repeatable, scalable service rather than a one-time policy exercise. The Ethics as a Service Platform (EaaS) that the team developed and open-sourced represents a proof of concept for embedding ethical evaluation directly into the AI development lifecycle—as a service layer that can be invoked repeatedly, updated as regulations evolve, and integrated with existing development and compliance tooling.
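One way to picture "ethics as a service" is as a gate invoked on every release candidate. The function below is a simplified assumption of how such a gate might behave; it is not the open-source EaaS platform's actual API.

```python
# Illustrative sketch of an ethics gate in a release pipeline; the policy
# (block when any dimension scores at or above the threshold) is an
# assumption for demonstration, not the EaaS platform's real interface.
def ethics_gate(scores: dict[str, int], threshold: int = 3) -> bool:
    """Block a release when any assessed dimension meets the threshold."""
    flagged = sorted(d for d, s in scores.items() if s >= threshold)
    if flagged:
        print(f"Release blocked; mitigate first: {flagged}")
        return False
    print("Ethics gate passed.")
    return True

# Invoked on every release candidate, the check becomes repeatable
# infrastructure rather than a one-time policy review.
ethics_gate({"data_privacy": 1, "algorithmic_bias": 3})
```

Because the gate runs automatically, ethical evaluation stops depending on whether someone remembers to convene a review board before launch.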

For Taiwanese enterprises building their AI governance infrastructure, this insight is transformative: you do not need to reinvent the wheel. The open-source EPS components provide a tested starting point that can be customized to incorporate the specific requirements of ISO 42001, the EU AI Act, and Taiwan's AI Basic Law.

Why This Research Matters for Taiwan's AI Governance Landscape

Taiwan's enterprise AI governance environment is entering a decisive period of transition. The convergence of international regulatory pressure from the EU AI Act, the global momentum behind ISO 42001 certification, and the domestic development of Taiwan's AI Basic Law means that the question is no longer whether Taiwanese companies need structured AI governance—it is whether they will build it proactively or reactively.

Alignment with ISO 42001 Requirements

ISO 42001, published in 2023 as the world's first AI management system standard, requires organizations to establish systematic mechanisms for identifying, assessing, and controlling AI-related risks. The EPS framework's impact assessment surveys map directly onto the planning requirements in Clause 6.1 of ISO 42001, while its differential recommendation methodology supports the operational requirements under Clause 8. Enterprises that adopt EPS-informed assessment processes will find themselves significantly ahead of the curve when preparing for ISO 42001 third-party audits.

Direct Response to EU AI Act Obligations

The EU AI Act, which entered into force in 2024, imposes mandatory risk management documentation requirements on high-risk AI applications. For Taiwan's export-oriented technology sector, compliance is not optional: any AI product or service used by European customers or affecting EU residents falls within the Act's scope. The EPS framework's four-tier risk differentiation logic directly mirrors the EU AI Act's risk classification categories (unacceptable risk, high risk, limited risk, minimal risk), providing Taiwanese enterprises with a ready-made organizational structure for meeting Article 9 risk management system requirements.

Pre-Alignment with Taiwan's AI Basic Law

Taiwan's AI Basic Law (AI基本法) is currently under legislative review. Its core architectural principles—human-centeredness, transparency, accountability, and proportionate risk management—align closely with the EPS framework's foundational values. Enterprises that begin building EPS-informed governance structures now will be positioned to complete regulatory compliance transitions at significantly lower cost when the final legislative framework is enacted.

How Winners Consulting Services Can Help Taiwan Enterprises Act on These Insights

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwanese enterprises build AI management systems that meet ISO 42001 and EU AI Act requirements, conduct AI risk classification assessments, and ensure AI applications comply with Taiwan's AI Basic Law. Drawing directly on the EPS framework's methodology, we recommend the following three concrete action steps:

  1. Establish an AI Impact Assessment Process (EPS-Aligned): Using the EPS impact assessment survey methodology as a template, Winners Consulting Services helps enterprises develop customized AI impact assessment instruments covering privacy, bias, transparency, human oversight, and safety dimensions. This directly satisfies the ISO 42001 Clause 6.1 risk assessment requirement. We deliver a complete first-round assessment across all active AI applications within 30 days.
  2. Build a Differential Risk Register Aligned to EU AI Act Categories: Leveraging the EPS differential recommendation methodology, Winners Consulting Services helps enterprises classify all AI applications into the four EU AI Act risk tiers and design corresponding control procedures, documentation requirements, and monitoring indicators for each tier. This creates the risk management system documentation required under EU AI Act Article 9 and positions the enterprise for ISO 42001 certification. We complete the initial risk register build within 60 days.
  3. Institutionalize "Ethics as a Service" Through an AI Governance Committee: The long-term goal of the EPS framework is to make ethical evaluation an intrinsic part of AI development rather than an external review. Winners Consulting Services helps enterprises establish cross-functional AI governance committees integrating legal, technical, business, and HR functions, building the recurring review mechanisms that keep ISO 42001 management systems operationally effective. Full institutional setup is completed within 90 days, reaching a state ready for third-party audit.

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic to help Taiwanese enterprises build an ISO 42001-aligned management system within 90 days.

Apply for Your Free Diagnostic →

Frequently Asked Questions

How do we know if our company has a principle-practice gap in AI ethics?
The most reliable diagnostic is a simple internal test: ask your AI development team how the company's AI ethics policy concretely changed what they built or how they built it last week. If the answer is vague, indirect, or "it didn't," you have a principle-practice gap. The EPS framework defines this gap as the structural absence of an operational bridge between ethical principles and technical decision-making—and it is alarmingly common even among enterprises that have invested significantly in AI ethics policies. Winners Consulting Services offers a 30-day diagnostic engagement that maps your current gap against ISO 42001 requirements and identifies the highest-priority points for remediation.
Does EU AI Act compliance apply to Taiwanese companies?
Yes, in many cases. The EU AI Act applies based on where AI systems are deployed or where their outputs are used—not where the developing company is headquartered. Any Taiwanese enterprise whose AI products or services are used by customers in the European Union, or whose AI-driven decisions affect EU residents, must comply with relevant provisions. High-risk AI applications face the most stringent requirements, including mandatory risk management systems under Article 9, conformity assessments, and technical documentation obligations. Taiwanese technology exporters and SaaS providers serving European markets should treat EU AI Act compliance as an immediate operational priority, not a future consideration.
What does ISO 42001 certification actually require, and how does EPS help?
ISO 42001 requires organizations to build and maintain a documented AI management system covering risk identification and planning (Clause 6.1), operational controls (Clause 8), and ongoing performance evaluation and improvement. As described above, the EPS impact assessment surveys map onto the Clause 6.1 planning requirements, while the differential recommendation methodology supports the Clause 8 operational controls. Adopting EPS-informed processes therefore builds precisely the documented, repeatable mechanisms that a third-party certification auditor will look for.


Related Services & Further Reading

Want to apply these insights to your enterprise?

Get a Free Assessment