Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI governance, urges enterprise leaders to recognize a critical insight: AI ethics is no longer a philosophical exercise; it is a technical architecture requirement with legal consequences. A landmark 2018 study by Nanyang Technological University researchers established an early systematic taxonomy of technical approaches to ethical AI decision-making. That taxonomy now serves as a direct conceptual map for ISO 42001 compliance, EU AI Act risk classification, and Taiwan's emerging AI Basic Law, and Taiwanese companies that have not yet acted are already five or more years behind their international peers.
Paper Citation: Building Ethics into Artificial Intelligence (Han Yu, Zhiqi Shen, Chunyan Miao; arXiv, AI Governance & Ethics, 2018)
Original Paper: http://arxiv.org/abs/1812.02953v1
About the Authors and Their Research
The paper was produced by a research team at Nanyang Technological University (NTU) in Singapore, one of Asia's most internationally recognized institutions in artificial intelligence research. The three authors—Han Yu, Zhiqi Shen, and Chunyan Miao—bring complementary expertise that gives this paper unusual breadth and depth.
Professor Zhiqi Shen is the academic anchor of this team. With an h-index of 37 and a cumulative citation count exceeding 5,514, he is a globally recognized authority in multi-agent systems, knowledge engineering, and AI decision frameworks. His work has been published and presented at the top four AI research venues explicitly reviewed in this paper: AAAI (Association for the Advancement of Artificial Intelligence), AAMAS (International Conference on Autonomous Agents and Multi-Agent Systems), ECAI (European Conference on Artificial Intelligence), and IJCAI (International Joint Conference on Artificial Intelligence). Professor Chunyan Miao contributes expertise in human-agent interaction and AI applications in social contexts, bringing the human-facing dimension of AI ethics into the technical discussion.
What makes this 2018 paper particularly significant for today's governance practitioners is its timing. Published at the moment when global discourse on AI ethics was still largely confined to philosophy departments and policy think tanks, it made a deliberate argument for technical rigor: ethics must be engineered, not merely declared. That argument has since been vindicated by every major regulatory framework introduced between 2021 and 2024.
The Four-Category Taxonomy: Turning Ethics into Executable Technical Specifications
The core intellectual contribution of this research is a taxonomy that divides technical approaches to AI ethics into four distinct categories. For enterprise leaders and compliance officers, this taxonomy is not merely academic—it provides a structured lens through which to evaluate whether an AI system's governance controls are comprehensive or dangerously incomplete.
Category One: Exploring Ethical Dilemmas
The first category addresses how AI systems can be designed to recognize and analyze situations where no single rule produces an acceptable outcome—the computational equivalent of the classical trolley problem. Research in this category focuses on encoding the structure of ethical conflict into machine-readable formats, so that AI systems can at minimum identify when they are operating in ethically ambiguous territory. For Taiwanese enterprises, this translates directly into the risk identification requirements of ISO 42001 Clause 6: before an AI system can be managed for risk, the organization must first be able to enumerate the ethical dilemma scenarios the system might encounter. Companies that have not conducted this scenario-mapping exercise cannot credibly claim ISO 42001 readiness.
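The scenario-mapping exercise described above ultimately produces a structured register that feeds ISO 42001 Clause 6 risk assessment. As a minimal illustrative sketch (the schema, field names, and severity scale below are assumptions, not part of the paper or the standard), such a register can be modeled as a simple data structure:

```python
from dataclasses import dataclass


@dataclass
class EthicalScenario:
    """One entry in a hypothetical ethical-dilemma scenario register."""
    scenario_id: str
    system: str               # which AI system the dilemma concerns
    description: str
    conflicting_values: list  # e.g. ["accuracy", "fairness"]
    severity: int             # illustrative scale: 1 (low) .. 5 (critical)


class ScenarioRegister:
    """Collects scenarios so they can be enumerated for risk assessment."""

    def __init__(self):
        self._scenarios = []

    def add(self, scenario: EthicalScenario):
        self._scenarios.append(scenario)

    def high_severity(self, threshold: int = 4):
        """Return scenarios that warrant formal risk treatment."""
        return [s for s in self._scenarios if s.severity >= threshold]
```

The point of the sketch is the discipline it encodes: each dilemma is named, tied to a specific system, and ranked, so the organization can demonstrate that it has enumerated its ethically ambiguous operating conditions rather than merely asserted awareness of them.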
Category Two: Individual Ethical Decision Frameworks
The second category examines how a single AI agent makes ethical trade-offs—for instance, how a credit-scoring algorithm balances predictive accuracy against demographic fairness. The research in this category draws heavily on moral philosophy traditions (consequentialism, deontology, virtue ethics) but translates them into algorithmic constraints and objective functions. Under the EU AI Act, high-risk AI systems—including those used in hiring, credit assessment, education, and critical infrastructure—are required to implement precisely this kind of documented, auditable decision logic. Article 9 of the EU AI Act mandates a risk management system that covers the entire lifecycle of the AI system, including how individual decisions are made and justified.
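The idea of translating a moral trade-off into an objective function can be made concrete with a toy sketch. The metric and penalty weight below are illustrative assumptions (a simple demographic-parity gap and a linear penalty), not the paper's prescribed method:

```python
def demographic_parity_gap(approvals, groups):
    """Absolute gap in approval rate between groups 'A' and 'B'.

    approvals: parallel list of 0/1 decisions; groups: group label per decision.
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [a for a, grp in zip(approvals, groups) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])


def penalized_objective(accuracy, parity_gap, lam=2.0):
    """One auditable score: predictive accuracy minus a weighted fairness penalty.

    lam is an illustrative tuning parameter expressing how much fairness
    the organization is willing to trade for accuracy.
    """
    return accuracy - lam * parity_gap
```

What makes this pattern audit-friendly is that the trade-off is explicit: the chosen fairness metric and the weight `lam` are documented artifacts a reviewer can inspect, which is exactly the kind of recorded decision logic Article 9's lifecycle risk management contemplates.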
Category Three: Collective Ethical Decision Frameworks
Perhaps the most practically underappreciated finding in the paper is the distinction between individual and collective ethical decision-making in AI systems. When multiple AI agents interact—as happens routinely in enterprise supply chains, financial trading systems, and logistics networks—individual-level compliance does not guarantee system-level ethical outcomes. Bias can be amplified, accountability can diffuse, and emergent behaviors can violate ethical principles that each component system individually respects. This insight is directly relevant to the EU AI Act's requirements for AI systems used as components within larger AI pipelines, and to ISO 42001's expectation that organizations assess AI risks at both the system level and the organizational level.
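The amplification effect can be shown with simple arithmetic. Assuming a hypothetical pipeline in which each stage independently passes one demographic group at a slightly lower rate than the baseline (the 5% figure below is an invented example, not from the paper), the compounded shortfall exceeds what any single stage exhibits:

```python
def compounded_shortfall(per_stage_bias: float, stages: int) -> float:
    """Fraction of a group lost after `stages` sequential filters, where each
    stage passes that group at a rate (1 - per_stage_bias) relative to baseline."""
    surviving = (1.0 - per_stage_bias) ** stages
    return 1.0 - surviving


# Three stages, each with a 5% bias that might pass an individual-system
# audit, compound to a shortfall above 14% at the system level.
total = compounded_shortfall(0.05, 3)
```

This is why the paper's individual/collective distinction matters for audits: three components that each clear a per-system fairness threshold can jointly produce an outcome that would fail the same threshold applied end to end.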
Category Four: Ethics in Human-AI Interactions
The fourth category shifts attention from the AI system's internal decision logic to the interface between the AI system and the human beings who use, oversee, or are affected by it. The paper argues—presciently, given the subsequent direction of global regulation—that transparency and explainability are not optional features but fundamental ethical requirements for any AI system that influences human decisions. Taiwan's AI Basic Law draft explicitly references explainability as a core principle, and the EU AI Act's Article 13 mandates transparency requirements for high-risk AI systems. Organizations that have not conducted a human-AI interaction audit against these criteria are exposed to compliance risk even if their underlying models are technically sound.
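In engineering terms, transparency at the human-AI interface usually means every consequential decision leaves behind a human-readable record. The schema below is a minimal sketch under our own assumptions (the field names and `explanation` format are hypothetical, not mandated by Article 13 or the paper):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass(frozen=True)
class DecisionRecord:
    """Illustrative audit-trail entry for one AI-assisted decision."""
    decision_id: str
    model_version: str
    outcome: str
    top_factors: Tuple[str, ...]     # human-readable reasons, ordered by influence
    human_reviewer: Optional[str]    # None means no human was in the loop

    def explanation(self) -> str:
        """Plain-language summary suitable for a transparency disclosure."""
        factors = ", ".join(self.top_factors)
        oversight = self.human_reviewer or "no human reviewer"
        return f"Outcome '{self.outcome}' driven by: {factors} (oversight: {oversight})"
```

Note that the record captures oversight explicitly: a `None` reviewer is itself auditable information, surfacing exactly the systems where human oversight provisions are absent.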
Why This Research Matters Urgently for Taiwan's Enterprise Leaders in 2025
Three regulatory frameworks are simultaneously converging on Taiwanese enterprises, creating a compliance pressure that is greater than the sum of its parts.
ISO 42001 was formally published in December 2023 as the world's first international standard for AI management systems. It requires organizations to establish a documented Artificial Intelligence Management System (AIMS), covering AI policy, risk assessment, ethical decision oversight, human resource competency for AI, and continuous improvement mechanisms. The standard's Annex A control measures map closely onto the four-category taxonomy proposed in this paper, making the paper a useful conceptual reference for Gap Analysis against ISO 42001 requirements.
The EU AI Act entered into force on August 1, 2024, with a phased implementation timeline concluding in August 2026. Its territorial scope extends to any AI system whose outputs are used within the European Union, regardless of where the developer is located. Taiwanese manufacturers, software vendors, and service providers with EU-facing business relationships—whether direct or through OEM arrangements—are within scope. The Act's four-tier risk classification (unacceptable risk, high risk, limited risk, minimal risk) provides the enforcement framework that corresponds to the collective and individual ethical decision categories identified in this paper.
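A first-pass triage of an AI portfolio against the Act's four tiers can be sketched as a lookup, with the caveat that the use-case tags and tier assignments below are simplified illustrations; actual classification requires legal analysis of the Act's annexes:

```python
# Illustrative tier mapping only; not a substitute for legal classification.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"hiring", "credit_assessment", "education", "critical_infrastructure"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}


def classify_risk_tier(use_case: str) -> str:
    """Map a tagged use case to one of the Act's four risk tiers."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in TRANSPARENCY_USES:
        return "limited"
    return "minimal"
```

Even this crude triage is useful in practice: it forces an organization to tag every deployed AI system with a use case, which is the inventory step most compliance programs discover they have skipped.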
Taiwan's AI Basic Law draft, currently advancing through the Executive Yuan's legislative process, mirrors the EU AI Act's structural approach. It emphasizes transparency, accountability, and human rights protection as foundational principles. For domestically-focused Taiwanese enterprises, this will become the primary compliance benchmark for government procurement contracts and financial sector regulatory requirements.
The convergence of these three frameworks means that compliance need not be three separate projects: a single ISO 42001-compliant AIMS serves as one integrated governance architecture that simultaneously addresses all three regulatory environments.
Winners Consulting Services Co. Ltd.: Translating Research into Auditable Governance Architecture
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) assists Taiwanese enterprises in designing, implementing, and certifying AI management systems that satisfy ISO 42001, EU AI Act, and Taiwan AI Basic Law requirements simultaneously. Our approach is grounded in the technical research tradition represented by this paper—treating AI ethics as an engineering discipline, not a communications exercise.
- AI Ethical Scenario Mapping (aligned with Category One of the paper's taxonomy): We facilitate structured workshops to systematically identify the ethical dilemma scenarios your AI systems may encounter. This output serves as the foundational input for ISO 42001 Clause 6 risk assessment and EU AI Act conformity assessment documentation. Our standard scenario-mapping engagement produces a comprehensive scenario register within 5 to 10 business days.
- Dual-Level Risk Classification Assessment (aligned with Categories Two and Three): We conduct both individual-system risk assessments (evaluating each AI application against the EU AI Act's four-tier risk classification) and cross-system interaction assessments (evaluating emergent risks when multiple AI systems operate in concert). The output is a Risk Treatment Plan that satisfies ISO 42001 Clause 6.1 requirements and provides the documented risk management system required by EU AI Act Article 9.
- Human-AI Interaction Transparency Audit (aligned with Category Four): For AI systems that directly influence employee or customer decisions—including HR screening tools, performance management AI, credit assessment systems, and customer-facing recommendation engines—we conduct interface transparency reviews. We assess explainability design, appeal mechanism availability, and human oversight provisions, producing the documentation required for ISO 42001 Annex A controls A.6.1 and A.6.2, and the transparency disclosures required by EU AI Act Article 13.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Diagnostic, helping Taiwanese enterprises establish an ISO 42001-aligned management mechanism within 90 days.
Apply for Your Free Governance Diagnostic →

Frequently Asked Questions
- Our AI systems are already deployed in production. Is it too late to build ethics frameworks around them retroactively?
- It is not too late, and acting now is significantly more cost-effective than waiting for a regulatory incident. ISO 42001 explicitly accommodates retrospective risk assessment and control implementation for existing AI systems—there is no requirement to rebuild or replace deployed systems. The critical deliverable is governance documentation: a risk register, documented ethical decision logic, audit trails, and a human oversight protocol. Winners Consulting Services Co. Ltd.'s experience with Taiwanese mid-market enterprises indicates that a foundational ISO 42001-aligned governance architecture can be established for an existing AI portfolio within 90 days, reaching certification-ready status within 6 months. The four-category taxonomy from this paper provides the systematic checklist that ensures no ethical risk category is overlooked during the retrospective assessment.
- Our company has no direct business with the EU. Does the EU AI Act still apply to us?
- In most cases, yes, through three channels. First, the EU AI Act applies extraterritorially to any AI system whose outputs are used within the EU—if your AI system's results reach EU-based users through any path, you are within scope. Second, Taiwan's AI Basic Law draft is architecturally modeled on the EU AI Act, meaning compliance with the EU Act is the most reliable way to ensure readiness for Taiwan's domestic regulations before they are finalized. Third, an increasing number of international enterprise clients—including Japanese and American multinational corporations—are now requiring AI compliance documentation from their Taiwanese supply chain partners as a condition of contract renewal, and ISO 42001 certification is the most internationally recognized form of that documentation.
- What does ISO 42001 certification actually require from a Taiwanese enterprise?
- ISO 42001 requires the establishment of a documented Artificial Intelligence Management System (AIMS) with six core elements: an AI governance policy and organizational objectives; an AI risk assessment procedure that identifies, analyzes, and evaluates risks from AI systems; documented ethical decision controls (aligned with the taxonomy categories in this paper); a human resource competency framework for AI-related roles; an internal audit program; and a management review and continuous improvement mechanism. Annex A provides 38 specific control measures covering AI system design, data governance, operational monitoring, and incident response. For Taiwanese enterprises pursuing both ISO 42001 certification and EU AI Act compliance simultaneously, Winners Consulting Services Co. Ltd. maps both requirements into a single integrated control framework, eliminating duplicated documentation effort.
- How long does it take and what are the realistic steps to build an AI governance framework from scratch?
- Based on Winners Consulting Services Co. Ltd.'s delivery experience with Taiwanese enterprises, the timeline varies by organizational complexity. Small and medium enterprises (fewer than 500 employees, 3 to 10 AI systems): 4 to 6 months to certification-ready status. Large enterprises or groups (500 or more employees, complex multi-system AI environments): 9 to 12 months. We recommend a four-phase approach: Phase 1 (30 days)—Current State Diagnostic and ISO 42001 Gap Analysis; Phase 2 (45 to 60 days)—AIMS Design and Documentation Development; Phase 3 (30 to 45 days)—Implementation and Internal Training; Phase 4 (30 days)—Trial Operation, Internal Audit, and Management Review. The complimentary diagnostic we offer covers Phase 1 in full, giving you a concrete gap analysis before you commit any project budget.
- Why engage Winners Consulting Services Co. Ltd. for AI governance?
- Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) occupies a distinctive position in Taiwan's AI governance consulting landscape: we combine ISO management system certification expertise, substantive AI technical understanding, and active monitoring of international regulatory developments including the EU AI Act implementation timeline and Taiwan's AI Basic Law legislative progress. Our consultants track the academic research tradition represented by this paper—understanding the technical foundations of AI ethics, not just the compliance checklist—which means the governance architectures we design are structurally sound, not merely documentation-compliant.
Related Services & Further Reading
Want to apply these insights to your enterprise?
Get a Free Assessment