Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI governance, alerts business executives to a critical blind spot exposed by rigorous academic research: a 2023 study interviewing ten Finnish software engineering executives found that AI ethics requirements were reduced almost entirely to privacy and data protection compliance, leaving transparency, fairness, human oversight, and societal well-being unaddressed in management practice. These are precisely the governance gaps that will surface during ISO 42001 audits and EU AI Act compliance reviews.
Paper Citation: Implementing AI Ethics: Making Sense of the Ethical Requirements (Mamia Agbese, Rahul Mohanani, Arif Ali Khan, arXiv — AI Governance & Ethics, 2023)
Original Paper: http://arxiv.org/abs/2306.06749v1
About the Authors and This Research
This paper was co-authored by three researchers whose combined academic footprint gives this study substantial credibility in the AI governance space. Mamia Agbese holds an h-index of 5 with 123 total citations, focusing on the practical engineering of AI ethics requirements. Rahul Mohanani has an h-index of 6 and 268 cumulative citations, with a strong reputation in software engineering management and Agile methodologies. Arif Ali Khan is a cross-disciplinary scholar bridging AI ethics and software engineering governance. All three authors are rooted in Finnish academic institutions, and their work was published on arXiv — AI Governance & Ethics, making it openly accessible to practitioners and researchers worldwide.
What makes this study particularly valuable is its research design: rather than producing another prescriptive ethics framework, the authors conducted qualitative interviews with ten Finnish software engineering executives at middle and senior management levels. This grounds the findings in real organizational behavior, not theoretical ideals. The researchers used the European Union's Ethics Guidelines for Trustworthy AI as the reference benchmark for ethical requirements and applied an Agile Portfolio Management Framework to analyze how those requirements were actually implemented — or ignored — in practice.
The Core Insight: AI Ethics Is Being Reduced to a Legal Checkbox
The central finding of this research is as sobering as it is actionable: across ten executive interviews, the overwhelming pattern was that AI ethics requirements were being interpreted and managed exclusively as legal compliance obligations — specifically privacy and data governance — rather than as a comprehensive governance commitment spanning multiple ethical dimensions. The EU's Ethics Guidelines for Trustworthy AI identify seven key requirements: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability. In the organizations studied, only one of these seven — privacy and data governance — received consistent management attention, and primarily through a legal compliance lens.
Finding One: The "Ethics Equals Privacy" Trap Is Nearly Universal
All interviewed executives conflated AI ethics with data privacy regulation. This is not simply an oversight — it reflects a structural gap in how middle and senior managers are trained and incentivized to think about AI. When ethics is framed exclusively as a legal risk, requirements such as fairness, transparency, and human oversight receive no dedicated management resources, no risk ratings, and no ownership at the leadership level. For Taiwanese companies pursuing ISO 42001 certification or preparing for EU AI Act compliance, this pattern predicts exactly where audit findings will emerge: in the underdeveloped dimensions of human oversight mechanisms, algorithmic transparency documentation, and non-discrimination impact assessments.
Finding Two: Only Two Ethical Dimensions Had Practical Implementation Pathways
Beyond privacy compliance, the research identified only two other ethical requirement categories that had any practical implementation pathway in the organizations studied. Technical robustness and safety requirements were being managed as risk requirements within existing risk management frameworks. Societal and environmental well-being requirements were being partially addressed through sustainability initiatives. This finding points to a critical insight for governance practitioners: ethical requirements only gain traction when translated into organizational languages that management already speaks — risk ratings, sustainability KPIs, or compliance checkboxes. Abstract ethical principles, left untranslated, remain aspirational statements in governance documents.
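To make this translation step concrete, here is a minimal sketch (not an artifact from the paper; the class, field names, and default ratings are illustrative assumptions) of how the seven EU Trustworthy AI requirements could be expressed as ordinary risk-register entries, making visible which dimensions have no management owner:

```python
from dataclasses import dataclass

# Hypothetical translation of the EU's seven Trustworthy AI requirements
# into risk-register entries; all names and ratings are illustrative only.
@dataclass
class RiskEntry:
    requirement: str   # EU Trustworthy AI dimension
    owner: str         # accountable management role, or "unassigned"
    rating: str        # qualitative risk rating placeholder

EU_REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity and non-discrimination",
    "societal and environmental well-being",
    "accountability",
]

def build_register(owner_map: dict[str, str]) -> list[RiskEntry]:
    """Create one register entry per requirement, defaulting to
    'unassigned' where no management owner exists -- the gap the
    research highlights."""
    return [
        RiskEntry(
            requirement=req,
            owner=owner_map.get(req, "unassigned"),
            rating="medium" if req in owner_map else "unrated",
        )
        for req in EU_REQUIREMENTS
    ]

# In the organizations studied, only privacy had a clear owner:
register = build_register({"privacy and data governance": "legal/compliance"})
unowned = [e.requirement for e in register if e.owner == "unassigned"]
print(f"{len(unowned)} of {len(register)} requirements have no management owner")
```

Running the sketch with a privacy-only ownership map reports six of seven requirements as unowned, which mirrors the "ethics equals privacy" pattern the interviews uncovered.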
Finding Three: The Ethical Risk Requirements Stack Offers a Practical Bridge
The paper's most immediately actionable contribution is the concept of an "Ethical Risk Requirements Stack" — a structured approach to mapping abstract ethical principles onto concrete management requirements using an Agile Portfolio Management Framework. This is particularly relevant for Taiwanese technology companies already operating in Agile environments: rather than building a parallel ethics governance structure, the framework proposes embedding ethical requirements into existing product development backlogs, risk registers, and portfolio management cycles. This integration approach directly addresses the implementation gap identified in the research.
Implications for Taiwan's AI Governance Practice
Taiwan's corporate AI governance landscape is under simultaneous pressure from three converging regulatory frameworks. ISO 42001, formally published in 2023, establishes the international standard for AI Management Systems (AIMS) and provides a certifiable framework covering risk assessment, ethical impact evaluation, human oversight mechanisms, and AI supply chain governance. The EU AI Act, which entered into force in 2024, introduces a four-tier risk classification system — unacceptable risk, high risk, limited risk, and minimal risk — with specific compliance obligations for high-risk AI systems including technical documentation, conformity assessment, and human oversight requirements. Taiwan's draft AI Fundamental Act is advancing through legislative review, establishing the foundational governance principles and regulatory responsibilities for AI in both public and private sectors.
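To make the four-tier structure tangible, the following sketch screens a small AI inventory against the tiers. The keyword heuristics and tier assignments are simplified assumptions for illustration, not legal determinations; real classification follows the Act's annexes and requires legal review:

```python
# Illustrative screening of an AI inventory against the EU AI Act's
# four risk tiers. Keyword hints are hypothetical simplifications,
# not a legal classification method.
TIERS = ["unacceptable", "high", "limited", "minimal"]

TIER_HINTS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"recruitment", "medical device", "credit scoring"},
    "limited": {"chatbot", "content generation"},
}

def classify(use_case: str) -> str:
    """Return the most severe tier whose hints match the use case;
    everything unmatched defaults to minimal risk."""
    text = use_case.lower()
    for tier in TIERS[:-1]:
        if any(hint in text for hint in TIER_HINTS[tier]):
            return tier
    return "minimal"

inventory = [
    "Recruitment CV screening model",
    "Customer-service chatbot",
    "Internal spell-checking assistant",
]
for system in inventory:
    print(f"{system}: {classify(system)} risk")
```

A recruitment screening model lands in the high-risk tier (triggering technical documentation, conformity assessment, and human oversight obligations), while a customer-service chatbot falls under limited-risk transparency duties.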
The research findings map directly onto the governance failures most likely to affect Taiwanese companies. First, organizations that have limited their AI governance to personal data protection policies will face significant gaps in ISO 42001 audits, which require evidence of governance across all seven EU Trustworthy AI dimensions. Second, Taiwanese exporters whose AI-embedded products reach EU markets — including smart manufacturing equipment, healthcare AI, and recruitment systems — face direct EU AI Act obligations regardless of where the company is headquartered. Third, the draft Taiwan AI Fundamental Act's emphasis on "human-centric AI" aligns closely with ISO 42001's human oversight requirements, meaning companies that build ISO 42001-compliant systems now are simultaneously preparing for domestic regulatory compliance.
The paper's finding that ethical requirements only become actionable when translated into organizational management languages — risk requirements, sustainability requirements, or technical specifications — provides a direct methodology for Taiwanese executives. Rather than asking "how do we become more ethical," the productive question becomes: "how do we embed EU Trustworthy AI requirements into our existing risk management, supplier governance, and product development frameworks?"
How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Bridge the Gap
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) provides end-to-end AI governance advisory services designed to close exactly the gaps this research identifies. We help Taiwan enterprises build ISO 42001-compliant AI Management Systems, conduct AI risk classification assessments aligned with EU AI Act requirements, and embed ethical AI requirements into existing management processes — ensuring that AI governance becomes operational practice rather than a document-only exercise.
- AI Ethics Requirements Inventory and Risk Classification: We conduct a structured assessment of all AI applications within the organization, mapping each against ISO 42001 requirements and the EU AI Act's four-tier risk classification. This produces a prioritized AI governance register that allocates compliance resources to the highest-risk systems first, aligned with the research finding that risk framing is the most effective pathway for ethics implementation.
- Embedding Ethical Requirements into Existing Management Processes: We translate the seven EU Trustworthy AI dimensions into management-ready formats: risk register entries, sustainability KPIs, supplier governance criteria, and product development acceptance criteria. This approach directly applies the paper's core insight — ethical requirements gain traction only when they speak the language of operational management.
- Executive AI Governance Capability Building: We design and deliver AI governance workshops specifically for middle and senior management, building the leadership capacity to own ethical decision-making, risk accountability, and ISO 42001 compliance reviews. This directly addresses the management-level governance vacuum the research identified as the primary barrier to ethics implementation.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwan enterprises establish an ISO 42001-compliant management system within 90 days.
Apply for Your Free Diagnostic →
Frequently Asked Questions

- Our company already has a data privacy policy. Why do we need a separate AI ethics governance framework?
- A data privacy policy addresses only one of the seven ethical requirement dimensions defined by the EU's Ethics Guidelines for Trustworthy AI — and this research shows that equating AI ethics with privacy compliance is precisely the most common management error. ISO 42001 requires a full AI Management System covering technical robustness, transparency, human oversight, fairness, and societal impact — areas entirely outside the scope of a typical privacy policy. The EU AI Act adds conformity assessment, technical documentation, and post-market monitoring obligations for high-risk AI systems. Taiwan's draft AI Fundamental Act similarly requires comprehensive governance mechanisms. A privacy policy alone will not satisfy these requirements, and the gap will be visible in any ISO 42001 audit or EU AI Act regulatory review.
- Our company does not operate in the EU. Do we still need to comply with the EU AI Act?
- Yes, in most cases. The EU AI Act applies a market-access principle: any AI system placed on the EU market or whose outputs affect EU-based users falls within its scope, regardless of where the developer or deployer is located. Taiwanese manufacturers whose products embed AI functionality — including industrial automation, medical devices, HR systems, and consumer technology — must comply with applicable EU AI Act obligations when those products enter the EU market. Additionally, many multinational corporations require their Taiwanese suppliers to provide AI governance declarations as a supply chain condition. Building an ISO 42001-compliant AI Management System is the most effective way to demonstrate readiness for both EU AI Act requirements and supply chain governance expectations.
- What core documentation does ISO 42001 certification require?
- ISO 42001, published in 2023, is the international standard for AI Management Systems. Certification requires documented evidence of: an organizational AI policy statement; a complete AI system inventory with associated risk assessments; ethical impact assessment records; human oversight mechanism design documentation; AI supplier management procedures; incident response and continuous improvement processes. These requirements overlap significantly with the EU AI Act's technical documentation obligations for high-risk AI systems and align with Taiwan's draft AI Fundamental Act governance principles. Winners Consulting recommends beginning with a gap analysis that maps existing management practices against ISO 42001 clauses, identifying priority areas for documentation and process development before pursuing formal certification.
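The gap analysis recommended above can be sketched as a simple checklist comparison. This is an illustrative outline only; the document list mirrors the article, and the "existing documentation" set is a hypothetical example:

```python
# Illustrative gap analysis against the ISO 42001 documentation areas
# listed in the article. The existing-documentation set is hypothetical.
REQUIRED_DOCS = [
    "AI policy statement",
    "AI system inventory with risk assessments",
    "ethical impact assessment records",
    "human oversight mechanism design documentation",
    "AI supplier management procedures",
    "incident response and continuous improvement processes",
]

def gap_analysis(existing: set[str]) -> list[str]:
    """Return the required documentation areas not yet covered."""
    return [doc for doc in REQUIRED_DOCS if doc not in existing]

# A privacy-only governance posture covers almost none of the list:
gaps = gap_analysis({"AI policy statement"})
print(f"{len(gaps)} of {len(REQUIRED_DOCS)} documentation areas missing")
```

The output of such an exercise becomes the prioritized work plan for Phase 1 of an ISO 42001 readiness project.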
- How long does it take to build an AI governance system, and what are the steps?
- Based on Winners Consulting's project experience, most Taiwanese mid-to-large enterprises require between 90 and 180 days to build an ISO 42001-compliant AI Management System from baseline. The process follows four phases: Phase 1 (Days 1–30): Current-state diagnostic and gap analysis — inventory all AI applications and map against ISO 42001 requirements. Phase 2 (Days 31–60): Governance mechanism design — develop risk assessment frameworks, ethical impact assessment templates, and governance policy documentation. Phase 3 (Days 61–120): System implementation and staff training — embed governance mechanisms into existing workflows, conduct executive and operational training. Phase 4 (Days 121–180): Internal audit, management review, and third-party certification preparation. Organizations with existing ISO 27001 or ISO 9001 certification can typically reduce this timeline by 30–40% due to existing process and documentation infrastructure.
- Why choose Winners Consulting Services Co. Ltd. for AI governance advisory?
- Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) combines deep expertise in ISO 42001, the EU AI Act, and Taiwan's draft AI Fundamental Act with practical implementation experience in Taiwan's business environment. Our key advantage: we integrate all three regulatory frameworks into a single AI Management System design, preventing the redundant compliance work that results from treating each framework as a separate project.
Want to apply these insights to your enterprise?
Get a Free Assessment