
Insight: AI Governance and Ethics Framework for Sustainable AI and Sustainability


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical insight for enterprise leaders: as AI systems gain autonomous decision-making capabilities, technology alone can no longer guarantee organizational safety — an integrated ethics and governance framework is the true foundation of sustainable AI deployment. A 2022 research paper published on arXiv by Mahendra Samarawickrama identifies six systemic AI risk categories threatening human society and argues that only by embedding human ethics at the core of AI governance architecture — supported by ISO 42001 certification, EU AI Act compliance, and alignment with Taiwan's AI Basic Law — can organizations achieve truly sustainable AI outcomes.

Paper Citation: AI Governance and Ethics Framework for Sustainable AI and Sustainability (Mahendra Samarawickrama, arXiv — AI Governance & Ethics, 2022)
Original Paper: http://arxiv.org/abs/2210.08984v1


About the Author and This Research

Mahendra Samarawickrama is a researcher specializing in AI ethics and governance, whose 2022 paper published on arXiv represents a significant contribution to systematic thinking about AI governance frameworks. arXiv, maintained by Cornell University, is the world's leading open-access preprint platform for cutting-edge research across AI, machine learning, physics, and computer science, hosting over 200,000 new papers annually and serving as the first port of call for global AI research communities.

What distinguishes Samarawickrama's work from purely technical AI papers is its positioning of AI governance at the intersection of human ethics and social sustainability — directly echoing the spirit of the United Nations Sustainable Development Goals (SDGs). This research anticipated the global AI compliance wave that followed the EU AI Act's official entry into force in 2024, and its analytical framework across Diversity, Equity, and Inclusion (DEI) dimensions provided important intellectual groundwork that resonates strongly with the principles later codified in ISO 42001:2023, the world's first international standard for AI management systems.

Six AI Risks and an Ethics-First Governance Framework: Core Findings No Enterprise Can Ignore

The most important contribution of this research is that it pulls AI governance down from abstract technical discussion to the level of concrete, actionable ethics framework design. Samarawickrama identifies a cluster of systemic risks in contemporary AI deployment and argues compellingly that governance frameworks must be rooted in human ethics — not merely compliance checklists.

Core Finding 1: AI Risks Form a Multi-Dimensional Threat Matrix — Single-Point Defense Is Insufficient

The research explicitly enumerates six major emerging risk categories AI poses to human society: Autonomous Weapons, Automation-spurred Job Loss, Socio-economic Inequality, Bias Caused by Data and Algorithms, Privacy Violations, and Deepfakes. These six risk categories are not independent — they are deeply interconnected and mutually reinforcing, forming a dynamic threat matrix. This means that organizations focusing only on one risk dimension (for example, implementing data privacy protection while ignoring algorithmic bias and social equity concerns) are operating with a fundamentally incomplete AI governance architecture. This finding directly supports the rationale behind the EU AI Act's adoption of a Risk-based Approach to AI regulation — different AI applications carry vastly different risk profiles, and governance resources must be dynamically allocated accordingly. Under the EU AI Act, engaging in prohibited, unacceptable-risk AI practices can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher; non-compliance with high-risk system obligations carries fines of up to 3%.

Core Finding 2: Diversity, Equity and Inclusion (DEI) Are Critical Success Variables for AI Governance

Samarawickrama argues that Social Diversity, Equity, and Inclusion are not merely corporate social responsibility considerations — they are the core success factors for reducing AI risk, generating genuine business value, and advancing social justice. When AI training data lacks representational diversity or development teams lack diverse perspectives, systemic bias is amplified across every decision cycle the system executes. The research further emphasizes that cross-organizational partnerships and collaborations are more critical than ever for ensuring equitable access to distributed data, people, and capabilities. For Taiwan enterprises, the implication is clear: AI governance cannot remain solely the domain of the IT department — it must become a systemic, cross-departmental, and even cross-industry initiative.

Core Finding 3: AI Governance Frameworks Must Be Grounded in Human Ethics, Not Just Regulatory Compliance

The most forward-looking insight of this research is that the fundamental solution to AI ethics and governance challenges lies in establishing Human Ethics as the foundational layer underlying all AI system design, deployment, and oversight. Regulations such as the EU AI Act and standards such as ISO 42001 represent minimum requirements. Truly sustainable AI deployment requires organizations to proactively exceed regulatory baselines at the ethical level. This is highly consistent with ISO 42001:2023's emphasis on building a "responsible AI management culture" rather than merely achieving "documentation compliance."

Three Strategic Implications for Taiwan's AI Governance Practice: Where ISO 42001, EU AI Act, and Taiwan's AI Basic Law Converge

Taiwan enterprises stand at a historic inflection point in AI governance. The EU AI Act entered into force in 2024 and will be phased in through 2026. Any Taiwan enterprise with business relationships in the EU market — regardless of where they sit in the supply chain — faces direct compliance pressure. Meanwhile, Taiwan's AI Basic Law (人工智慧基本法) continues to advance through the legislative process, expected to establish a foundational regulatory framework for domestic AI applications. Against this backdrop, the governance principles revealed by Samarawickrama's research carry three layers of direct strategic significance for Taiwan enterprises.

First Implication: AI Risk Classification Is the Starting Point of Compliance Readiness, Not the Endpoint. The EU AI Act classifies AI systems into four risk tiers: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. Violations of the unacceptable-risk (prohibited practice) provisions carry fines of up to 7% of global annual turnover. Taiwan enterprises must immediately inventory all existing AI applications, complete risk classification assessments, and develop effective compliance roadmaps. ISO 42001 provides the systematic AI management system framework that serves as the best bridge for Taiwan enterprises to align with EU AI Act requirements.
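The inventory-and-classify step described above can be sketched as a simple AI risk register. The sketch below is illustrative only: the four tiers come from the EU AI Act, but the class names, fields, and prioritization rule are hypothetical conventions, not structures defined by the Act or by ISO 42001.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemEntry:
    """One AI application recorded in the enterprise inventory."""
    name: str
    business_unit: str
    tier: RiskTier
    notes: str = ""


class AIRiskRegister:
    """A minimal register that orders systems by compliance urgency."""

    def __init__(self) -> None:
        self.entries: list[AISystemEntry] = []

    def add(self, entry: AISystemEntry) -> None:
        self.entries.append(entry)

    def compliance_priorities(self) -> list[AISystemEntry]:
        # Unacceptable-risk systems must be phased out first; high-risk
        # systems then need conformity work before lower tiers.
        order = [RiskTier.UNACCEPTABLE, RiskTier.HIGH,
                 RiskTier.LIMITED, RiskTier.MINIMAL]
        return sorted(self.entries, key=lambda e: order.index(e.tier))
```

In practice each entry would also carry the evidence fields an audit requires (intended purpose, data sources, responsible owner), but even this minimal structure makes the "classification first, remediation second" sequence explicit.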

Second Implication: Algorithmic Bias and Data Governance Are the Most Overlooked Hidden Risks in Taiwan's Industries. The data bias problems identified in the research are particularly relevant for AI applications in Taiwan's manufacturing and financial services sectors. When training datasets lack representational diversity, AI system decisions will systematically disadvantage specific groups or scenarios — generating not only business losses but potential violations of Taiwan's Personal Data Protection Act and future AI Basic Law provisions.

Third Implication: Cross-Departmental AI Governance Committees Are the Most Missing Organizational Mechanism in Taiwan Enterprises. The cross-organizational partnerships emphasized in the research translate at the enterprise level to cross-departmental AI Governance Committees. ISO 42001 explicitly requires Top Management to demonstrate leadership commitment and establish clear role and responsibility assignments. Many Taiwan enterprises' AI applications remain in technology experimentation phases led by IT departments, lacking high-level governance structures — this is the systemic gap most urgently requiring attention.

How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Build Sustainable AI Governance

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwan enterprises build AI management systems that meet the requirements of both ISO 42001 and the EU AI Act, conducts AI risk classification assessments, and ensures AI applications comply with Taiwan's AI Basic Law. We translate the ethical governance framework principles from Samarawickrama's research into immediately actionable implementation plans for Taiwan enterprises.

  1. AI Risk Inventory and Classification Assessment (Corresponding to Core Finding 1): Using the EU AI Act's four-tier risk classification framework, we help enterprises systematically inventory all current and planned AI applications, establish an AI Risk Register, and benchmark against ISO 42001 Clause 6 (Planning: actions to address risks and opportunities) to develop priority action lists — ensuring high-risk AI systems complete necessary adjustments ahead of the EU AI Act's staged compliance deadlines, which began taking effect in 2025.
  2. Cross-Departmental AI Governance Committee Establishment (Corresponding to Core Findings 2 and 3): We help enterprises design the organizational structure, role definitions, and decision-making processes for AI Governance Committees, ensuring Top Management leadership commitment meets ISO 42001 Clause 5 requirements and integrating DEI principles into AI system development and procurement evaluation criteria to prevent algorithmic bias at the organizational source.
  3. AI Ethics Policy and Management Documentation System: We help enterprises develop AI Ethics Policies, AI Usage Guidelines, and monitoring mechanisms that embed ethical commitments into daily operations and satisfy ISO 42001's documented-information requirements.


Related Services & Further Reading

Want to apply these insights to your enterprise?

Get a Free Assessment