
Insight: A multilevel framework for AI governance


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical insight from a landmark 2023 research paper: ethical principles for AI will never translate into real-world practice without a structured, multilevel governance framework connecting governments, corporations, and citizens. For Taiwanese enterprises navigating ISO 42001 certification, EU AI Act compliance, and the requirements of Taiwan's AI Basic Act, this research provides the most academically rigorous justification yet for why governance architecture—not just policy statements—must be the foundation of every enterprise AI strategy.

Paper Citation: A multilevel framework for AI governance (Hyesun Choung, Prabu David, John S. Seberger, arXiv — AI Governance & Ethics, 2023)
Original Paper: http://arxiv.org/abs/2307.03198v2

Read Original Paper →

About the Authors and This Research

This paper is co-authored by three scholars whose combined expertise spans communication science, media studies, and information ethics. Hyesun Choung focuses on technology adoption and trust in AI-mediated communication, with an h-index of 11 and 879 cumulative citations—a researcher whose work consistently bridges behavioral theory and AI policy. Prabu David is a senior professor at Michigan State University's College of Communication Arts & Sciences, with an h-index of 28 and over 3,471 cumulative citations, making him one of the world's most cited scholars in media effects and technology ethics. John S. Seberger contributes expertise in information privacy and digital ethics policy, rounding out a team whose interdisciplinary lens gives the paper both empirical depth and practical policy relevance.

Published in 2023—the same year ISO 42001 was formally released and as the EU AI Act moved into its final legislative stages—this research appeared at precisely the moment global AI governance frameworks were crystallizing. Its timing amplifies its influence: it offers a theoretically grounded model at exactly the point when enterprises and regulators were searching for one.

The Core Problem: Why AI Ethics Statements Fail Without Governance Architecture

The central question this paper addresses is deceptively simple but profoundly important: why do so many AI ethical guidelines issued by governments, industry bodies, and corporations fail to produce meaningful change in how AI systems are actually developed and deployed? The researchers argue that the missing link is a mediating governance structure—a multilevel framework that creates accountability, communication, and enforcement pathways between the abstract level of ethical principles and the concrete level of daily AI operations.

The proposed framework organizes AI governance into three interdependent stakeholder levels: Government, responsible for regulatory frameworks and public accountability; Corporations, responsible for implementing governance mechanisms within their organizational boundaries; and Citizens, whose trust, consent, and feedback create the demand-side accountability that keeps the other two levels honest. The framework then evaluates the relationships between these levels through three dimensions of trust: Competence (does this actor have the capability to govern AI responsibly?), Integrity (does this actor follow consistent, transparent principles?), and Benevolence (does this actor genuinely prioritize user and societal wellbeing?).
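The three-by-three structure above lends itself to a simple diagnostic grid. The sketch below is our own illustration, not code from the paper; the 0 to 5 maturity scale, function names, and example scores are all hypothetical:

```python
# Sketch of the multilevel trust grid: three stakeholder levels crossed
# with three trust dimensions. Scale and example values are hypothetical.
LEVELS = ("government", "corporation", "citizen")
DIMENSIONS = ("competence", "integrity", "benevolence")

def blank_grid():
    """A 3x3 grid: each (level, dimension) cell holds a 0-5 maturity score."""
    return {level: {dim: 0 for dim in DIMENSIONS} for level in LEVELS}

def weakest_cells(grid, threshold=3):
    """Return (level, dimension) pairs scoring below the threshold:
    candidate priorities for governance remediation."""
    return [(level, dim)
            for level, cells in grid.items()
            for dim, score in cells.items()
            if score < threshold]

grid = blank_grid()
grid["corporation"]["integrity"] = 4    # e.g. documented, auditable AI decisions
grid["corporation"]["benevolence"] = 2  # e.g. user wellbeing not yet a design criterion
print(weakest_cells(grid))
```

Scoring each cell separately makes the paper's point concrete: a strong score at one level (here, corporate integrity) does nothing to close gaps at the other levels or dimensions.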

Core Finding 1: Single-Level Governance Always Fails

One of the paper's most operationally significant findings is that AI governance designed to function at only one level is structurally incapable of achieving its stated goals. Government regulation without corporate implementation mechanisms creates compliance theater. Corporate AI ethics policies without citizen accountability channels produce self-serving governance that protects enterprise interests more than user rights. Citizen advocacy without regulatory backing lacks enforcement power. The research demonstrates that effective AI governance requires all three levels to be simultaneously active, mutually reinforcing, and structurally connected. This finding directly validates the architectural logic of ISO 42001, which requires organizations to assess and manage AI risks in relation to both their internal organizational context and their external regulatory and stakeholder environment—a recognition that no governance system is an island.

Core Finding 2: Trust Dimensions Provide a Diagnostic Tool for Governance Quality

The paper's second major contribution is the use of trust theory—specifically the dimensions of competence, integrity, and benevolence—as a diagnostic lens for evaluating governance quality at each level. This is significant for practitioners because it transforms an abstract governance framework into a measurable assessment tool. An organization can ask: Do our AI teams have the technical competence to identify and mitigate AI risks (competence)? Does our AI decision-making process follow documented, auditable principles (integrity)? Are our AI systems designed with the genuine wellbeing of users as a primary design criterion, not an afterthought (benevolence)? These three questions map directly onto the risk assessment, documentation, and human oversight requirements embedded in both ISO 42001 and the EU AI Act's obligations for high-risk AI systems.

What This Means for Taiwanese Enterprises Right Now

Taiwan's AI governance landscape entered a new phase in 2024 with the release of the draft AI Basic Act (人工智慧基本法草案), which would establish foundational obligations for enterprises deploying AI systems, including risk assessment requirements, transparency obligations, and accountability mechanisms. Simultaneously, the EU AI Act—which entered into force in August 2024 and will apply its high-risk AI system requirements from August 2026—extends its jurisdiction to any AI system whose outputs are used within the EU, regardless of where the system's developer or deployer is headquartered. This means Taiwanese manufacturers, financial service providers, and technology companies with EU-facing operations are already within the EU AI Act's regulatory scope.

The multilevel governance model proposed in this paper offers Taiwanese enterprises a strategic clarity that individual regulatory frameworks cannot provide alone. ISO 42001 addresses the corporate governance level with precision, providing a certifiable management system structure. The EU AI Act addresses the government regulatory level, setting mandatory risk classification and compliance obligations. Taiwan's AI Basic Act anchors the domestic regulatory context. What the research adds is the insight that these three frameworks should not be implemented as separate compliance projects—they should be integrated into a coherent three-level governance architecture where regulatory requirements, organizational mechanisms, and stakeholder communication systems reinforce each other.

For Taiwanese enterprises with limited governance resources, this means prioritizing AI risk classification first: identifying which AI systems fall into the EU AI Act's high-risk categories (including AI used in HR decisions, credit scoring, safety-critical systems, and law enforcement), conducting the risk assessments required by ISO 42001 Clause 6, and establishing the transparency and redress mechanisms that satisfy both citizen-level trust requirements and Taiwan AI Basic Act obligations—all within a unified governance architecture rather than three separate compliance silos.
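As a first-pass triage, this prioritization can be sketched as a lookup from use case to provisional EU AI Act risk tier. The mapping below is a simplified illustration only; the category names are an incomplete, hypothetical subset, and a real classification must follow the Act's annexes with legal advice:

```python
# Illustrative first-pass EU AI Act risk triage.
# The use-case sets below are a simplified, incomplete subset for
# demonstration, not a complete reading of the Act's annexes.
HIGH_RISK_USES = {"hr_decisions", "credit_scoring", "safety_critical", "law_enforcement"}
UNACCEPTABLE_USES = {"social_scoring", "subliminal_manipulation"}

def triage(use_case: str, user_facing: bool = False) -> str:
    """Return a provisional risk tier for an AI use case."""
    if use_case in UNACCEPTABLE_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    # Limited-risk obligations (e.g. transparency notices) attach to
    # systems that interact directly with people, such as chatbots.
    return "limited" if user_facing else "minimal"

print(triage("credit_scoring"))             # high
print(triage("chatbot", user_facing=True))  # limited
```

Even a crude triage like this lets a resource-constrained enterprise sort its AI inventory before committing to the full ISO 42001 Clause 6 risk assessment for the systems that matter most.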

How Winners Consulting Services Co. Ltd. Helps Taiwanese Enterprises Build Multilevel AI Governance

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwanese enterprises establish AI management systems compliant with ISO 42001 and the EU AI Act, conduct structured AI risk classification assessments, and ensure AI deployments meet the requirements of Taiwan's AI Basic Act. Our approach is directly informed by the multilevel governance framework described in this paper—we treat governance as a structural challenge requiring simultaneous action at the regulatory, organizational, and stakeholder trust levels, not a documentation exercise.

  1. Multilevel Governance Gap Analysis: We apply the paper's three-level framework (Government / Corporation / Citizen) and three trust dimensions (Competence / Integrity / Benevolence) as a diagnostic structure to assess your organization's current AI governance posture. This gap analysis is mapped against the specific requirements of ISO 42001 Clauses 4 through 10, producing a prioritized remediation roadmap that addresses structural governance gaps, not just documentation deficiencies.
  2. AI Risk Classification and Documentation: We lead your team through a systematic AI inventory and risk classification exercise aligned with the EU AI Act's four-tier risk hierarchy (Unacceptable Risk, High Risk, Limited Risk, Minimal Risk) and Taiwan's AI Basic Act risk management obligations. For each identified AI system, we develop the risk treatment plans, operational controls, and audit trails required by ISO 42001 Clause 8, ensuring your documentation is audit-ready for both regulatory inspections and third-party certification bodies.
  3. Stakeholder Trust Infrastructure: Drawing on the paper's emphasis on citizen-level trust as a non-negotiable governance layer, we help you design AI transparency disclosures, user-facing explanation mechanisms, complaint and redress channels, and board-level AI governance reporting processes—ensuring your governance system satisfies the benevolence and integrity dimensions of trust that regulators, customers, and civil society increasingly demand.
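The gap-analysis step in item 1 can be pictured as a small data structure mapping findings to ISO 42001 clauses. Everything below is a hypothetical sketch: the clause titles are paraphrased, and the findings and severity scores are invented for illustration:

```python
# Sketch of a gap-analysis record mapped to ISO 42001 clauses 4-10.
# Clause titles are paraphrased; findings and severities are hypothetical.
CLAUSES = {
    4: "Context of the organization",
    5: "Leadership",
    6: "Planning (AI risk assessment and treatment)",
    7: "Support",
    8: "Operation (AI system operational controls)",
    9: "Performance evaluation",
    10: "Improvement",
}

gaps = [
    {"clause": 6, "finding": "No documented AI risk assessment procedure", "severity": 3},
    {"clause": 8, "finding": "Model validation results not recorded", "severity": 2},
    {"clause": 9, "finding": "No internal AIMS audit schedule", "severity": 1},
]

# Prioritized remediation roadmap: most severe structural gaps first.
roadmap = sorted(gaps, key=lambda g: g["severity"], reverse=True)
for item in roadmap:
    print(f"Clause {item['clause']} ({CLAUSES[item['clause']]}): {item['finding']}")
```

Keeping each finding tied to a specific clause is what turns a gap analysis into the prioritized, auditable roadmap described above rather than a generic list of weaknesses.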

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic to help Taiwanese enterprises establish an ISO 42001-aligned management system within 90 days.

Apply for Your Free Diagnostic →

Frequently Asked Questions

Our company already has an AI ethics policy. Why isn't that sufficient for AI governance?
An AI ethics policy is a statement of intent, not a governance mechanism. This paper's central finding is that ethical principles only translate into practice when supported by a structured, multilevel governance framework with clear accountability at the government, corporate, and citizen levels. Without operational mechanisms—documented processes, designated responsibilities, monitoring systems, and stakeholder communication channels—ethics policies function as marketing materials rather than governance tools. ISO 42001 specifically requires organizations to establish operational controls (Clause 8) and performance evaluation processes (Clause 9) that transform policy intent into verifiable practice. If your ethics policy does not have corresponding operational procedures, audit trails, and accountability assignments, it is not yet a governance system.
Does the EU AI Act apply to Taiwanese companies that don't have operations in Europe?
Yes, in many cases it does. The EU AI Act applies extraterritorially to AI systems whose outputs are used within the EU, regardless of where the system's provider or deployer is based. This means a Taiwanese SaaS provider whose product is used by EU-based customers, or a Taiwanese manufacturer whose AI-powered quality control system outputs are reviewed by EU-based clients, may fall within the Act's scope. The Act's high-risk AI system requirements—including mandatory risk assessments, technical documentation, human oversight mechanisms, and post-market monitoring—apply from August 2026. Taiwanese companies should conduct an EU AI Act applicability assessment now, rather than waiting until enforcement begins.
What does ISO 42001 certification actually require for a Taiwanese enterprise?
ISO 42001, published in 2023, is the world's first international standard for AI management systems. It requires organizations to establish, implement, maintain, and continually improve an AI management system (AIMS) covering the full lifecycle of AI development and deployment. Key requirements include: Clause 4 (organizational context, including identification of AI-related risks and opportunities and mapping of internal and external stakeholder expectations); Clause 6 (AI risk assessment and treatment planning); Clause 8 (AI system operational controls, including data governance, model validation, and human oversight); and Clause 9 (performance evaluation, including internal audits and management review). ISO 42001 is designed to be compatible with EU AI Act compliance requirements and Taiwan's AI Basic Act obligations, making it the most efficient single framework for enterprises seeking to satisfy multiple regulatory demands simultaneously.
How long does it take to implement an AI governance system, and what are the steps?
For most Taiwanese enterprises, a structured ISO 42001-aligned AI governance implementation takes between 90 and 180 days, depending on organizational size and existing management system maturity. The Winners Consulting Services Co. Ltd. implementation methodology consists of four phases: Phase 1 (2–4 weeks): Current state assessment and gap analysis against ISO 42001 requirements; Phase 2 (4–8 weeks): Governance mechanism design, including policy documentation, risk assessment procedures, and role assignments; Phase 3 (4–8 weeks): Pilot implementation, staff training, and operational control deployment; Phase 4 (2–4 weeks): Internal audit, management review, and certification readiness assessment. Organizations with existing ISO 9001 or ISO 27001 management systems typically reduce implementation time by approximately 30% due to transferable documentation and process structures. External certification audit scheduling adds 1–3 months after internal readiness is confirmed.
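As a quick arithmetic check, the four phase ranges above sum to 12 to 24 weeks, roughly the stated 90-to-180-day window (a sketch: weeks converted at 7 days each, with external certification audit scheduling excluded):

```python
# Check that the four phase ranges sum to roughly the stated
# 90-180-day window. Week ranges are taken from the text above.
phases = {
    "Phase 1: assessment and gap analysis": (2, 4),
    "Phase 2: governance mechanism design": (4, 8),
    "Phase 3: pilot, training, deployment": (4, 8),
    "Phase 4: audit and certification readiness": (2, 4),
}

min_weeks = sum(lo for lo, _ in phases.values())  # 12 weeks
max_weeks = sum(hi for _, hi in phases.values())  # 24 weeks
print(f"{min_weeks * 7}-{max_weeks * 7} days")    # 84-168 days
```

The 1-to-3-month external audit window and any remediation rework sit on top of this, which is why the overall engagement is quoted as 90 to 180 days rather than the bare 84-to-168-day phase total.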
Why should we choose Winners Consulting Services Co. Ltd. for AI governance advisory?
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of the few consulting firms in Taiwan with integrated expertise spanning ISO 42001 management system implementation, EU AI Act compliance advisory, and Taiwan AI Basic Act policy interpretation. Our advisory methodology is grounded in peer-reviewed research—including the multilevel governance framework described in this paper—ensuring that our recommendations are not only compliant with current regulations but structurally sound against future regulatory evolution. We have supported Taiwanese enterprises across the manufacturing, financial services, and technology sectors in establishing verifiable AI governance systems, and our 90-day implementation program is specifically designed to help organizations achieve compliance readiness without disrupting ongoing operations. We provide not just frameworks and documents, but operational governance infrastructure that stands up to regulatory scrutiny and builds genuine stakeholder trust.