Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, believes that the most critical barrier facing Taiwanese enterprises today is not a lack of AI capability — it is the persistent gap between declaring AI ethics principles and actually embedding them into organizational processes. A landmark 2022 academic paper by Finnish researchers at the University of Turku introduces the Hourglass Model of Organizational AI Governance, the first comprehensive framework to simultaneously address the environmental, organizational, and AI system lifecycle layers of governance — offering Taiwanese enterprises a concrete, actionable blueprint to align with ISO 42001, the EU AI Act, and Taiwan's forthcoming AI Basic Law.
Paper Citation: Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance (Matti Mäntymäki, Matti Minkkinen, Teemu Birkstedt, arXiv — AI Governance & Ethics, 2022)
Original Paper: http://arxiv.org/abs/2206.00335v2
About the Authors and This Research
The Hourglass Model was developed by three Finnish researchers affiliated with the University of Turku. Lead author Matti Mäntymäki is a professor in information systems and digital management, with an h-index of 44 and 6,958 cumulative citations — placing him among the top-tier scholars in AI governance and organizational IS research globally. Co-authors Matti Minkkinen and Teemu Birkstedt bring complementary expertise in AI ethics policy and organizational governance, creating a research team uniquely positioned at the intersection of academic rigor and practical applicability.
The paper was published in 2022 during a pivotal moment: the EU AI Act was under intense parliamentary debate, and ISO/IEC 42001 was being actively developed toward its 2023 publication. This contextual alignment gives the Hourglass Model a rare quality — it is simultaneously an academic contribution and a forward-looking compliance architecture that anticipated the regulatory landscape enterprises now face in 2025 and beyond.
The Core Problem: Why Dozens of AI Ethics Principles Have Failed to Change Organizational Behavior
The research begins from a deceptively simple observation: despite the proliferation of AI ethics guidelines — with major technology companies, governments, and international organizations having published scores of principle documents since 2016 — organizational AI development practices have not fundamentally changed. Bias, discrimination, opacity, and accountability gaps persist. The researchers ask why, and their answer is structural: existing governance models suffer from a critical architectural flaw. They tend to operate at either the macro level (regulatory frameworks, societal norms) or the micro level (technical specifications for individual AI systems), but they systematically neglect the organizational middle layer — the internal policies, accountability structures, governance roles, and management processes that must translate external requirements into internal practice.
The Hourglass Model's name is itself the argument: like an hourglass, the governance architecture is wide at the top (the external regulatory and social environment), narrows at the organizational core (the internal governance mechanisms), and widens again at the bottom (the diverse technical requirements of individual AI systems). The narrow middle — the organizational layer — is the critical chokepoint through which all governance must pass. If this layer is weak or absent, ethics principles remain at the level of corporate communications rather than operational reality.
Core Finding 1: AI Governance Requires Three Synchronized Layers — Not One, Not Two, but All Three
The Hourglass Model establishes that effective organizational AI governance must simultaneously address three interdependent layers. The Environmental Layer encompasses external regulatory requirements — most critically the EU AI Act's four-tier risk classification system (unacceptable risk, high risk, limited risk, minimal risk), ISO 42001's management system requirements, national AI legislation such as Taiwan's AI Basic Law, and broader societal expectations regarding responsible AI. The Organizational Layer covers internal governance structures: AI ethics policies, board-level oversight, dedicated AI governance roles (such as an AI Ethics Officer or AI Risk Committee), internal audit processes, accountability matrices, incident response procedures, and stakeholder engagement mechanisms. The AI System Layer addresses the governance requirements specific to individual AI systems throughout their complete lifecycle — from initial needs assessment and data collection through model development, testing, deployment, monitoring, and eventual decommissioning.
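The three layers described above can be made concrete as a simple data model. The sketch below is illustrative only — the class and field names are our assumptions, not constructs from the paper or from ISO 42001 — but it shows how an AI system inventory (AI System Layer) can be tied to the external requirements it must satisfy (Environmental Layer) inside one organizational register (Organizational Layer), using the EU AI Act's four risk tiers.

```python
from dataclasses import dataclass, field
from enum import Enum

# EU AI Act four-tier risk classification: prohibited practices,
# high-risk systems, limited-risk (transparency obligations), minimal risk.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (AI System Layer)."""
    name: str
    purpose: str
    risk_tier: RiskTier
    lifecycle_stage: str  # e.g. "data-collection", "deployed"

@dataclass
class GovernanceRegister:
    """Organizational Layer artifact: system inventory plus the external
    requirements (Environmental Layer) the organization must track."""
    environmental_requirements: list = field(default_factory=list)
    systems: list = field(default_factory=list)

    def high_risk_systems(self):
        # High-risk systems carry the heaviest obligations: technical
        # documentation, risk management, and conformity assessment.
        return [s for s in self.systems if s.risk_tier is RiskTier.HIGH]

register = GovernanceRegister(
    environmental_requirements=["EU AI Act", "ISO/IEC 42001", "Taiwan AI Basic Law"],
)
register.systems.append(
    AISystemRecord("resume-screener", "HR candidate ranking",
                   RiskTier.HIGH, "model-development")
)
print([s.name for s in register.high_risk_systems()])  # ['resume-screener']
```

The point of the structure is traceability: every system record can be queried against every environmental requirement, which is exactly the cross-layer linkage the Hourglass Model argues most organizations lack.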
The researchers' critical insight is that these three layers must be designed and operated as an integrated system. An organization with robust technical safety specifications but no internal accountability structure has a governance gap. An organization with an eloquent AI ethics policy but no lifecycle checkpoints has a governance gap. Only when all three layers operate in concert does meaningful AI governance become possible.
Core Finding 2: Lifecycle Governance Is the Most Overlooked — and Most Consequential — Dimension
Perhaps the most practically impactful finding concerns the AI system lifecycle. The research demonstrates that most organizational governance failures do not occur at the deployment stage — they are seeded much earlier, during data collection (where biases are introduced), requirements definition (where harmful use cases are not identified), and model design (where transparency trade-offs are made without adequate scrutiny). By the time an AI system reaches deployment, these early-stage governance failures have often become structural features of the system, extremely difficult and costly to remediate.
The Hourglass Model therefore establishes a foundational design principle: governance requirements must be embedded at every milestone of the AI development lifecycle, not applied as a post-hoc review. This means governance checkpoints at the data sourcing stage, the model architecture stage, the testing and validation stage, and the deployment and monitoring stage. This lifecycle-integrated approach is directly aligned with ISO 42001 Clause 8 (Operation), which requires organizations to plan, implement, and control processes for responsible AI development, and with the EU AI Act's requirements for high-risk AI systems to maintain technical documentation and implement risk management systems throughout the system's lifecycle — not merely at the point of market placement.
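The lifecycle checkpoints named above can be sketched as a simple stage gate: a system may not advance past a stage until every required evidence artifact exists. The checkpoint names and artifact labels below are our illustrative assumptions, not requirements quoted from ISO 42001 or the EU AI Act.

```python
# Hypothetical stage-gate table: each lifecycle stage must produce named
# evidence artifacts before the AI system may advance to the next stage.
LIFECYCLE_CHECKPOINTS = {
    "data-sourcing":      ["data-provenance-record", "bias-screening-report"],
    "model-architecture": ["transparency-tradeoff-memo"],
    "testing-validation": ["validation-report", "fairness-test-results"],
    "deployment":         ["deployment-approval", "monitoring-plan"],
}

def may_advance(stage, evidence):
    """A stage gate passes only when every required artifact is present."""
    required = set(LIFECYCLE_CHECKPOINTS[stage])
    return required <= set(evidence)

# Example: data sourcing cannot complete without the bias screening report.
print(may_advance("data-sourcing", {"data-provenance-record"}))          # False
print(may_advance("data-sourcing",
                  {"data-provenance-record", "bias-screening-report"}))  # True
```

A gate of this kind also generates the audit trail as a side effect: the evidence set collected at each stage is precisely the documentation an ISO 42001 auditor or an EU AI Act conformity assessment would later request.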
Implications for Taiwan AI Governance: Three Regulatory Pressures Are Converging Right Now
Taiwanese enterprises are facing an unprecedented convergence of regulatory requirements that makes the Hourglass Model's insights immediately actionable rather than academically interesting. Three forces are accelerating simultaneously. First, the EU AI Act entered into force in August 2024, with the prohibitions on unacceptable-risk AI systems applying from February 2025 and the full compliance requirements for high-risk AI systems applying from August 2026. Any Taiwanese enterprise with products, services, or supply chain relationships touching the EU market — which encompasses a significant portion of Taiwan's export-oriented economy — cannot treat EU AI Act compliance as a distant concern. Second, ISO/IEC 42001, published in December 2023, has rapidly become the international benchmark that enterprise clients, government procurement bodies, and multinational partners use to assess supplier AI governance maturity. Third, Taiwan's AI Basic Law continues to advance through the legislative process, with its passage expected to establish the domestic regulatory baseline for AI applications across all sectors.
The practical implication is urgent: building a comprehensive AI governance system aligned with ISO 42001 and structured according to the Hourglass Model's three-layer architecture typically requires 6 to 18 months depending on organizational complexity. Enterprises that wait until customers or regulators formally demand compliance documentation will find themselves unable to meet timelines. The window for proactive governance investment — where organizations can build capability before facing adversarial scrutiny — is closing.
The Hourglass Model provides a structural answer to each of these three pressures: the Environmental Layer maps to the regulatory requirements of the EU AI Act, ISO 42001, and Taiwan's AI Basic Law; the Organizational Layer maps to the internal management system requirements of ISO 42001 Clauses 4 through 7; and the AI System Layer maps to the technical and lifecycle requirements of ISO 42001 Clause 8 and the EU AI Act's conformity assessment procedures for high-risk AI systems.
How Winners Consulting Services Co. Ltd. Translates the Hourglass Model into ISO 42001 Certification Readiness
積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) helps Taiwanese enterprises build AI management systems compliant with ISO 42001 and the EU AI Act, conduct structured AI risk classification assessments, and ensure that AI applications meet the principles of Taiwan's AI Basic Law. Our consulting methodology uses the Hourglass Model as the diagnostic and design framework: we begin with an environmental layer analysis of applicable regulations, proceed to organizational layer system design, and conclude with AI system layer lifecycle integration.
- AI Governance Gap Assessment: Using the Hourglass Model's three-layer structure as the diagnostic framework, we systematically assess the organization's existing AI applications and governance mechanisms against ISO 42001 clauses (Clauses 4–10) and EU AI Act risk classification criteria. The output is a prioritized gap report with specific remediation recommendations tied to each identified deficiency.
- Organizational Governance System Design: We design and document the organizational layer components required by ISO 42001: AI policy (Clause 5.2), AI governance roles and responsibilities (RACI matrices aligned with Clause 5.3), AI risk assessment and treatment procedures (Clause 6.1), AI system inventory and classification protocols, and AI ethics committee or oversight body charters — ensuring that governance accountability is structurally embedded rather than informally distributed.
- AI System Lifecycle Governance Integration: Drawing directly on the Hourglass Model's lifecycle governance principles, we embed governance checkpoints into the organization's existing AI development, procurement, and deployment workflows. This includes data governance checkpoints, model validation procedures, deployment approval gates, and post-deployment monitoring protocols — all generating the auditable documentation required for ISO 42001 third-party certification and EU AI Act conformity assessment.
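The gap assessment described in the first step above can be sketched as a clause-by-clause comparison of evidence on hand against the ISO 42001 top-level clause structure. The clause numbers and titles below follow the standard's harmonized structure; the evidence labels and the function itself are our illustrative assumptions, not our actual assessment tooling.

```python
# ISO/IEC 42001 top-level clauses (harmonized management system structure).
ISO_42001_CLAUSES = {
    4: "Context of the organization",
    5: "Leadership",
    6: "Planning",
    7: "Support",
    8: "Operation",
    9: "Performance evaluation",
    10: "Improvement",
}

def gap_report(evidence_by_clause):
    """Clauses with no documented evidence are reported as gaps,
    in clause order, ready for prioritized remediation."""
    return [f"Clause {n}: {title}"
            for n, title in ISO_42001_CLAUSES.items()
            if not evidence_by_clause.get(n)]

# Example: an organization with only a scope statement and an AI policy
# still has gaps in Clauses 6 through 10.
gaps = gap_report({4: ["AIMS scope statement"], 5: ["AI policy v1.0"]})
print(gaps[0])  # Clause 6: Planning
```

In practice, each gap line would carry the remediation recommendation and risk-tier weighting described above; the mechanical comparison is the easy part, and the substance lies in what counts as sufficient evidence for each clause.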
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Readiness Assessment to help Taiwanese enterprises establish an ISO 42001-aligned management system within 90 days.
Request Your Free Assessment →

Frequently Asked Questions
- How is the Hourglass Model different from the AI ethics frameworks our enterprise already has?
- The Hourglass Model's essential difference is structural completeness. Most enterprise AI ethics frameworks consist of principle statements — fairness, transparency, accountability, privacy — but lack the organizational mechanisms that give those principles operational effect. The Hourglass Model requires that governance operate simultaneously at the environmental layer (mapping applicable regulations such as the EU AI Act and ISO 42001), the organizational layer (establishing accountable roles, internal policies, and governance processes), and the AI system layer (embedding governance checkpoints throughout the system lifecycle). If your existing framework addresses principles without specifying who is accountable, through what process, at which lifecycle stage, and verifiable by what documentation — then the Hourglass Model provides precisely the structural complement your governance architecture needs.
- Does our Taiwanese enterprise need to comply with the EU AI Act even if we primarily sell in the Taiwan domestic market?
- The answer depends on your supply chain and technology partnerships, not just your direct customer geography. The EU AI Act has an extraterritorial scope similar to GDPR: it applies to AI systems placed on the EU market or affecting EU residents, regardless of where the AI provider is located. For many Taiwanese manufacturers and technology companies, AI components embedded in products or services sold to European end customers — even indirectly through OEM or ODM relationships — may fall within scope. Additionally, multinational enterprise clients increasingly apply EU AI Act compliance requirements to their global supplier networks as a contractual condition, meaning Taiwanese suppliers may face compliance demands from their B2B customers well before any direct regulatory exposure. ISO 42001 certification provides a compliance-adjacent framework that demonstrates AI governance maturity regardless of direct EU AI Act applicability.
- What does ISO 42001 actually require, and how does it connect to the EU AI Act and Taiwan's AI Basic Law?
- ISO 42001 is a management system standard — similar in structure to ISO 9001 for quality or ISO 27001 for information security — that specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS). Its core clauses address organizational context (Clause 4), leadership and AI policy (Clause 5), AI-specific risk planning (Clause 6), support and competence (Clause 7), operational lifecycle controls (Clause 8), performance evaluation (Clause 9), and continual improvement (Clause 10). The connections run in both directions: the EU AI Act requires providers of high-risk AI systems to operate quality management and lifecycle risk management processes, and an ISO 42001-conformant AIMS supplies much of the organizational evidence those obligations demand. Taiwan's AI Basic Law, for its part, is expected to set principle-level requirements for AI applications, and ISO 42001 provides a ready-made, certifiable structure for demonstrating that those principles are operationalized rather than merely declared.