Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI governance, warns that the EU AI Act's requirements for transparency, fairness, and accountability will not automatically translate into enforceable technical standards. A widely cited 2024 study from the Oxford Internet Institute, cited 76 times to date, explains why this gap is a direct risk for every Taiwanese enterprise with European market exposure: when standardisation bodies fail to resolve hard normative questions, the burden of defining "ethical AI" shifts silently to developers and deployers. Taiwanese companies that proactively establish ISO 42001-aligned AI management systems now will be positioned to meet the EU AI Act's high-risk provisions taking full effect in August 2026, while those that wait risk compliance gaps, market exclusion, and reputational harm.
Paper Citation: Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act (Johann Laux, Sandra Wachter, Brent Mittelstadt, Computer Law & Security Review, 2024)
Original Paper: https://doi.org/10.1016/j.clsr.2024.105957
About the Authors and This Research
This paper represents some of the most influential AI governance scholarship produced by the Oxford Internet Institute in 2024. Johann Laux is a researcher specialising in the institutional design of AI standardisation and regulatory frameworks, bringing a political economy lens to the otherwise technical debate around AI standards. Sandra Wachter is a Professor at the Oxford Internet Institute and one of the most widely cited voices globally on AI legal accountability and algorithmic fairness; her research has directly informed European Union policy documents and has been referenced in parliamentary proceedings across multiple jurisdictions. Brent Mittelstadt is a Senior Research Fellow at the Oxford Internet Institute whose work on AI ethics standards, data protection, and algorithmic accountability has shaped the international discourse on responsible AI development. Together, this Oxford team represents a formidable concentration of interdisciplinary expertise spanning law, computer science, and ethics. The paper was published in the Computer Law & Security Review, one of the leading journals in technology law, and has accumulated 76 citations since its 2024 publication, making it one of the most rapidly influential papers in the recent AI governance literature.
The EU AI Act's Standardisation Problem: Three Pathways and Why Two of Them Fail
The central insight of this paper is deceptively simple but strategically profound: the EU AI Act mandates that AI systems comply with abstract normative principles—transparency, fairness, accountability—but it does not specify how these principles should be operationalised into concrete technical standards. The authors ask: who will answer the hard questions about what "fair AI" actually means in practice, and do those answering have the democratic legitimacy to do so? Through rigorous institutional analysis, they identify three possible pathways and demonstrate why only one is fit for purpose.
Core Finding 1: European Standardisation Bodies Cannot Legitimately Define AI Ethics Unilaterally
The first pathway would have European standard-setting organisations (SSOs) such as CEN and CENELEC directly resolve normative questions about AI fairness, transparency, and accountability. The authors identify a fundamental democratic deficit in this approach: standardisation is an inherently technical discourse that systematically excludes non-expert stakeholders and the general public. If SSOs become the de facto arbiters of AI ethics, decisions of profound public importance, such as which demographic groups deserve equal treatment from AI systems or what counts as meaningful human oversight, would be made by unelected technical bodies without democratic mandate. This is incompatible with the EU AI Act's stated commitment to fundamental rights protection and participatory governance.
Core Finding 2: "Consensus Tracking" Creates a False Sense of Safety and Transfers Ethical Decisions to Enterprises
The second pathway relies on SSOs identifying and codifying existing normative consensus—essentially deriving AI standards from pre-existing social and ethical agreements. Through detailed historical analysis of one major European SSO's standardisation track record, the authors demonstrate that consensus tracking has been the preferred approach in practice. However, they expose three critical weaknesses: first, true normative consensus on hard AI ethics questions often does not yet exist; second, the process of claiming consensus can manufacture a false appearance of resolution while leaving the underlying problems unaddressed; and third—most critically for Taiwanese enterprises—unresolved normative questions are pushed down the institutional hierarchy to AI developers and deployers. The European Commission would essentially be outsourcing the definition of "ethical AI" to the very companies it is trying to regulate. For Taiwanese AI manufacturers and service providers supplying European markets, this means the burden of defining and documenting ethical AI decisions lands directly on your compliance team.
Core Finding 3: "Ethical Disclosure by Default" Is the Only Path That Balances Technical Operability with Democratic Legitimacy
The authors' proposed third pathway—Ethical Disclosure by Default—offers a workable solution that sidesteps both the democratic legitimacy problem of SSO-led ethics definition and the false safety of consensus tracking. Under this framework, standardisation bodies would create standards specifying minimum technical testing requirements, mandatory documentation obligations, and public reporting thresholds. Rather than prescribing what "fairness" means universally, these standards would ensure that the information necessary for local stakeholders—those with contextual knowledge and democratic legitimacy—to make informed ethical judgements is systematically gathered, recorded, and made accessible. This approach directly maps onto the documentation, impact assessment, and transparency reporting requirements of ISO/IEC 42001:2023, giving enterprises a concrete operational framework for compliance.
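To make the disclosure-by-default idea concrete, the sketch below models a minimal disclosure record for one AI system and a completeness check against its documentation and reporting obligations. The schema and field names are illustrative assumptions for this article, not a structure defined by the paper or by ISO 42001.

```python
from dataclasses import dataclass, field


@dataclass
class DisclosureRecord:
    """Illustrative ethical-disclosure record for one AI system."""
    system_name: str
    fairness_metrics: dict = field(default_factory=dict)  # e.g. {"demographic_parity_gap": 0.03}
    tested_groups: list = field(default_factory=list)     # demographic groups covered by testing
    human_oversight_notes: str = ""                       # how humans can intervene
    published: bool = False                               # made accessible to local stakeholders?


def missing_fields(rec: DisclosureRecord) -> list:
    """Return the disclosure obligations this record has not yet met."""
    gaps = []
    if not rec.fairness_metrics:
        gaps.append("fairness_metrics")
    if not rec.tested_groups:
        gaps.append("tested_groups")
    if not rec.human_oversight_notes:
        gaps.append("human_oversight_notes")
    if not rec.published:
        gaps.append("public_reporting")
    return gaps


rec = DisclosureRecord(
    system_name="credit-scoring-v2",
    fairness_metrics={"demographic_parity_gap": 0.03},
    tested_groups=["age", "gender"],
    human_oversight_notes="Loan officers review all automated declines.",
)
print(missing_fields(rec))  # → ['public_reporting']
```

The point of the framework, mirrored here, is that the standard does not decide whether a 0.03 parity gap is "fair"; it ensures the gap is measured, recorded, and published so that stakeholders with contextual legitimacy can make that judgement.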
What This Means for Taiwan AI Governance and Compliance Strategy
The implications of this research for Taiwanese enterprises are immediate and strategic. Three specific dimensions demand executive attention.
EU AI Act enforcement timelines are closer than most Taiwanese companies realise. The EU AI Act entered into force on 1 August 2024. Provisions banning unacceptable-risk AI systems became applicable from 2 February 2025. Critically, the comprehensive obligations for high-risk AI systems, covering conformity assessment, technical documentation, human oversight, and post-market monitoring, become fully applicable from 2 August 2026. Any Taiwanese company whose AI products or services are used by EU customers, or that supplies AI technology to EU-based enterprises, falls within the Act's extraterritorial jurisdiction. With less than 18 months until full high-risk applicability, the time to begin ISO 42001 implementation is now, not after the deadline has passed.
ISO 42001 certification is the most credible compliance foundation available to Taiwanese enterprises today. ISO/IEC 42001:2023, the world's first international standard for AI management systems, creates a documented, auditable governance framework that directly supports EU AI Act compliance. The paper's Ethical Disclosure by Default pathway operationally mirrors ISO 42001's requirements in Clause 6 (Planning, including AI risk assessment), Clause 8 (Operation, including operational controls and AI impact assessment documentation), and Clause 9 (Performance evaluation and transparency reporting). Taiwanese companies holding ISO 42001 certification demonstrate to EU regulators, customers, and partners that their AI governance is systematic, documented, and independently verifiable, not ad hoc and self-asserted.
Taiwan's AI Basic Act creates a converging domestic compliance requirement. Taiwan's draft Artificial Intelligence Basic Act (人工智慧基本法), approved by the Executive Yuan in 2024, establishes foundational principles of AI transparency, accountability, and human-centricity that parallel the EU AI Act's value framework. As the bill and its implementing regulations and sub-laws are developed, enterprises that have already established ISO 42001-aligned governance frameworks will be positioned to demonstrate compliance with domestic requirements as they crystallise, avoiding the cost of running two parallel compliance programmes simultaneously. The paper's argument that ethical decision-making should be localised to stakeholders with contextual legitimacy resonates directly with the AI Basic Act's emphasis on Taiwan-specific societal values and democratic accountability.
How Winners Consulting Services Co. Ltd. Helps Taiwanese Enterprises Navigate This Transition
積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) helps Taiwanese enterprises build AI management systems compliant with ISO 42001 and EU AI Act requirements, conduct AI risk classification assessments, and ensure AI applications align with Taiwan's AI Basic Act framework. In response to the specific governance challenges identified in this research, Winners offers three concrete service pathways:
- AI System Risk Classification and Ethical Disclosure Gap Assessment (Addressing Core Finding 1 & 2): Using the EU AI Act's four-tier risk classification framework (unacceptable risk, high risk, limited risk, minimal risk), Winners conducts a comprehensive audit of all AI systems in an enterprise's portfolio, identifies which systems require priority Ethical Disclosure by Default documentation, and maps existing practices against EU AI Act conformity requirements. This prevents enterprises from discovering compliance gaps at the point of regulatory audit rather than during proactive preparation.
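A portfolio audit of this kind starts with a triage pass over each system's intended use. The sketch below illustrates the four-tier logic with a hypothetical keyword lookup; the category sets are simplified placeholders, and real classification requires legal analysis of the Act's prohibited-practice list and Annex III use cases, not string matching.

```python
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four-tier risk classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative shortlists only; the Act's actual categories are far broader.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"employment", "credit", "education", "law enforcement"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake generation"}


def triage(use_case: str) -> RiskTier:
    """First-pass tier assignment for one AI system's intended use."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE   # banned since February 2025
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH           # full obligations from August 2026
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED        # transparency duties only
    return RiskTier.MINIMAL            # no specific obligations


print(triage("credit"))  # a credit-scoring use lands in the high-risk tier
```

Systems that triage into the high-risk tier are the ones whose Ethical Disclosure by Default documentation should be prioritised first.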
- ISO 42001 Gap Analysis and 90-Day Implementation Roadmap (Addressing Core Finding 3): Winners conducts a structured gap analysis against ISO/IEC 42001:2023, covering AI policy documentation, risk management processes, AI impact assessment procedures, transparency reporting mechanisms, and stakeholder communication frameworks. The output is a prioritised, timeline-specific implementation roadmap designed to achieve meaningful compliance infrastructure before the EU AI Act's August 2026 high-risk provisions take effect.
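The output of such a gap analysis can be thought of as a prioritised worklist keyed to ISO/IEC 42001 clauses. The sketch below shows one plausible way to structure and order it; the clause references, status labels, and 90-day milestones are illustrative assumptions, not Winners' actual methodology.

```python
from dataclasses import dataclass


@dataclass
class ClauseGap:
    """One gap-analysis finding mapped to an ISO/IEC 42001 clause."""
    clause: str        # clause reference (illustrative)
    requirement: str
    status: str        # "missing" | "partial" | "in place"
    target_day: int    # milestone within a 90-day roadmap


gaps = [
    ClauseGap("6.1", "AI risk assessment process", "missing", 30),
    ClauseGap("8.4", "AI impact assessment documentation", "partial", 60),
    ClauseGap("9.1", "Monitoring and transparency reporting", "missing", 90),
]

# Prioritise fully missing controls first, then order by roadmap milestone.
plan = sorted(gaps, key=lambda g: (g.status != "missing", g.target_day))
for g in plan:
    print(f"Day {g.target_day:>2}: clause {g.clause} - {g.requirement} ({g.status})")
```

Ordering missing controls ahead of partial ones reflects the roadmap's aim: stand up the compliance infrastructure that does not yet exist before polishing what already partially does.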
- Localised AI Ethics Decision Framework Design (Implementing the Paper's Core Proposition): Winners translates the paper's Ethical Disclosure by Default concept into an operational internal governance structure—whether an AI Ethics Review Committee, an algorithmic impact assessment workflow, or a structured stakeholder consultation process. This framework satisfies ISO 42001's Clause 5 (Leadership) and Clause 6 (Planning) requirements while embedding the localised, context-sensitive ethical decision-making that both the EU AI Act and Taiwan's AI Basic Act demand.
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-aligned management framework within 90 days—ready for EU AI Act high-risk provisions taking full effect in August 2026.