Winners Consulting Services Co., Ltd. points out that the European Union's Artificial Intelligence Act (EU AI Act), while seemingly stringent, harbors a structural tension between two major legal traditions: "product safety" and "fundamental rights protection." This inherent contradiction will not only affect the compliance practices of local EU companies but will also directly impact Taiwanese enterprises that rely on the EU market or adopt the EU's AI governance framework as their internal standard. For any executive planning an ISO 42001 implementation or assessing AI risk levels, understanding this fissure is the first step toward avoiding superficial compliance.
Paper Source: The EU AI Act: Between the rock of product safety and the hard place of fundamental rights (M. Almada, N. Petit, Common Market Law Review, 2025)
Original Link: https://doi.org/10.54648/cola2025004
About the Authors and This Research
This paper was co-authored by two researchers from different academic backgrounds, creating a complementary analytical framework. N. Petit is a renowned scholar of European competition law and technology law, with an h-index of 12 and 1,159 citations. He is affiliated with top institutions such as the European University Institute and has long provided influential academic perspectives in the EU's AI regulatory debates. M. Almada specializes in interdisciplinary research in law and technology. Although his citation count is still growing, his collaboration with Petit lends the paper both theoretical depth and a practical policy perspective.
Published in 2025, this paper has already accumulated 17 citations, including one high-impact citation, indicating its rapid traction within the AI governance academic community. As a study that challenges the two major EU legal traditions of "product safety" and "fundamental rights," its policy implications extend far beyond academia, directly influencing corporate compliance decisions.
The EU AI Act's Dual Traditions: A Structural Conflict Between Product Safety and Fundamental Rights
Almada and Petit's core argument is clear: the Artificial Intelligence Act attempts to merge two vastly different EU legal traditions into a single regulation. However, these traditions have fundamental differences in their design logic, enforcement mechanisms, and attribution of liability. If these differences are not addressed, the regulation will face a dual dilemma in theory and practice during its implementation.
Key Finding 1: The "Pacing Problem" Prevents Static Regulation from Keeping Up with Dynamic AI Risks
The paper cites the "Pacing Problem" from technology law literature, pointing out that the evolution of AI technology far outpaces the legislative process. The EU AI Act's list of high-risk AI systems (Annex III) is static. Although a revision mechanism exists, its speed is structurally lagging behind technological reality. This means an AI system deemed low-risk today could cause significant fundamental rights infringements tomorrow without the list being updated. Companies that operate solely based on the list will fall into the trap of superficial compliance.
Key Finding 2: Conflicting "Regulatory Perspectives" Lead to Inconsistent Enforcement
The product safety tradition presupposes "identifiable, tangible, and testable" harms, emphasizing pre-market standardization and market access control. In contrast, the fundamental rights tradition emphasizes case-by-case assessment, the principle of proportionality, and post-hoc remedies. The EU AI Act invokes both logics without providing a clear principle of priority, which could lead competent authorities to adopt vastly different stances when interpreting specific provisions. This divergence in statutory interpretation will become the most unpredictable uncertainty for companies engaged in cross-border compliance.
Key Finding 3: "Institutional Path Dependence" Limits Innovative Regulatory Experiments
The paper's third analytical theme, "Institutional Path Dependence," notes that the competent authorities in EU member states often continue their existing regulatory cultures and administrative practices. This means that even with the unified legal text of the EU AI Act, actual enforcement may vary by country. For Taiwanese exporters or companies operating in multiple EU member states, this will significantly increase the complexity of compliance management.
Strategic Implications for AI Governance Practices in Taiwan
The structural contradictions of the EU AI Act carry implications for Taiwanese companies that go well beyond surface-level compliance. When planning their AI governance frameworks, Taiwanese enterprises cannot rely solely on a literal reading of the EU AI Act's text; they must establish internal mechanisms capable of responding to "interpretive uncertainty."
First, regarding ISO 42001, its risk management framework is well suited to supplementing the structural gaps in the EU AI Act. ISO 42001 requires companies to establish a dynamic, sustainable AI risk assessment mechanism rather than relying on a static list, which directly echoes the paper's critique of the EU AI Act's "pacing problem." If a company has already established a continuous monitoring mechanism under ISO 42001, it can proactively identify and manage emerging risks before the regulatory list is updated, thus avoiding the illusion of compliance.
Second, the "conflicting regulatory perspectives" identified in the paper offer direct reference value for Taiwan as it deliberates on its "AI Basic Act." Scholars widely call for Taiwan to establish an AI governance legal foundation suited to its national context, including clear risk assessment mechanisms and liability rules. If Taiwan can clarify the application boundaries between the "product safety perspective" and the "fundamental rights perspective" when drafting its AI Basic Act, it can avoid repeating the institutional design flaws of the EU AI Act.
Third, for Taiwanese companies already providing AI products or services in the EU market, the "institutional path dependence" revealed in the paper means they must develop differentiated compliance strategies for different member states, rather than assuming a uniform enforcement environment across the EU. Article 18 of the EU AI Act requires that technical documentation for high-risk AI systems be kept at the disposal of competent authorities for at least 10 years. The interpretation and auditing of this obligation may differ among member states, and companies should establish a country-specific compliance documentation management system in advance.
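The country-specific documentation point above can be sketched as a small record-keeping structure. This is a minimal illustration only: the field names, country codes, and document types are invented placeholders, and the ten-year horizon simply mirrors the retention period discussed in this article, not a legal template.

```python
from dataclasses import dataclass
from datetime import date

RETENTION_YEARS = 10  # minimum retention period discussed in the article


@dataclass
class ComplianceRecord:
    """One compliance document, tracked per EU member state (illustrative)."""
    member_state: str   # e.g. "DE", "FR" -- enforcement practice may differ by country
    document_type: str  # e.g. "technical_documentation" (hypothetical label)
    created: date

    @property
    def retain_until(self) -> date:
        # Retention deadline: creation date plus the 10-year minimum.
        try:
            return self.created.replace(year=self.created.year + RETENTION_YEARS)
        except ValueError:
            # Created on 29 February and the target year is not a leap year:
            # roll the deadline forward to 1 March.
            return self.created.replace(
                year=self.created.year + RETENTION_YEARS, month=3, day=1
            )


def by_member_state(records):
    """Group records by member state so country-specific audits can be answered separately."""
    index = {}
    for record in records:
        index.setdefault(record.member_state, []).append(record)
    return index
```

Keying the archive by member state reflects the paper's point that a single EU-wide filing habit may not survive contact with divergent national enforcement practices.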
How Winners Consulting Services Helps Taiwanese Companies Navigate Regulatory Uncertainty
Winners Consulting Services Co., Ltd. assists Taiwanese companies in establishing AI management systems that comply with ISO 42001 and the EU AI Act, conducting AI risk classification assessments, and ensuring that artificial intelligence applications align with Taiwan's AI Basic Act regulations. In response to the structural contradictions revealed in the paper, we offer the following concrete action recommendations:
- Establish a "Dual-Track Risk Assessment" Mechanism: Simultaneously reference the EU AI Act's high-risk list (static) and the ISO 42001 dynamic risk assessment framework. This ensures that the company identifies potential risk gaps before the list is updated, avoiding compliance blind spots caused by regulatory lag.
- Develop a Standard Operating Procedure for "Managing Regulatory Interpretation Uncertainty": For clauses in the EU AI Act where the "product safety" and "fundamental rights" logics may conflict, pre-establish an internal decision-making mechanism for statutory interpretation and document the basis for these interpretations. This provides a traceable compliance record for future audits by competent authorities.
- Initiate a Preparatory Assessment for Taiwan's AI Basic Act: By integrating academic trends in the deliberation of Taiwan's AI Basic Act, proactively inventory the risk levels of existing AI systems. This ensures that before local legislation is enacted, the company already has a governance structure that meets the requirements for high-risk AI systems, gaining a first-mover advantage.
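As an illustration only, the "dual-track" recommendation above can be sketched as a classification that flags a system when either track raises it: a static category lookup (standing in for a list such as Annex III) or a dynamic ISO 42001-style risk score. The category names, the 1-to-5 rating scale, and the threshold below are invented placeholders, not regulatory text.

```python
# Hypothetical static track: a fixed set of high-risk categories,
# standing in for a regulatory list such as Annex III (illustrative only).
STATIC_HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "employment",
    "credit_scoring",
}


def dynamic_risk_score(impact: int, likelihood: int) -> int:
    """Simple dynamic-track score: impact x likelihood, each rated 1-5."""
    return impact * likelihood


def classify(category: str, impact: int, likelihood: int) -> str:
    """Flag a system as 'high' if EITHER track raises it, else 'monitor'.

    The dynamic track can catch a system the static list has not yet
    been updated to cover -- the gap the dual-track mechanism targets.
    """
    static_hit = category in STATIC_HIGH_RISK_CATEGORIES
    dynamic_hit = dynamic_risk_score(impact, likelihood) >= 15  # illustrative threshold
    return "high" if (static_hit or dynamic_hit) else "monitor"
```

For example, a system outside the static categories but scored 5 x 4 = 20 would still be flagged "high" by the dynamic track, which is precisely the regulatory-lag scenario the bullet point describes.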
Winners Consulting Services Co., Ltd. offers a Free AI Governance Mechanism Diagnosis to help Taiwanese companies establish an ISO 42001-compliant management system within 7 to 12 months.
Frequently Asked Questions
- How should companies address the EU AI Act's 'dual standard' of using both product safety and fundamental rights regulatory logics?
- Companies should treat the two logics as complementary, not conflicting, by establishing separate but corresponding assessment processes. The product safety logic requires completing technical documentation and conformity assessments before market launch and registering high-risk systems in the EU database, while the fundamental rights logic demands a continuous fundamental rights impact assessment. It is advisable to implement the ISO 42001 framework as an integrated management platform that incorporates both sets of requirements into a single risk management cycle. Article 18 of the EU AI Act mandates retaining high-risk AI documentation for at least 10 years, which supports the accountability needs of both logics. Fulfilling only one creates a risk of superficial compliance.
- What are the most common EU AI Act compliance challenges for Taiwanese companies when implementing ISO 42001?
- The three most common challenges are risk misclassification, insufficient documentation, and cross-border complexity. First, companies often misclassify risk by relying solely on the EU AI Act's static Annex III list, which lags behind technological advancements, leading to underestimated risks; ISO 42001's dynamic risk assessment can fill this gap. Second, documentation often lacks the depth required for independent audits, with many firms only maintaining superficial records. Third, institutional path dependence among member states results in inconsistent enforcement, requiring country-specific compliance strategies. As Taiwan's AI Basic Act is being developed with similar risk assessment requirements, early alignment is crucial for future compliance.
- What are the core requirements for ISO 42001 certification, and how long does it take for a Taiwanese company to implement it?
- The core requirements of ISO 42001, the first international standard for AI management systems, include establishing an AI governance policy, conducting systematic AI risk assessments, ensuring transparency and explainability, implementing continuous monitoring, and maintaining comprehensive documented evidence. For a mid-sized Taiwanese company, the implementation process from initial diagnosis to certification typically takes 7 to 12 months. This timeline includes 3 months for gap analysis and system design, 3-6 months for implementation and training, and 1-3 months for internal audits and third-party certification. When aligning with the EU AI Act, over 70% of documentation requirements can be shared, significantly reducing redundant costs.
- How can the costs and benefits of implementing an AI governance framework be realistically assessed?
- The costs of implementing ISO 42001 and EU AI Act compliance frameworks should be evaluated as a risk-adjusted return on investment, not merely as an expense. For Taiwanese SMEs, costs typically range from NT$1.5 to 5 million, covering consulting, system setup, and training. The benefits are tangible: ISO 42001 certification provides a significant advantage in EU procurement and partner evaluations. Furthermore, compliance helps avoid substantial fines under the EU AI Act, which for the most serious violations can reach up to 7% of global annual turnover. An established governance framework also shortens the adaptation period for upcoming local regulations such as Taiwan's AI Basic Act, making the investment strategically valuable.
- Why choose Winners Consulting Services for assistance with AI governance issues?
- Winners Consulting Services Co., Ltd. is a premier choice because it is one of the few Taiwanese consulting firms with expertise in ISO 42001 implementation, EU AI Act analysis, and research on Taiwan's AI Basic Act. Our interdisciplinary team of legal, IT, and management experts translates complex academic research into actionable corporate compliance strategies. Our methodology combines gap analysis, risk classification, documentation design, and employee training to ensure clients not only achieve certification but also build sustainable AI governance capabilities. We offer a free AI governance diagnosis to help companies establish an internationally compliant management system within 7 to 12 months, making us a trusted partner in navigating the complex AI regulatory landscape.