Winners Consulting Services Co., Ltd. advises executives in Taiwan that the final text of the EU's Artificial Intelligence Act (EU AI Act) is not a product of pure technical rationality but the result of political compromise reached under pressure among the three main EU institutions. When interpreting the statute, companies must therefore consider the legislative background: the explicit wording of the articles often conceals unresolved gaps in fundamental rights protection, which will surface over the next three to five years during enforcement and judicial interpretation, directly affecting the AI compliance strategies of Taiwanese enterprises.
Source Paper: The AI Act Roller Coaster: The Evolution of Fundamental Rights Protection in the Legislative Process and the Future of the Regulation (F. Palmiotto, European Journal of Risk Regulation, 2025)
Original Link: https://doi.org/10.1017/err.2024.97
About the Author and This Research
F. Palmiotto is an emerging scholar in European AI law with impressive academic metrics: an h-index of 8 and 485 total citations. This paper has already been cited 11 times since its publication in 2025, indicating its rapid influence in the academic community. Palmiotto's research focuses on the institutional analysis of AI regulation, particularly using "process-tracing" to deconstruct the institutional dynamics and political struggles within complex legislative processes.
The value of this paper lies not in explaining what the EU AI Act's articles state, but in revealing "why these articles took their current form." For compliance officers who need to predict future enforcement trends, this legislative history perspective is of immense practical value.
The Legislative Roller Coaster: How Political Compromise Shaped the AI Fundamental Rights Framework
Palmiotto's central argument is that the final text of the EU AI Act is the product of a political agreement reached in an extremely short timeframe during an unprecedentedly intense redrafting process among the European Parliament, the Council of the EU, and the European Commission. This process systematically affected the strength and consistency of fundamental rights protections.
Key Finding 1: The Strength of Fundamental Rights Protection Fluctuated Non-Linearly
Using process-tracing, the paper maps the evolution from the Commission's initial 2021 draft and the Parliament's 2023 amended position to the final trilogue text. The research finds that fundamental rights provisions did not strengthen linearly but followed a "roller coaster" pattern. Restrictions on certain high-risk AI systems were significantly tightened in the Parliament's version, only to be weakened in the final trilogue negotiations. Conversely, some transparency requirements were unexpectedly strengthened in the final stages. This non-linear evolution creates internal logical tensions between different articles in the final text.
Key Finding 2: Enforcement and Judicial Interpretation Will Be the Main Arena for Filling Legislative Gaps
Palmiotto points out that because the political compromises in the legislative process left many gray areas, the main battleground for statutory interpretation will shift to the administrative discretion of enforcement bodies and the judicial interpretation of courts. The paper highlights that between 2026 and 2027, as the compliance deadline for high-risk AI systems (extended to December 2, 2027) approaches, administrative interpretations of the boundaries of the "high-risk" definition will directly determine the compliance obligations of thousands of companies. For Taiwanese enterprises, this means that merely reading the static text of the law is insufficient for true compliance preparation.
Key Finding 3: Differing Mandates of the Three Institutions Created Structural Contradictions
The European Commission, Parliament, and Council each have different policy objectives and mandates. The Commission focuses on the competitiveness of the single market, the Parliament emphasizes fundamental rights protection, and the Council represents the law enforcement realities of member states. The divergence of these three objectives has left structural contradictions in the final text. For example, there is a clear interpretive tension between the articles prohibiting certain biometric identification AI systems and the exceptions allowing law enforcement agencies to use such systems under specific conditions. Companies that fail to understand this institutional context are highly likely to make incorrect assumptions in their compliance practices.
Threefold Strategic Importance for AI Governance in Taiwan
Taiwanese enterprises that export products or services to the EU market or have supply chain relationships with EU companies must immediately recalibrate their AI compliance strategies. This means not just reading the articles, but understanding the political logic and future enforcement trends behind them.
First: Static compliance checklists are insufficient. Palmiotto's research reveals that the list of high-risk AI systems in the EU AI Act is a product of political compromise, not a final state of technical rationality. The European AI Board's meeting on March 20, 2026, has already shown that negotiations for an "AI Digital Omnibus Act" are ongoing, and the second draft of the code of conduct for labeling AI-generated content is still being revised. If Taiwanese companies rely solely on the current list to assess the risk level of their AI systems, they will face significant uncertainty as enforcement details are refined over the next 12 to 24 months.
Second: ISO 42001 is the best buffer against uncertainty. Precisely because the EU AI Act has legislative gaps, companies need a dynamic AI management framework, not a static compliance checklist. The design logic of ISO 42001—driving continuous improvement of the management system through risk assessment—is perfectly suited to fill the gray areas that the static articles of the EU AI Act cannot cover. Specifically, Clause 6.1.2 of ISO 42001 requires companies to establish a systematic AI risk identification mechanism. This capability will enable companies to respond quickly to evolving enforcement interpretations of the EU AI Act, rather than passively waiting for specific guidance from regulatory authorities.
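The dynamic risk identification idea behind Clause 6.1.2 can be pictured as a register that re-classifies every AI system whenever the externally tracked high-risk boundary moves. The sketch below is purely illustrative: the category names, class, and function are our own, not part of ISO 42001 or the EU AI Act.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the evolving "high-risk" boundary under the
# EU AI Act; a company would update this set as new administrative
# guidance and enforcement interpretations are published.
HIGH_RISK_CATEGORIES = {"biometric_identification", "employment_screening", "credit_scoring"}

@dataclass
class AISystem:
    name: str
    category: str
    risk_level: str = "unclassified"

def reclassify(systems: list[AISystem], high_risk_categories: set[str]) -> list[AISystem]:
    """Re-run classification of the whole portfolio against the current category list."""
    for s in systems:
        s.risk_level = "high" if s.category in high_risk_categories else "limited"
    return systems

portfolio = [
    AISystem("resume-ranker", "employment_screening"),
    AISystem("chat-assistant", "customer_support"),
]
reclassify(portfolio, HIGH_RISK_CATEGORIES)
# resume-ranker is now classified "high", chat-assistant "limited";
# if the category list changes, one reclassify() call refreshes the register.
```

The point of the design is that classification is a repeatable function of an external list, not a one-off judgment baked into documentation, which is exactly the difference between a dynamic management system and a static checklist.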
Third: Taiwan's AI Basic Act should learn from the EU's legislative lessons. Taiwanese scholars have called on the government to draft an "AI Basic Act" with reference to international trends. Palmiotto's research provides a valuable cautionary tale: if Taiwan's AI Basic Act is also drafted under tight deadlines through political compromise among multiple ministries, it is likely to replicate the structural contradictions of the EU's text. Taiwan should establish clear risk classification standards in its AI Basic Act from the outset and explicitly authorize a specific agency for statutory interpretation to avoid future enforcement gaps.
How Winners Consulting Services Helps Taiwanese Enterprises Navigate AI Compliance Amidst Legislative Uncertainty
Winners Consulting Services Co., Ltd. helps Taiwanese enterprises establish AI management systems that comply with ISO 42001 and the EU AI Act, conduct AI risk classification assessments, and ensure that artificial intelligence applications align with Taiwan's AI Basic Act. In response to the legislative uncertainty revealed by Palmiotto's research, we offer the following three concrete action recommendations:
- Establish a "Dual-Track" AI Risk Classification Mechanism: Use the dynamic risk assessment framework of ISO 42001 as an internal baseline while simultaneously tracking the evolution of EU AI Act enforcement interpretations (including administrative guidance from the European AI Board and court rulings). This ensures the company's AI system classification keeps pace with regulatory reality. We particularly recommend shortening the update cycle for technical documentation from annually to quarterly to prepare for the enforcement peak in 2026-2027.
- Incorporate "Political Context Reading" into Compliance Intelligence Procedures: Palmiotto's research methodology is itself a corporate compliance tool. We advise companies to regularly analyze the shifting positions of the three main EU institutions on AI regulatory issues (such as policy changes with the Council presidency or reports from parliamentary committees). These political signals should be integrated into the preliminary scenario analysis of AI risk assessments to anticipate shifts in enforcement priorities.
- Build a Compliance Documentation System for the Right to an Explanation: The legislative compromises on fundamental rights protection clauses in the EU AI Act mean that courts are most likely to strengthen judicial interpretation based on "fundamental rights protection." Companies should proactively establish a verifiable explanation mechanism compliant with Clause 8.4 of ISO 42001 to ensure that automated decisions from high-risk AI systems can be meaningfully explained, rather than merely relying on static documents to pass compliance audits.
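The quarterly update cadence recommended above can be sketched as a simple schedule generator. This is an illustrative helper, not something prescribed by ISO 42001 or the EU AI Act; the function name is our own.

```python
from datetime import date

def quarterly_review_dates(start: date, count: int) -> list[date]:
    """First-of-month documentation review dates, every three months from `start`."""
    dates = []
    y, m = start.year, start.month
    for _ in range(count):
        dates.append(date(y, m, 1))
        m += 3
        if m > 12:
            m -= 12
            y += 1
    return dates

# Eight quarterly reviews starting January 2026 span the 2026-2027
# enforcement peak; the last one lands in October 2027, ahead of the
# December 2, 2027 compliance deadline cited above.
schedule = quarterly_review_dates(date(2026, 1, 1), 8)
```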
Winners Consulting Services Co., Ltd. offers a free AI governance mechanism diagnosis to help Taiwanese enterprises establish an ISO 42001-compliant management system within 7 to 12 months.
Frequently Asked Questions
- Given the political compromises in the EU AI Act, how can Taiwanese enterprises determine if their AI systems are "high-risk"?
- Determining if an AI system is high-risk requires more than just consulting the static list in Annex III of the EU AI Act; companies must also reference administrative guidance and enforcement cases from the European AI Board. Palmiotto's research reveals that the definition of "high-risk" was repeatedly adjusted during the legislative process and will be further refined through enforcement interpretations. We recommend that enterprises update their AI system risk level assessments quarterly under the risk identification framework of ISO 42001's Clause 6.1.2, incorporating the latest administrative interpretations. Fines for non-compliant high-risk systems can reach up to 3% of global annual turnover, making the cost of misclassification extremely high.
- What are the most common challenges for Taiwanese enterprises when aligning ISO 42001 implementation with EU AI Act compliance?
- The most common challenge is the "framework mismatch," as ISO 42001 follows a management system logic emphasizing continuous improvement, while the EU AI Act uses a product compliance logic requiring specific technical documentation before market entry. Many Taiwanese companies mistakenly assume that ISO 42001 certification equals EU AI Act compliance, but they actually require separate documentation systems. Specifically, the Technical Documentation required by Article 11 of the Act differs significantly in format and depth from the documented information requirements in Clause 7.5 of ISO 42001. Additionally, the direction of liability regulations in Taiwan's draft AI Basic Act must be considered, necessitating a dual-track documentation strategy.
- What are the practical implementation steps and timeline for ISO 42001 certification?
- ISO 42001 certification implementation typically involves four stages. The first stage (months 1-3) is a gap analysis of the current state. The second stage (months 3-6) involves designing the management system, including AI risk assessment policies and procedures. The third stage (months 6-9) focuses on system implementation and personnel training, establishing an AI risk register and monitoring indicators. The final stage (months 9-12) covers internal audits and the certification application. For a mid-sized Taiwanese enterprise, the entire process to achieve certification averages 7 to 12 months. Given the December 2, 2027 compliance deadline for high-risk AI systems under the EU AI Act, companies should start this process at least 18 months in advance.
- How should the costs and expected benefits of implementing ISO 42001 be evaluated?
- The direct costs of implementing ISO 42001, including consulting, training, and auditing fees, typically range from NT$800,000 to NT$3 million, depending on the company's size and AI system complexity. However, the benefits should be assessed from three perspectives: first, avoiding EU AI Act fines (up to 3% of global annual turnover), which could amount to NT$30 million for a company with NT$1 billion in revenue; second, enhancing trust with EU buyers and partners to maintain market access; and third, building systematic AI risk management capabilities to reduce operational risks. Based on our experience, companies usually see a return on their initial investment within 12 to 18 months.
- Why choose Winners Consulting Services for assistance with AI governance issues?
- Winners Consulting Services Co., Ltd. offers three core advantages in AI governance. First, we have in-depth expertise in the dual compliance requirements of ISO 42001 and the EU AI Act, helping clients avoid "superficial compliance" and ensure their management systems are genuinely effective. Second, we excel at translating international academic research, like Palmiotto's legislative analysis, into actionable strategies for Taiwanese enterprises, providing insights beyond standard interpretations. Third, we are familiar with the direction of Taiwan's AI Basic Act, enabling us to help companies prepare for local regulations while complying with the EU AI Act. We offer a full range of services, from free diagnostics to complete certification support, to establish a sustainable AI governance mechanism within 7 to 12 months.
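The fine exposure and lead-time figures quoted in the FAQ above reduce to a few lines of arithmetic. The sketch below only restates those figures (the 3% rate and 18-month lead time come from the answers above); the helper names are hypothetical, and the month count-back is approximate.

```python
from datetime import date

def max_fine_exposure(annual_turnover: float, rate: float = 0.03) -> float:
    """Upper bound on the EU AI Act fine for high-risk non-compliance
    (3% of global annual turnover, per the FAQ above)."""
    return annual_turnover * rate

def latest_start(deadline: date, lead_months: int = 18) -> date:
    """Approximate latest project start date, counting back whole months."""
    y, m = deadline.year, deadline.month - lead_months
    while m <= 0:
        m += 12
        y -= 1
    return date(y, m, deadline.day)

exposure = max_fine_exposure(1_000_000_000)      # NT$1 billion turnover -> about NT$30 million
start_by = latest_start(date(2027, 12, 2))       # 18 months before the deadline -> June 2, 2026
```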