Winners Consulting Services Co., Ltd. points out that a 2025 EU legal study, already cited 10 times, reveals a critical reality: the right to an explanation under the GDPR faces numerous unresolved legal issues in its practical enforcement by courts and supervisory authorities, and the EU AI Act has inherited these ambiguities wholesale. If Taiwanese companies do not proactively establish operable explainability mechanisms, they will face dual compliance risks as the EU AI Act's obligations begin to apply.
Source Paper: The right to an explanation in practice: insights from case law for the GDPR and the AI Act (Ljubiša Metikoš, J. Ausloos, Law, Innovation and Technology, 2025)
Original Link: https://doi.org/10.1080/17579961.2025.2469349
About the Authors and This Research
The first author, Ljubiša Metikoš, is a legal scholar specializing in AI and data protection law with an h-index of 3 and 26 total citations, focusing on the protection of individual rights in automated decision-making. The second author, J. Ausloos, is a prominent scholar in European data protection law with an h-index of 14 and 802 total citations. His research covers the right to informational self-determination, the right to be forgotten, and algorithmic accountability under the GDPR, giving him significant visibility in EU regulatory academic circles.
The combination of these two authors is highly persuasive: Ausloos's extensive academic background provides a deep foundation in GDPR legal interpretation, while Metikoš brings a fresh perspective with a comparative analysis of the EU AI Act. This paper uses actual cases from various EU judicial bodies and Data Protection Authorities (DPAs) as its analytical material, making it an empirical legal study rather than a purely theoretical discourse—which is precisely why it is particularly valuable for corporate compliance practice.
Notably, the European Data Protection Board (EDPB) has recently identified the challenges of fully implementing the "right to be forgotten" and protecting privacy in AI-generated content as key priorities for its 2026-2027 work program, aligning closely with this paper's research focus. This indicates that the paper's findings are not just academic discussions but an early signal of upcoming regulatory enforcement actions.
The Practical Gap in the Right to Explanation: Three Core Issues Revealed by Case Law
The core contribution of this research is its systematic organization of the three major dimensions of dispute in the enforcement of GDPR's right to explanation, using real case law: scope, content, and the balancing exercise. The study's conclusion clearly shows that the EU AI Act's legislative design has repeated GDPR's ambiguous path, and existing GDPR case law is the best tool to fill the EU AI Act's interpretive void.
Core Finding 1: The "Scope" of the Right to Explanation Remains Unclear
After analyzing actual rulings from EU courts and DPAs, the research finds significant discrepancies among national supervisory authorities regarding which automated decision-making scenarios the "right to an explanation" applies to. For example, some DPAs consider the output of credit scoring algorithms to be "fully automated decision-making" requiring an explanation, while others favor a narrower interpretation. Although the EU AI Act's text includes a similar right to explanation, it also fails to provide a uniform and operational definition of the scope of "high-risk AI systems." For Taiwan's export-oriented companies, this means they may face entirely different enforcement standards when entering different EU member states.
Core Finding 2: Lack of a Unified Standard for the Substantive "Content" of an Explanation
The study finds that DPAs are inconsistent in their rulings on what specific content a "meaningful explanation" should include. Some cases require companies to disclose the main feature weights of a model, while others only demand an explanation of the general logic behind a decision's outcome. This problem also exists within the EU AI Act framework: the regulation requires high-risk AI systems to have transparency and explainability, but it does not specify minimum standards for "Explainable AI" technical methods. Companies that rely solely on static documentation are highly likely to fail to meet the substantive requirements of enforcement authorities.
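To make the two disputed levels of explanation concrete, here is a minimal sketch. It assumes a simple linear credit-scoring model; every feature name, weight, and threshold is hypothetical and drawn from no real DPA case. It contrasts an explanation of the general decision logic with a disclosure of per-feature contributions:

```python
# Hypothetical linear credit-scoring model. Weights, features and the
# threshold are illustrative assumptions only.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.3, "late_payments": -0.25}
THRESHOLD = 0.1

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's (normalized) features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain_general(applicant: dict) -> str:
    # The narrower reading some DPAs accept: only the overall logic.
    decision = "approved" if score(applicant) >= THRESHOLD else "rejected"
    return (f"Application {decision}: the score combines income, "
            "debt ratio and payment history against a fixed threshold.")

def explain_feature_weights(applicant: dict) -> dict:
    # The broader reading: each feature's contribution to this decision.
    return {f: round(WEIGHTS[f] * applicant[f], 3) for f in WEIGHTS}

applicant = {"income": 0.8, "debt_ratio": 0.5, "late_payments": 0.0}
print(explain_general(applicant))
print(explain_feature_weights(applicant))
```

A DPA satisfied with the first function's output would accept far less disclosure than one demanding the second, which is exactly the inconsistency the case law reveals.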
Core Finding 3: The Challenge of Balancing the Obligation to Explain with Trade Secrets
Multiple cases show that when data subjects request an explanation, companies often cite the "protection of trade secrets" as grounds for refusal. Courts have applied inconsistent discretionary standards in such cases, and while the EU AI Act pairs its right to an explanation (Article 86) with confidentiality provisions (Article 78), the boundary between the two is equally vague. Metikoš and Ausloos argue that the balancing logic from existing GDPR case law can provide an interpretive framework for similar conflicts under the EU AI Act, a crucial insight for companies building proactive compliance mechanisms.
Implications for AI Governance in Taiwan: Looking Beyond the Letter of the Law
The core takeaway for Taiwanese business leaders from this paper is that compliance with the EU AI Act cannot be based on the legal text alone; it must be accompanied by tracking the evolution of case law. This is especially important for Taiwanese companies that have obtained or are applying for ISO 42001 certification.
Intersection with ISO 42001: ISO 42001 Clause 6.1.2 requires organizations to establish an AI risk assessment process, where "explainability" is a core metric for evaluating high-risk AI systems. However, this paper's research clearly shows that the substantive content of "explainability" is not a single standard—it varies with the interpretations of different supervisory authorities. Therefore, ISO 42001 compliance documentation should not be static but must incorporate a dynamic updating mechanism that continuously tracks EU enforcement cases.
Intersection with the EU AI Act: The EU AI Act entered into force in August 2024, with most obligations for high-risk AI systems applying from August 2026 and some extending to August 2027. The GDPR cases studied in this paper are the most likely precedents that EU AI Act enforcement authorities will reference when interpreting the right to an explanation. Taiwanese companies with business in or clients from the EU should begin establishing a "response mechanism for the obligation to explain" now, rather than waiting for the official implementation guidelines of the EU AI Act to be released.
Intersection with Taiwan's AI Basic Act: Taiwan's Artificial Intelligence Basic Act (AI Basic Act) has established transparency and accountability as fundamental principles of AI governance. The "enforcement gap in the obligation to explain" revealed by this paper is likely to emerge in a similar form as Taiwan's local regulatory framework matures. Taiwanese companies that can build internal mechanisms by referencing EU enforcement practices early on will gain a first-mover advantage as local regulations become stricter.
A Constructive Critique: It is worth noting that this paper primarily uses cases from EU courts and DPAs, with little focus on the Asia-Pacific regulatory environment. When applying these insights, Taiwanese companies need to additionally assess the differences between Taiwan's Personal Data Protection Act (PDPA) and the EU framework, as well as the interpretive uncertainty arising from Taiwan's current lack of a unified coordination mechanism like the EDPB. This is precisely why Taiwanese companies need local professional consultants to assist with "cross-framework comparative analysis."
Winners Consulting Services Helps Taiwanese Companies Build Operable Mechanisms for the Obligation to Explain
Winners Consulting Services Co., Ltd. assists Taiwanese companies in establishing AI management systems that comply with ISO 42001 and the EU AI Act, conducting AI risk classification and assessment, and ensuring that artificial intelligence applications align with the regulations of Taiwan's AI Basic Act. To address the enforcement gap in the obligation to explain revealed by this paper, Winners Consulting Services offers the following specific assistance:
- Current State Assessment of the Obligation to Explain: Systematically review a company's existing AI applications to identify which fall under the GDPR/EU AI Act definitions of automated decisions or high-risk AI systems, and assess the adequacy of current explainability documentation against EDPB enforcement standards to identify gaps.
- Establishment of a Dynamic Case Law Tracking Mechanism: In accordance with ISO 42001 Clause 9 (Performance evaluation), help companies establish a mechanism to continuously monitor the rulings of EU member state DPAs and court precedents, ensuring that explainability standards are dynamically updated as regulatory practices evolve, rather than remaining as static compliance documents.
- Design of a Balancing Framework for Trade Secrets and the Obligation to Explain: Referencing the balancing logic from existing GDPR case law, assist companies in proactively designing a "tiered explanation mechanism." This allows for the protection of the confidentiality of core algorithms while building substantive explanatory capabilities that meet regulatory expectations, thereby reducing the risk of enforcement investigations.
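As a rough sketch of what such a tiered explanation mechanism could look like in practice, the snippet below filters one decision record down to audience-specific views. The tier names, audiences, and field names are illustrative assumptions, not requirements drawn from the paper, the GDPR, or the EU AI Act:

```python
# Illustrative tiered-explanation mechanism: tiers and fields are
# hypothetical, not prescribed by any regulation.
TIERS = {
    "data_subject": {          # meaningful, non-technical explanation
        "decision_outcome", "main_factors", "how_to_contest",
    },
    "regulator": {             # adds model-level detail under confidentiality
        "decision_outcome", "main_factors", "how_to_contest",
        "feature_weights", "model_documentation",
    },
}

def build_explanation(record: dict, audience: str) -> dict:
    """Return only the fields this audience's tier may see, keeping
    trade-secret-adjacent detail out of the data-subject tier."""
    allowed = TIERS[audience]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "decision_outcome": "rejected",
    "main_factors": ["debt ratio", "payment history"],
    "how_to_contest": "appeal@example.com",
    "feature_weights": {"debt_ratio": -0.3},   # confidential detail
    "model_documentation": "internal document reference",
}
print(build_explanation(record, "data_subject"))
```

The design point is that confidentiality is enforced by the tier definition itself, so the same underlying record can answer a data subject's request and a supervisory authority's investigation without ad-hoc redaction.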
Winners Consulting Services Co., Ltd. offers a Free AI Governance Mechanism Diagnosis to help Taiwanese companies establish an ISO 42001-compliant management system within 7 to 12 months.
Learn More About AI Governance Services →
Apply for a Free Mechanism Diagnosis Now →

Frequently Asked Questions
- What are the substantive differences between the right to explanation under GDPR and the EU AI Act? Do Taiwanese companies need to address them separately?
- The two are legally distinct frameworks, but as the 2025 study by Metikoš and Ausloos shows, the EU AI Act's provisions on the right to explanation largely adopt the GDPR's ambiguous design, and enforcement authorities are expected to draw on existing GDPR case law as an interpretive basis. For Taiwanese companies, an integrated compliance framework is recommended: by building a verifiable explanation mechanism on ISO 42001's operational controls (Clause 8 and the Annex A transparency controls) and aligning it with GDPR enforcement cases, companies can meet the requirements of both frameworks simultaneously, avoiding redundant costs. With the EU AI Act's high-risk provisions applying from August 2026, now is the ideal time to establish this mechanism.
- What are the most common compliance challenges for Taiwanese companies regarding the "obligation to explain" when implementing ISO 42001?
- The most common challenges are twofold. First, companies often treat explainability as a purely technical issue (e.g., selecting an XAI tool), overlooking that regulators are more concerned with whether the explanation is "meaningful in substance"—a core issue clearly revealed by the case law in this paper. Second, many companies' explainability documentation is static and lacks a mechanism for updates as models iterate. This is easily identified as a non-conformity during an ISO 42001 Clause 10 (continual improvement) audit. Taiwan's AI Basic Act also emphasizes the continuity of transparency, making a dynamic updating mechanism key to multi-framework compliance.
- What are the core requirements of ISO 42001 certification, and how long does implementation take?
- ISO 42001 is the world's first international standard for AI management systems, with core requirements including AI risk assessment (Clause 6.1.2) and risk treatment (Clause 6.1.3), AI system impact assessment (Clauses 6.1.4 and 8.4), transparency and explainability controls (Annex A), and performance evaluation with continual improvement (Clauses 9 and 10). For a typical medium-sized enterprise, the journey from initial diagnosis to certification usually takes 7 to 12 months: roughly 3 months for a gap analysis, 3-6 months for system design and documentation, and 1-3 months for internal audits and certification preparation. The transparency requirements of Taiwan's AI Basic Act are highly compatible with this work and can be integrated into the same planning.
- How can the costs and expected benefits of establishing an AI explanation obligation compliance mechanism be evaluated?
- From a risk management perspective, the cost of establishing a compliance mechanism is far lower than the potential fines. The EU AI Act imposes fines of up to €15 million or 3% of global annual turnover (whichever is higher) for non-compliance with high-risk AI system obligations, rising to €35 million or 7% for prohibited AI practices. Existing GDPR cases involving failure to fulfill the obligation to explain have already resulted in fines exceeding one million euros. The cited paper shows that static compliance documents are often deemed insufficient by DPAs during investigations. Furthermore, ISO 42001 certification is increasingly becoming a standard requirement in B2B procurement within the EU market, offering dual benefits of risk management and market access.
- Why choose Winners Consulting Services for assistance with AI governance issues?
- Winners Consulting Services Co., Ltd. possesses cross-framework integration capabilities, with expertise in the ISO 42001 standard, EU AI Act requirements, Taiwan's AI Basic Act, and GDPR enforcement practices. The "practical gap in the obligation to explain" revealed in this paper highlights that companies need more than just an understanding of legal texts; they need a partner who can track the evolution of case law and translate it into internal operational mechanisms. Winners Consulting Services provides end-to-end services, from diagnosis and system design to implementation support and certification assistance, helping Taiwanese companies establish a sustainable AI governance mechanism within 7 to 12 months, and offers a free mechanism diagnosis to assess current gaps.