Winners Consulting Services Co., Ltd. points out that when companies deploy high-risk AI decision-making systems, "explanation" itself does not equal "understanding." A qualitative study of 30 participants found that individual differences profoundly affect users' perceptions of algorithmic fairness, a result with direct, practical implications for Taiwanese companies designing Explainable Artificial Intelligence (XAI) mechanisms under the ISO 42001 compliance framework.
Paper Source: On the Impact of Explanations on Understanding of Algorithmic Decision-Making (Timothée Schmude, Laura Koesten, Torsten Möller; ACM FAccT, 2023)
Original Link: https://doi.org/10.1145/3593013.3594054
About the Authors and This Study
The first author, Timothée Schmude, is affiliated with the University of Vienna and focuses on Human-Computer Interaction (HCI) and the understandability of algorithmic decision-making. Co-author Laura Koesten (h-index 9, over 490 citations) works at the intersection of open data, data description, and AI transparency; formerly at the University of Oxford, she is now an associate professor at the University of Vienna and a recognized figure in European data and AI usability research. The third author, Torsten Möller, is a professor in the same research group at the University of Vienna.
This paper was published in 2023 at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) under DOI https://doi.org/10.1145/3593013.3594054. It has since accumulated around 20 citations, a solid count for a qualitative HCI study, reflecting its uptake in the XAI policy and practice communities. Notably, ACM FAccT is a highly regarded venue closely followed by EU AI policymakers and corporate compliance experts.
Explanation Is Not Understanding: The Core Blind Spot of XAI Research and Its Governance Implications
This research reveals a fundamental insight for corporate AI governance: even when explanations are provided, significant differences persist in individual users' understanding of high-risk algorithmic decision-making (ADM) systems, and these differences directly influence their judgments of system fairness. The researchers used Wiggins & McTighe's "Six Facets of Understanding" framework as an analytical tool to systematically compare three different explanation methods.
Key Finding 1: The Three Explanation Modes Have Different Efficacy Boundaries; Dialogue-Based Explanations Best Reveal Individual Differences in Understanding
The study used textual, dialogue-based, and interactive explanation modes to convey the logic of a high-risk ADM system to 30 participants. The results showed that dialogue-based explanations were most effective at prompting participants to engage in active reflection and critical evaluation: during the dialogues, participants spontaneously revealed more of their emotional and rational judgments about the algorithm's fairness rather than passively receiving information. This finding has direct implications for companies designing stakeholder communication mechanisms for high-risk AI systems: static documents (such as PDF manuals or brief model cards) may be insufficient to meet the "understandability" standard required by compliance frameworks.
Key Finding 2: Individual Differences in Understanding Are a Hidden Variable in Algorithmic Fairness Assessment
The study's inductive analysis further indicated that participants' assessments of the ADM system's fairness were shaped by an interplay of personal background knowledge, emotional responses, and depth of understanding. In other words, the same explanation mechanism can produce vastly different perceptions of fairness among stakeholders from different backgrounds. This directly echoes the core issue in the European Commission's current investigations into algorithmic transparency on platforms such as TikTok and X (including its Grok model): the design of system transparency must account for user heterogeneity rather than assume a one-size-fits-all explanation will suffice.
Implications for AI Governance in Taiwan: The Obligation to Explain High-Risk Systems Is More Than Just "Providing an Explanation"
When pursuing AI governance compliance, Taiwanese companies often simplify "explainability" to mean "providing documentation." However, this study clearly indicates that this approach has blind spots in both regulatory and practical terms, warranting reflection on three levels.
First, regarding ISO 42001: this international standard for AI management systems, officially published in 2023, requires companies, under Clause 6.1 (Actions to address risks and opportunities) and Clause 8.4 (AI system impact assessment) together with the transparency and explainability controls of Annex A, to establish traceable and verifiable explanation mechanisms. This study's finding on individual differences in understanding reminds Taiwanese companies that ISO 42001 compliance verification cannot stop at "whether an explanation was produced" but must further assess "whether the explanation was correctly understood."
Second, concerning the EU AI Act: Article 13 (Transparency and provision of information to deployers) sets clear requirements for the information that must accompany high-risk AI systems so that those operating them can interpret and appropriately use their outputs. The European Commission's 2026 draft guidance on AI regulation further details these corporate transparency obligations. This study's comparison of three explanation modes provides an evidence-based reference for companies designing explanation mechanisms that comply with Article 13.
Third, regarding Taiwan's AI Basic Law: the law establishes principles of transparency and accountability for AI applications, requiring high-risk AI systems to provide adequate explanations to stakeholders. Read alongside this study's findings, "adequate explanation" should be interpreted to cover the diversity and appropriateness of explanation methods, rather than being satisfied by the mere existence of a document.
Furthermore, as a core mechanism for ISO 42001 and EU AI Act compliance, the credibility of Algorithmic Impact Assessments (AIAs) is significantly undermined if they omit the variable of differences in understanding. Companies conducting AIAs should therefore include the degree to which users actually understand the explanations as an assessment metric.
Winners Consulting Services Helps Taiwanese Companies Build XAI Governance Mechanisms Centered on "Understanding Verification"
Winners Consulting Services Co., Ltd. assists Taiwanese companies in establishing AI management systems compliant with ISO 42001 and the EU AI Act, conducting AI risk classification, and ensuring that AI applications align with Taiwan's AI Basic Law. In response to this study's key findings, Winners Consulting Services recommends that Taiwanese companies implement the following three actions sequentially within a 7- to 12-month implementation cycle:
- Months 1-3: Identify Gaps in Existing Explanation Mechanisms for High-Risk AI Systems—Cross-reference ISO 42001's Annex A transparency controls, its Clause 8.4 AI system impact assessment, and EU AI Act Article 13 to review the format, target audience, and understanding-verification mechanisms of current documentation. Specifically, identify which systems rely solely on static documents and lack dialogue-based or interactive explanation channels.
- Months 4-8: Design Differentiated Explanation Strategies Based on Stakeholder Types—Drawing on the three explanation modes from this study (textual, dialogue-based, interactive), design tailored explanation strategies for different user groups (e.g., the general public affected by the ADM system, internal auditors, regulatory bodies). Establish metrics to evaluate explanation effectiveness, such as comprehension test pass rates and changes in appeal rates; a minimal sketch of such a metric follows this list. Where technically feasible, also evaluate mechanistic interpretability techniques to strengthen the technical credibility of explanations.
- Months 9-12: Establish a Continuous Understanding Verification Cycle and Integrate It into ISO 42001 Internal Audits—Incorporate "stakeholder understanding of explanations" into annual AI governance performance indicators. Regularly verify the actual effectiveness of explanation mechanisms through surveys, user testing, or structured interviews, and feed the results back into the algorithmic impact assessment process to create a closed-loop governance system capable of continuous improvement.
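To make the "comprehension test pass rate" metric concrete, the sketch below aggregates post-explanation test scores by stakeholder group and explanation mode, then flags any combination that misses a target rate so the gaps can feed the algorithmic impact assessment. This is a minimal illustration in Python: the record structure, the 0.7 per-respondent pass mark, and the 80% per-group target are assumptions chosen for demonstration, not values prescribed by ISO 42001, the EU AI Act, or the Schmude et al. study.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ComprehensionResult:
    """One stakeholder's score on a post-explanation comprehension test."""
    stakeholder_group: str  # e.g. "affected_public", "internal_auditor"
    explanation_mode: str   # "textual", "dialogue", or "interactive"
    score: float            # normalized test score, 0.0-1.0

PASS_THRESHOLD = 0.7    # illustrative per-respondent pass mark
TARGET_PASS_RATE = 0.8  # illustrative per-group target

def pass_rates(results):
    """Aggregate pass rates by (stakeholder_group, explanation_mode)."""
    totals = defaultdict(lambda: [0, 0])  # key -> [passed, total]
    for r in results:
        key = (r.stakeholder_group, r.explanation_mode)
        totals[key][0] += r.score >= PASS_THRESHOLD
        totals[key][1] += 1
    return {key: passed / total for key, (passed, total) in totals.items()}

def flag_gaps(results):
    """List groups whose pass rate misses the target: input for remediation."""
    return [(group, mode, rate)
            for (group, mode), rate in pass_rates(results).items()
            if rate < TARGET_PASS_RATE]

if __name__ == "__main__":
    sample = [
        ComprehensionResult("affected_public", "textual", 0.55),
        ComprehensionResult("affected_public", "dialogue", 0.85),
        ComprehensionResult("internal_auditor", "textual", 0.90),
    ]
    for group, mode, rate in flag_gaps(sample):
        print(f"Remediate: {group} / {mode} explanation, pass rate {rate:.0%}")
```

In practice, each flagged combination would become a remediation item in the Months 9-12 verification cycle, with the chosen thresholds documented and justified in the AI management system rather than hard-coded as they are here.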
Winners Consulting Services Co., Ltd. offers a Free AI Governance Mechanism Diagnosis to help Taiwanese companies establish an ISO 42001-compliant management system within 7 to 12 months and conduct specialized assessments of their explanation obligations for high-risk AI systems.
Frequently Asked Questions
- If a company provides documentation for its AI decisions, does it meet the explainability requirements of ISO 42001 and the EU AI Act?
- Merely providing documentation does not fulfill compliance obligations. This study's qualitative experiment with 30 participants found that the same explanation can lead to vastly different understandings among users from diverse backgrounds, thereby affecting their judgments of algorithmic fairness. ISO 42001 requires verifiable transparency mechanisms (its Annex A controls, informed by the Clause 8.4 AI system impact assessment), and Article 13 of the EU AI Act requires that high-risk systems be transparent enough for deployers to interpret system outputs and use them appropriately. Therefore, in addition to producing documentation, companies must implement understanding-verification mechanisms (e.g., user testing, structured feedback collection) to prove the actual effectiveness of their explanations during an audit, rather than claiming compliance based merely on the existence of documents; a minimal sketch of such an evidence record appears after these FAQs.
- What are the most common compliance challenges Taiwanese companies face regarding AI explainability when implementing ISO 42001?
- The three most common challenges are: First, companies often rely solely on technical documents (e.g., model cards, SHAP value charts) for explainability, which are difficult for non-technical stakeholders to understand and thus fail to meet ISO 42001's requirement for "appropriate" explanations. Second, existing explanation mechanisms are not tailored for different stakeholder groups, leading to insufficient coverage. Third, companies lack continuous monitoring metrics for explanation effectiveness, making it impossible to provide traceable evidence of improvement during ISO 42001 internal audits. The transparency principles established in Taiwan's AI Basic Law align with the direction of EU AI Act Article 13, so companies should cross-reference all three to ensure their explanation mechanisms have no blind spots.
- What are the core requirements for ISO 42001 certification, and how should Taiwanese companies plan their implementation timeline?
- ISO 42001 is the world's first international standard for AI management systems. Its core requirements include establishing an AI policy, conducting risk assessments and classification (including identifying high-risk systems), implementing transparency and explainability mechanisms and AI system impact assessments (Clause 8.4), data governance, and continuous monitoring and internal audits. We recommend a standard 7- to 12-month implementation cycle for Taiwanese companies: Months 1-3 for a current-state diagnosis and gap analysis against the ISO 42001 clauses; Months 4-8 for designing and building the mechanisms, including explainability solutions and algorithmic impact assessment processes; and Months 9-12 for internal audit simulations, personnel training, and preparing the certification application. Companies also subject to the EU AI Act can plan for dual compliance from the start to avoid redundant implementation costs.
- What resources are needed to establish an AI explainability mechanism compliant with ISO 42001, and what are the expected benefits?
- Resource requirements vary based on company size and AI system complexity. For a mid-sized company, the 7- to 12-month implementation period typically requires appointing one to two AI governance officers, completing at least one round of algorithmic impact assessments, and establishing a library of explanation mechanism documents. In terms of benefits, according to the European Commission's 2026 AI regulation guidelines, companies that proactively establish explainability mechanisms can significantly reduce response costs during regulatory investigations (e.g., DSA algorithm transparency audits). Furthermore, robust explanation mechanisms help lower appeal rates and legal risks arising from ADM decision disputes. ISO 42001 certification is also becoming a crucial credential for demonstrating AI governance maturity, especially in high-risk sectors like finance, healthcare, and human resources.
- Why choose Winners Consulting Services for AI governance-related issues?
- Winners Consulting Services Co., Ltd. is one of the few consulting firms in Taiwan with expertise in both ISO 42001 implementation and EU AI Act compliance. Our services cover the entire AI governance lifecycle: from AI risk classification and explainability mechanism design to algorithmic impact assessments, pre-certification audit simulations for ISO 42001, and alignment with Taiwan's AI Basic Law. We continuously track EU DSA enforcement, the latest academic research from ACM FAccT, and Taiwanese regulatory developments to ensure our advice is based on the most current regulatory and empirical foundations. We offer a free AI governance mechanism diagnosis to help companies clearly identify compliance gaps and prioritize actions before formal implementation, ensuring a more targeted and effective investment of resources.
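As noted in the first FAQ answer above, auditors ask for evidence rather than assertions. The sketch below shows one hypothetical shape for a retained understanding-verification record, written in Python for illustration; every field name and value is an assumption, since neither ISO 42001 nor the EU AI Act mandates a specific schema.

```python
import json
from datetime import date

# Hypothetical audit-evidence record for one verification round; all field
# names and values are illustrative, not mandated by any standard.
verification_record = {
    "ai_system": "loan-scoring-v3",             # assumed system identifier
    "explanation_artifact": "model-card-v3.2",  # assumed document under test
    "verification_method": "structured user test",
    "date": date(2025, 3, 14).isoformat(),
    "participants": 12,
    "comprehension_pass_rate": 0.83,
    "follow_up_action": "add dialogue-based walkthrough for appellants",
}

# Append to a dated log so internal audits have a traceable trail linking
# each explanation artifact to a measured outcome and a follow-up action.
with open("explanation_verification_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(verification_record) + "\n")
```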