Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, brings a critical insight to enterprise leaders: when your AI system cannot be fully explained, opacity itself can be ethically governed — and a 2025 academic framework called LoBOX shows exactly how. For Taiwanese companies preparing for ISO 42001 certification or EU AI Act compliance, this research offers a practical, institutionally grounded alternative to the impossible standard of complete AI transparency.
Paper Citation: Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI (Francisco Herrera, Reyes Calderón, arXiv — AI Governance & Ethics, 2025)
Original Paper: http://arxiv.org/abs/2505.20304v1
About the Authors and This Research
Francisco Herrera is a Spanish AI researcher specializing in explainable artificial intelligence (XAI), fuzzy logic, and AI ethics governance. His co-author Reyes Calderón brings expertise in institutional trust design and AI ethics frameworks. The paper was published in 2025 on arXiv under the AI Governance & Ethics category — a deliberate choice of open-access preprint distribution that reflects the urgent, real-time nature of AI governance research in response to emerging regulatory requirements worldwide.
The significance of this paper lies in the precision with which it addresses the most persistent contradiction in AI governance practice: regulators demand transparency, yet many AI systems — particularly deep learning models and large language models — are structurally incapable of full explainability. LoBOX is the authors' institutional answer to this contradiction, and it aligns remarkably well with EU AI Act Article 13 (transparency obligations) and ISO 42001 Clause 7.5 (documented information).
Opacity Is Not a Bug: LoBOX Reframes How We Build Trust in AI Systems
The central argument of this research is both provocative and practically liberating: AI opacity should not be treated as a design flaw to be eliminated, but as a condition to be actively and ethically governed. The LoBOX (Lack of Belief: Opacity & eXplainability) framework provides a structured, three-stage governance pathway that shifts the foundation of AI trustworthiness from complete explainability to institutional accountability.
Core Finding 1: The Three-Stage Opacity Governance Pathway
LoBOX proposes a sequential governance pathway consisting of three stages. The first stage, Reduce Accidental Opacity, addresses opacity that arises from correctable design choices — unnecessary complexity, poor documentation, or inadequate model selection for the given use case. The second stage, Bound Irreducible Opacity, requires organizations to honestly acknowledge and formally delimit the opacity that is inherent to certain AI architectures (such as neural networks) and cannot be eliminated without sacrificing performance. The third stage, Delegate Trust through Structured Oversight, establishes institutional accountability mechanisms that allow stakeholders to extend trust to AI systems even in the absence of full technical understanding. A dynamic governance loop connects all three stages, ensuring the framework remains responsive as technology and stakeholder expectations evolve. This approach directly supports ISO 42001 Clause 6.1 (risk and opportunity management) and provides a concrete methodology for what the Taiwan AI Basic Act framework describes as "accountable AI application."
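To make this pathway concrete, the sketch below shows one way an organization might encode the three stages in an internal governance tool. This is a minimal illustration of the idea, not code from the paper; the enum names, record fields, and follow-up actions are our own hypothetical choices.

```python
from dataclasses import dataclass
from enum import Enum

class OpacityStage(Enum):
    """The three LoBOX stages, encoded as governance states (illustrative naming)."""
    REDUCE_ACCIDENTAL = "reduce accidental opacity"      # correctable design choices
    BOUND_IRREDUCIBLE = "bound irreducible opacity"      # inherent to the architecture
    DELEGATE_OVERSIGHT = "delegate trust via oversight"  # institutional accountability

@dataclass
class GovernanceAction:
    system_name: str
    stage: OpacityStage
    rationale: str

def next_review(action: GovernanceAction) -> str:
    """The dynamic governance loop: every stage feeds back into a scheduled re-review."""
    follow_up = {
        OpacityStage.REDUCE_ACCIDENTAL: "re-assess after redesign or re-documentation",
        OpacityStage.BOUND_IRREDUCIBLE: "re-validate the declared opacity bounds",
        OpacityStage.DELEGATE_OVERSIGHT: "schedule the next oversight-committee review",
    }
    return f"{action.system_name}: {follow_up[action.stage]}"

# Example: a deep model whose opacity is inherent to its architecture
action = GovernanceAction(
    "credit-scoring-model", OpacityStage.BOUND_IRREDUCIBLE,
    "deep neural network; opacity cannot be removed without sacrificing performance")
print(next_review(action))
```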
Core Finding 2: Role-Sensitive Explainability via the RED/BLUE XAI Model
LoBOX integrates the RED/BLUE XAI model, which distinguishes between two fundamentally different explainability needs. The RED dimension addresses the explanatory requirements of regulators, internal auditors, and compliance officers — stakeholders who need systematic, verifiable, and technically precise explanations of AI behavior. The BLUE dimension addresses the explainability needs of end users, business managers, and individuals affected by AI decisions — stakeholders who need contextually relevant, accessible, and actionable explanations. This role-sensitive architecture directly responds to EU AI Act Article 13's requirement that high-risk AI systems provide "appropriate explanations" — a requirement that the Act deliberately leaves open to contextual interpretation. The authors also highlight that cultural and institutional trust contexts vary significantly across different societies, a finding with direct relevance to Taiwan's unique governance culture and the cross-border AI deployment challenges faced by Taiwanese enterprises operating in global markets.
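As a rough illustration of what role-sensitive explainability could look like in operation, the following sketch dispatches explanation content by stakeholder role. The paper defines the RED/BLUE distinction conceptually, not as an API; the role lists, field names, and explanation wording here are hypothetical assumptions for this example.

```python
from typing import TypedDict

class Explanation(TypedDict):
    audience: str
    content: str

# Hypothetical role-to-dimension mapping (assumed for this sketch).
RED_ROLES = {"regulator", "internal_auditor", "compliance_officer"}
BLUE_ROLES = {"end_user", "business_manager", "affected_individual"}

def explain(role: str, decision: str) -> Explanation:
    """Return an explanation calibrated to the stakeholder's RED/BLUE dimension."""
    if role in RED_ROLES:
        # RED: systematic, verifiable, technically precise
        return {"audience": "RED",
                "content": f"Decision '{decision}': model version, feature attributions, "
                           "validation metrics, and audit-trail references."}
    if role in BLUE_ROLES:
        # BLUE: contextual, accessible, actionable
        return {"audience": "BLUE",
                "content": f"Decision '{decision}': the main factors that affected you "
                           "and the steps you can take in response."}
    raise ValueError(f"No explanation policy defined for role: {role}")

print(explain("regulator", "loan denied"))
print(explain("affected_individual", "loan denied"))
```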
What This Means for Taiwan's AI Governance Practice: Compliance Is About Accountability, Not Just Transparency
The timing of this research is particularly significant for Taiwan. The Taiwan AI Basic Act (人工智慧基本法) draft framework emphasizes human-centered AI principles and enterprise accountability obligations. The EU AI Act, in force since August 2024, is being phased in with full high-risk AI system compliance requirements expected by 2027 — affecting every Taiwanese company with EU market exposure. ISO 42001, the world's first international standard for AI management systems, contains clauses — particularly 6.1 (risk identification) and 8.4 (AI system impact assessment) — that implicitly require governance frameworks for opaque AI systems.
The most dangerous misconception among Taiwanese enterprise leaders is equating "AI transparency" with "complete explainability." LoBOX clarifies that when AI systems — such as large language models or deep neural networks — cannot be fully explained internally, the enterprise obligation is not to fabricate transparency but to build structural accountability: documented AI risk classification systems, role-calibrated explanation policies, and auditable institutional oversight mechanisms. These are precisely what ISO 42001 auditors look for during certification reviews, and what EU AI Act conformity assessments require for high-risk AI categories including HR decision-making, credit scoring, and medical diagnostic support.
Taiwanese enterprises that understand the LoBOX distinction between "transparency as an ideal" and "governance as a practice" will be substantially better positioned to pass ISO 42001 certification audits, demonstrate EU AI Act compliance, and satisfy the accountability requirements of Taiwan's emerging AI regulatory framework — all simultaneously.
How Winners Consulting Services Co. Ltd. Translates LoBOX Insights into ISO 42001 Compliance Actions
積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) assists Taiwanese enterprises in building AI management systems compliant with ISO 42001 and the EU AI Act, conducting AI risk classification assessments, and ensuring AI applications conform to Taiwan AI Basic Act requirements. Drawing on the LoBOX framework, we recommend the following three concrete actions for Taiwanese enterprise leaders:
- Conduct an AI Opacity Inventory Using the LoBOX Three-Stage Classification: For each AI application in your enterprise, classify whether existing opacity is accidental (correctable through better design choices), irreducible (inherent to the AI architecture), or neither (fully explainable). Document this classification in your ISO 42001 Clause 6.1 risk register; an illustrative register-entry sketch follows this list. This inventory forms the foundational evidence for AI risk classification under both ISO 42001 and the EU AI Act's risk-tier system. Winners Consulting provides standardized inventory templates enabling completion of a first-pass audit within 30 days.
- Design a Role-Sensitive AI Explanation Policy Aligned with the RED/BLUE Model: Develop differentiated explanation standards for each stakeholder category: regulatory bodies and internal auditors (RED dimension — technical, systematic, verifiable), and business users and affected individuals (BLUE dimension — contextual, accessible, actionable). Specify the triggering conditions for each explanation type, the format and depth standards, and the retention and update protocols. This policy directly addresses ISO 42001 Clause 7.4 (communication) and Clause 7.5 (documented information), and satisfies EU AI Act Article 13 transparency requirements for high-risk AI systems.
- Establish a Structural AI Oversight and Accountability Mechanism: Convene a cross-functional AI Governance Committee with defined review cycles (minimum quarterly), clear issue escalation pathways, and a structured interface for external audits. This mechanism satisfies ISO 42001 Clause 9.3 (management review) requirements and enables ongoing compliance with EU AI Act Article 9 (risk management systems) obligations for high-risk AI. Winners Consulting can assist Taiwanese enterprises in designing and operationally validating this mechanism within 90 days.
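As a concrete illustration of the first recommendation above, the sketch below models one row of a Clause 6.1 risk register carrying the three-way opacity classification. The field names and the validation helper are illustrative assumptions, not an ISO 42001 schema or a Winners Consulting template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OpacityRecord:
    """One row of an ISO 42001 Clause 6.1 risk register (illustrative fields)."""
    ai_system: str
    opacity_class: str   # "accidental" | "irreducible" | "none"
    evidence: str        # why the classification holds
    control: str         # the corresponding control measure
    review_date: date = field(default_factory=date.today)

def validate(record: OpacityRecord) -> OpacityRecord:
    """Reject entries outside the three-way LoBOX-style classification."""
    allowed = {"accidental", "irreducible", "none"}
    if record.opacity_class not in allowed:
        raise ValueError(f"opacity_class must be one of {allowed}")
    return record

inventory = [
    validate(OpacityRecord(
        ai_system="resume-screening model",
        opacity_class="irreducible",
        evidence="deep neural network; internals not human-interpretable",
        control="bounded opacity declaration plus quarterly committee review")),
    validate(OpacityRecord(
        ai_system="rule-based leave-approval bot",
        opacity_class="accidental",
        evidence="undocumented rules; opacity stems from poor documentation",
        control="document decision rules; reclassify after remediation")),
]
for row in inventory:
    print(f"{row.ai_system}: {row.opacity_class} -> {row.control}")
```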
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises build an ISO 42001-compliant management system within 90 days.
Apply for the Free Mechanism Diagnostic →
Frequently Asked Questions
- Our AI systems use deep learning models that cannot be fully explained internally. Can we still achieve ISO 42001 certification?
- Yes. ISO 42001 does not require AI systems to achieve 100% explainability. It requires enterprises to establish systematic governance mechanisms to manage AI-related risks. The LoBOX three-stage pathway — reducing accidental opacity, bounding irreducible opacity, and delegating trust through structured oversight — maps directly onto ISO 42001 Clause 6.1 (risk management) and Clause 8.4 (AI system impact assessment) requirements. The certification examines whether you have identified opacity risks, established corresponding control measures, and implemented continuous monitoring — not whether your AI can explain every internal computation. Winners Consulting assists enterprises in building exactly this type of auditable documentation system.
- How do Taiwanese companies determine whether they are subject to EU AI Act requirements?
- Three criteria determine EU AI Act applicability: first, whether your AI system is deployed in EU markets or used by EU-based users; second, whether your clients or business partners are EU companies with AI compliance clauses in contracts; third, whether your AI applications fall within EU AI Act high-risk categories (including HR decisions, credit scoring, medical diagnostic support, and critical infrastructure management). Satisfying any one of these criteria brings your enterprise within the scope of EU AI Act obligations. The Act entered into force in August 2024 with a phased implementation schedule — prohibited AI practices became enforceable in February 2025, and full high-risk AI system compliance requirements will apply no later than 2027. Taiwanese enterprises should initiate compliance gap analysis immediately rather than waiting for regulatory pressure to materialize.
- What documentation related to AI transparency and explainability is required for ISO 42001 certification?
- ISO 42001 certification audits examine transparency and explainability-related documentation across multiple clauses. Clause 6.1 requires an AI risk register documenting opacity risks and corresponding controls for each AI system. Clause 7.4 requires an AI communication policy specifying explanation standards for different stakeholders. Clause 7.5 requires documented AI system design decisions, including acknowledged limitations. Clause 8.4 requires AI system impact assessments addressing the effects of unexplainable decisions. Clause 9.1 requires AI performance monitoring mechanisms including explainability metric tracking. The LoBOX role-sensitive explanation policy framework translates directly into compliant documentation for these clauses. The Taiwan AI Basic Act draft framework similarly requires enterprise AI impact assessment documentation, creating strong alignment with ISO 42001 requirements.
- How long does it typically take to build an AI governance mechanism from scratch and prepare for ISO 42001 certification?
- Based on Winners Consulting's experience assisting Taiwanese enterprises, the typical timeline from a zero baseline to readiness for formal ISO 42001 audit ranges from 4 to 9 months, depending on organizational size and AI application complexity. The standard phasing is: Month 1 — current state diagnostic and gap analysis; Months 2–3 — governance policy framework design including AI risk classification standards, explanation policies, and accountability mechanisms; Months 4–6 — documentation system build-out, personnel training, and mechanism trial operation; Months 7–9 — internal audit, management review, and pre-certification verification. Enterprises with existing management system certifications (ISO 9001 or ISO 27001) can leverage existing structures to accelerate integration, reducing the timeline to 4–6 months. Winners Consulting provides full-cycle advisory support to ensure timeline predictability.
- Why engage Winners Consulting Services Co. Ltd. for AI governance advisory?
- Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of Taiwan's few consulting firms that combines real-time translation of global AI governance academic research into practical advisory guidance with hands-on ISO management system implementation experience. Our differentiation operates on both fronts: we track and interpret the research as it is published, and we bring the implementation experience to turn it into certifiable management systems.
Want to apply these insights to your enterprise?
Get a Free Assessment