Insight: Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, urges enterprise leaders to recognize a critical finding from a 2025 international study with 39 citations: AI trustworthiness cannot be reduced to a single score. Across healthcare, finance, and public administration, organizations that fail to systematically manage the trade-offs between fairness, transparency, privacy, and security will face compounding risks under ISO 42001, the EU AI Act, and Taiwan's AI Basic Act.

Paper Citation: Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries (Aleksandra Nastoska, Bojana Jancheska, Maryan Rizinski; Electronics, 2025)
Original Paper: https://doi.org/10.3390/electronics14132717

Read Original Paper →

About the Authors and This Research

This paper was co-authored by Aleksandra Nastoska, Bojana Jancheska, and Maryan Rizinski, researchers affiliated with academic institutions in North Macedonia. While each author currently holds an h-index of 1, the paper has accumulated 39 citations since its 2025 publication — including 1 high-impact citation — signaling rapid uptake within the AI governance research community. The paper's influence stems from its rare combination of breadth and practical grounding: it systematically compares the NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001, and the AI Trust Framework and Maturity Model (AI-TMM), then stress-tests these frameworks against real-world case studies in healthcare, financial services, and autonomous systems. For Taiwanese enterprise executives evaluating their AI governance toolkit, this research offers one of the most comprehensive cross-framework comparison maps currently available in academic literature.

AI Trustworthiness Is a Multi-Dimensional Trade-Off Management Problem, Not a Checkbox

The research addresses two central questions: how can trust in AI systems be systematically measured across the AI lifecycle, and what trade-offs emerge when optimizing for different trustworthiness dimensions simultaneously? The findings challenge the common assumption that organizations can simply "pass" an AI audit and move on. Instead, the authors demonstrate that trustworthiness requires continuous, lifecycle-spanning governance — and that improving one dimension frequently creates tension in another.

Core Finding One: Structural Tension Exists Among the Four Trustworthiness Dimensions

The study identifies four foundational dimensions of AI trustworthiness — fairness, transparency, privacy, and security — and demonstrates that these dimensions are not independently optimizable. A model calibrated for maximum fairness across demographic groups may sacrifice overall prediction accuracy. A system designed for full explainability may inadvertently expose sensitive training data, creating privacy risks. A highly secure, sandboxed system may resist the transparency audits that regulators increasingly demand. These are not edge cases; they are structural properties of AI system design. For Taiwanese enterprises, this finding has direct implications: organizations operating AI systems in regulated industries cannot defer governance decisions to IT teams alone. Legal, ethical, and business stakeholders must be formally integrated into AI decision-making processes — a requirement explicitly codified in both ISO 42001's organizational context requirements and the EU AI Act's Article 9 risk management obligations.

Core Finding Two: No Single Framework Solves Every Governance Challenge

The comparative analysis of NIST AI RMF, ISO/IEC 42001, and AI-TMM reveals distinct strengths and gaps in each framework. NIST AI RMF excels at risk identification and process structure but lacks prescriptive quantitative metrics. ISO 42001 provides a certifiable management system architecture — making it the most valuable framework for organizations that need to demonstrate compliance to customers, regulators, or international procurement bodies. AI-TMM offers a maturity-level progression model suited to organizations in the early stages of governance capability building. The research concludes that no single framework provides a complete solution, and that adaptive, interdisciplinary governance structures are essential as AI technologies continue to evolve. For Taiwan's export-oriented manufacturers and financial service providers, this means that ISO 42001 certification should be treated as a foundation layer, not a final destination — it must be complemented by ongoing risk monitoring aligned with EU AI Act requirements and Taiwan's AI Basic Act principles.

What This Research Means for Taiwan's AI Governance Landscape in 2025

Taiwan's AI governance environment is undergoing rapid transformation on three parallel tracks. First, Taiwan's draft AI Basic Act (人工智慧基本法) sets out a risk-based regulatory framework that mirrors the EU AI Act's risk classification logic, creating a dual compliance imperative for Taiwan-based enterprises serving European markets. Second, ISO 42001 certification is becoming an implicit market access requirement: European public procurement standards and enterprise vendor qualification processes are increasingly requiring suppliers to demonstrate structured AI governance capabilities. Third, the EU AI Act's full enforcement timeline — with high-risk AI system obligations taking effect from August 2026 — means that Taiwanese exporters have a narrowing window to build compliant governance infrastructure.

The research findings map directly onto three high-priority scenarios for Taiwanese enterprises:

Scenario One — Smart Manufacturing Quality Control: AI-driven defect detection systems must be evaluated not only for accuracy but for fairness across product lines and production shifts. Systematic bias in training data can create liability exposure that neither quality teams nor IT departments are currently equipped to detect under existing governance structures.

Scenario Two — Fintech Credit Scoring: Under EU AI Act Annex III classification, credit scoring constitutes a high-risk AI application, requiring documented explainability, bias auditing, and human oversight mechanisms. Taiwan's Financial Supervisory Commission is expected to align domestic AI guidelines with these international standards in its upcoming regulatory updates.

Scenario Three — Healthcare AI Decision Support: Medical AI systems face the highest multi-dimensional compliance burden: simultaneous requirements for clinical reliability, fairness across patient demographics, data privacy under Taiwan's Personal Data Protection Act, and the documentation standards mandated by ISO 42001.

How Winners Consulting Services Helps Taiwanese Enterprises Act on These Findings

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwanese enterprises build AI management systems compliant with ISO 42001 and the EU AI Act, conduct structured AI risk classification assessments, and align AI deployments with the principles of Taiwan's AI Basic Act. Based on the core findings of this research, we recommend the following three immediate actions:

  1. Establish a Multi-Dimensional AI Trustworthiness Baseline: Using the four dimensions identified in the research — fairness, transparency, privacy, and security — conduct a quantified baseline assessment of all active AI systems. Assign measurable metrics to each dimension (e.g., Demographic Parity Difference for fairness, SHAP-based explainability scores for transparency) and document the trade-off decisions made for each system. This baseline becomes the foundation of your ISO 42001 Gap Analysis and your defense documentation under EU AI Act Article 9.
  2. Design Risk-Tiered Governance Strategies by Industry Application: Not all AI applications carry equal governance weight. Cross-reference your AI application inventory against EU AI Act Annex III risk classifications and Taiwan AI Basic Act risk-tiering principles. Allocate governance resources proportionally — high-risk applications require human oversight protocols, continuous monitoring, and bias auditing; limited-risk applications may only require transparency notices. This prevents both over-compliance resource waste and under-compliance regulatory exposure.
  3. Constitute a Cross-Disciplinary AI Governance Committee Within 90 Days: The research's most actionable conclusion is that interdisciplinary collaboration is non-negotiable for robust AI governance. Within 90 days, establish a governance committee with representation from technology, legal, ethics, and business units. Assign this committee formal authority to approve AI deployment decisions, review incident reports, and update AI use policies on a quarterly basis. This structure satisfies ISO 42001's organizational roles and responsibilities requirements and positions the enterprise for the stakeholder consultation obligations embedded in the EU AI Act.
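The risk-tiering logic in action item 2 can be sketched as a simple inventory lookup. This is a minimal Python sketch, not legal advice: the application names, tier labels, and control lists below are illustrative assumptions loosely modeled on EU AI Act Annex III categories, not the Act's legal text.

```python
# Illustrative sketch: map an AI application inventory to risk tiers and
# proportional governance controls. All names below are hypothetical.

ANNEX_III_STYLE_TIERS = {
    "credit_scoring": "high",              # access to essential services
    "cv_screening": "high",                # employment management
    "biometric_identification": "high",
    "customer_service_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify_risk(application: str) -> str:
    """Return the tier for an application; unknown systems default to
    'unclassified' so they are flagged for manual legal review."""
    return ANNEX_III_STYLE_TIERS.get(application, "unclassified")

def governance_actions(tier: str) -> list[str]:
    """Proportional controls per tier, per action item 2 above."""
    return {
        "high": ["human oversight", "continuous monitoring", "bias audit"],
        "limited": ["transparency notice"],
        "minimal": [],
    }.get(tier, ["manual legal review"])

# Example inventory walk-through
for app in ["credit_scoring", "spam_filter", "demand_forecasting"]:
    tier = classify_risk(app)
    print(app, "->", tier, governance_actions(tier))
```

The deliberate design choice here is the "unclassified" default: an application missing from the inventory is escalated rather than silently treated as low risk, which matches the research's emphasis on lifecycle-spanning governance rather than one-time audits.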

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant management system within 90 days.

Apply for Free Governance Diagnostic →

Frequently Asked Questions

How can enterprises measure AI trustworthiness in quantitative terms rather than relying on qualitative judgment?
Quantifying AI trustworthiness requires assigning measurable metrics to each of the four core dimensions. For fairness, use statistical parity measures such as Demographic Parity Difference or Equal Opportunity Difference. For transparency, apply explainability scoring tools such as SHAP values or LIME outputs. For privacy, evaluate differential privacy budgets (epsilon values) and data minimization compliance ratios. For security, conduct regular adversarial robustness tests and track mean time to detect anomalies. Critically, these metrics must be measured at each stage of the AI lifecycle — not just at initial deployment. ISO 42001 requires documented performance evaluation processes, and Winners Consulting Services can help enterprises design monitoring dashboards that satisfy both internal governance and external audit requirements.
How do Taiwanese companies determine whether their AI applications fall under the EU AI Act's high-risk classification?
The EU AI Act's Annex III defines eight high-risk application domains: biometric identification and categorization, critical infrastructure management, education and vocational training, employment management (including CV screening and performance monitoring), access to essential services (including credit scoring and insurance), law enforcement, migration management, and administration of justice and democratic processes. Taiwanese enterprises whose products or services are used by EU-based organizations, or who export AI-enabled products to the EU market, fall within the Act's jurisdictional scope regardless of where the AI system was developed. Winners Consulting Services provides a structured EU AI Act applicability assessment to help enterprises map their AI portfolio against Annex III classifications and prioritize compliance investments accordingly.
What does ISO 42001 certification actually require, and how does it relate to the EU AI Act and Taiwan's AI Basic Act?
ISO 42001 is the world's first internationally recognized AI management system standard. It requires organizations to establish, implement, maintain, and continuously improve a governance structure covering: AI policy definition, organizational context analysis, stakeholder needs assessment, AI-specific risk and impact assessment, resource allocation, competence development, operational controls, performance evaluation, and management review. Its relationship to the EU AI Act is complementary: ISO 42001 certification provides documented evidence of a functioning risk management system, directly satisfying the EU AI Act's Article 9 requirements for high-risk AI systems. Taiwan's AI Basic Act aligns with ISO 42001's risk-tiering logic, making certification a strategically efficient path to multi-jurisdictional compliance. For Taiwanese enterprises, ISO 42001 certification signals governance maturity to international partners, regulators, and customers simultaneously.
What is the realistic timeline for building ISO 42001 compliance, and what are the key milestones?
Based on Winners Consulting Services' implementation experience with Taiwanese mid-sized enterprises, the end-to-end journey from initial diagnostic to certification readiness typically spans 6 to 9 months across four phases. Phase One (Weeks 1–4): Current state diagnostic and gap analysis — inventory all AI applications, map existing governance controls, and identify gaps against ISO 42001 clause requirements. Phase Two (Weeks 5–12): Policy and process design — develop AI use policy, risk assessment procedures, and incident response plans. Phase Three (Weeks 13–20): System implementation and training — deploy monitoring metrics, establish document management systems, and train cross-functional governance committee members. Phase Four (Weeks 21–36): Internal audit and certification preparation — conduct mock audits, close identified gaps, and submit for third-party certification. Enterprises that begin with a structured diagnostic accelerate this timeline significantly.
Why should Taiwanese enterprises choose Winners Consulting Services for AI governance advisory?
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of Taiwan's few consulting organizations with demonstrated capability across ISO 42001 implementation, EU AI Act compliance planning, and Taiwan AI Basic Act interpretation. Our advisory approach is grounded in academic-grade research synthesis — as demonstrated by this paper evaluation series — ensuring that governance recommendations are informed by the latest international evidence rather than generic compliance templates. Our cross-disciplinary team integrates technical, legal, ethical, and business perspectives, directly addressing the interdisciplinary collaboration gap identified in this research. We deliver milestone-based engagements that produce quantifiable governance outputs within 90 days, rather than deferring all outcomes to final certification. We also provide long-term regulatory monitoring services, ensuring that clients' governance frameworks remain adaptive as AI regulations across the EU, Taiwan, and Asia-Pacific continue to evolve.