
Insight: Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework


Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical finding from a landmark 2025 study: the greatest risk in enterprise AI privacy governance is not a technical failure — it is the fundamental gap between what different stakeholder groups expect from AI systems. The newly proposed PEA-AI (Privacy-Ethics Alignment in AI) model, published in the peer-reviewed journal Systems, demonstrates that young digital citizens, parents and educators, and AI professionals each hold fundamentally different privacy expectations, creating governance blind spots that directly threaten compliance with ISO 42001, the EU AI Act, and Taiwan's AI Basic Act.

Paper Citation: Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework for Ethical AI (Ankur Barthwal, Molly Campbell, Ajay Kumar Shrestha, Systems, 2025)
Original Paper: https://doi.org/10.3390/systems13060455

Read Original Paper →

About the Authors and This Research

This paper is co-authored by Ankur Barthwal, Molly Campbell, and Ajay Kumar Shrestha. Barthwal has an h-index of 5 and 62 cumulative citations, with a research focus bridging AI ethics, privacy governance, and digital citizenship. His methodological approach, combining grounded theory with structured surveys, qualitative interviews, and focus groups, produces findings that are both academically rigorous and practically applicable. Co-authors Campbell and Shrestha have sustained research interests in ethical AI system design and human-centered governance.

The paper has accumulated 10 citations since publication in 2025, including 1 high-impact citation, indicating early traction in AI governance policy circles. It is published in Systems, an MDPI peer-reviewed journal covering systems science, organizational design, and complex systems management — a publication whose editorial standards align well with the interdisciplinary demands of AI governance research. The grounded theory methodology ensures that findings emerge from real stakeholder experiences rather than theoretical assumptions, making the research particularly relevant for practitioners designing governance frameworks.

The PEA-AI Model: Redefining AI Privacy Governance as Dynamic Multi-Stakeholder Negotiation

The central contribution of this research is deceptively simple yet profoundly challenging to implement: effective AI privacy governance cannot be reduced to a compliance checklist or a privacy policy document. The PEA-AI model reconceptualizes privacy decision-making as a continuous, dynamic negotiation process among stakeholders with divergent interests, knowledge levels, and power positions. For enterprise governance professionals, this is not an abstract academic insight — it is a direct diagnosis of why so many AI governance implementations fail to satisfy regulators, users, and organizational leadership simultaneously.

Core Finding 1: Three Stakeholder Groups Hold Irreconcilably Different Privacy Priorities

The research identifies three primary stakeholder clusters with distinct and often incompatible privacy orientations. Young digital citizens prioritize autonomy and digital agency — they want control over their data but often lack the risk awareness to exercise that control effectively. Parents and educators prioritize oversight mechanisms and AI literacy education, approaching privacy from a protective standpoint that can conflict with younger users' desire for independence. AI professionals are caught between ethical design principles and system performance pressures, frequently making transparency trade-offs in response to commercial imperatives. The study documents significant gaps in transparency perception and digital literacy across all three groups. Critically, the research finds that what organizations believe constitutes "sufficient transparency" rarely matches what end users actually understand — a gap that has direct, measurable implications for EU AI Act compliance under Article 13's transparency and user notification obligations.

Core Finding 2: The PEA-AI Model Provides a Six-Dimension Scalable Framework for Inclusive Privacy Governance

The PEA-AI model operationalizes stakeholder-driven privacy governance across six analytical dimensions: data ownership awareness, trust-building mechanisms, transparency design, parental and guardian mediation channels, educational support systems, and risk-benefit perception. The model is specifically designed to be scalable and adaptive, with a particular focus on youth-centered governance contexts. However, the research explicitly frames this scalability as applicable beyond educational or youth-specific settings — the same multi-stakeholder negotiation logic applies to enterprise environments where AI systems interact with employees, customers, supply chain partners, and regulators simultaneously. The model's emphasis on dynamic negotiation rather than static policy definition represents a significant conceptual advance for organizations seeking to build governance frameworks that remain valid as AI systems evolve.
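To make the six dimensions concrete, here is a minimal illustrative sketch of how a governance team might record per-stakeholder assessments along them. The dataclass, the 1–5 rating scale, and the sample scores are our own assumptions for illustration; the paper does not prescribe a numeric instrument.

```python
from dataclasses import dataclass

# The six PEA-AI analytical dimensions, as named in the paper.
DIMENSIONS = (
    "data_ownership_awareness",
    "trust_building",
    "transparency_design",
    "guardian_mediation",
    "educational_support",
    "risk_benefit_perception",
)

@dataclass
class StakeholderProfile:
    """One stakeholder group's assessed scores across the six dimensions."""
    group: str
    scores: dict[str, int]  # dimension -> hypothetical 1..5 rating

    def weakest_dimensions(self, threshold: int = 3) -> list[str]:
        # Dimensions scoring below the threshold are candidate governance gaps.
        return [d for d in DIMENSIONS if self.scores.get(d, 0) < threshold]

# Hypothetical assessment of one group (values are illustrative only).
youth = StakeholderProfile(
    "young digital citizens",
    {"data_ownership_awareness": 2, "trust_building": 4,
     "transparency_design": 3, "guardian_mediation": 2,
     "educational_support": 2, "risk_benefit_perception": 2},
)
print(youth.weakest_dimensions())
```

A profile like this, maintained per stakeholder group and revisited on a review cycle, is one way to give the model's "dynamic negotiation" idea an auditable footprint.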

Implications for Taiwan AI Governance: ISO 42001, EU AI Act, and Taiwan's AI Basic Act All Demand Stakeholder Inclusion

For Taiwanese enterprise executives, the PEA-AI model's findings are not merely academic observations — they constitute a practical governance warning that maps directly onto the three most significant AI regulatory frameworks currently shaping Taiwan's business environment.

ISO 42001:2023, the world's first international standard for AI management systems, explicitly requires organizations to identify and respond to the needs and expectations of relevant interested parties throughout the AI system lifecycle (Clause 4.2) and to conduct systematic risk assessments that include privacy and human rights dimensions (Clause 6.1). The six-dimension analytical framework proposed by the PEA-AI model provides a directly applicable methodology for satisfying ISO 42001's stakeholder analysis requirements. Organizations pursuing ISO 42001 certification in Taiwan will find that the research offers a structured, defensible approach to documenting stakeholder privacy expectations — a component that many first-time certification candidates underestimate.

The EU AI Act, which entered into force across the European Union in 2024 and affects any organization deploying AI systems that interact with EU residents, imposes mandatory transparency obligations (Article 13), fundamental rights impact assessments (FRIA) for high-risk AI applications, and user notification requirements that presuppose effective communication across diverse literacy levels. The transparency perception gap documented in the PEA-AI research directly illuminates the most common EU AI Act compliance failure mode: organizations that believe their disclosure documents are adequate but whose end users demonstrably lack comprehension. For Taiwanese exporters, technology firms, and multinational subsidiaries operating in EU markets, this gap represents a significant and often unrecognized compliance liability.

Taiwan's AI Basic Act, enacted by the Legislative Yuan and in force since August 2024, establishes foundational principles for AI governance in Taiwan. Articles 7 and 9 of the Act specifically require that AI applications protect citizens' privacy rights, establish transparent accountability mechanisms, and promote digital inclusion across diverse population groups. The PEA-AI model's youth-centered adaptive governance framework resonates directly with the Act's emphasis on inclusivity and human-centered AI development. Taiwan enterprises subject to this legislation must demonstrate not only that they have privacy policies in place, but that those policies are meaningfully accessible and understandable to the populations they serve.

The convergent demands of these three frameworks — ISO 42001, the EU AI Act, and Taiwan's AI Basic Act — create a clear imperative for Taiwanese enterprises: stakeholder-driven AI privacy governance is no longer optional governance good practice. It is a verifiable compliance requirement with documentary and audit trail expectations attached.

How Winners Consulting Services Co. Ltd. Translates Research Insights into Governance Action

積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) provides end-to-end AI governance consulting services for Taiwanese enterprises, from initial gap assessment through ISO 42001 certification and ongoing compliance monitoring. Drawing on the PEA-AI model's research findings, our consulting methodology incorporates the following specific action pathways:

  1. Stakeholder Privacy Expectation Mapping (ISO 42001 Clause 4.2 Alignment): Using the PEA-AI model's six-dimension analytical framework as a structured methodology, we systematically identify all stakeholder groups relevant to your AI applications — including internal employees, customers, supply chain partners, and regulatory bodies — and assess their differentiated expectations regarding data ownership, transparency, and trust. The output is a stakeholder privacy risk matrix that directly satisfies ISO 42001 Clause 4.2 documentation requirements and provides the evidential foundation for your AI management system certification application.
  2. Transparency Gap Diagnosis and User Notification Mechanism Design (EU AI Act Article 13 Alignment): We conduct structured assessments of your existing AI applications' transparency mechanisms, measuring the gap between organizational disclosure intentions and actual end-user comprehension across different literacy levels and stakeholder groups. Based on this diagnosis, we design EU AI Act-compliant user notification systems and privacy communication frameworks that ensure your AI governance documentation satisfies both legal review and practical user understanding criteria — the dual test that many organizations currently fail.
  3. Dynamic Multi-Stakeholder AI Privacy Governance Structure (Taiwan AI Basic Act Articles 7 & 9 Alignment): We assist enterprises in designing AI Privacy Governance Committees or cross-functional consultation mechanisms that embed legal, security, business, and customer service perspectives into a regular AI privacy policy review cycle. This institutionalizes the "dynamic negotiation" governance philosophy of the PEA-AI model, ensuring your AI governance framework evolves alongside your AI systems rather than becoming obsolete after its initial design.
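The transparency-gap diagnosis in pathway 2 can be pictured with a short sketch. The gap metric (disclosure coverage minus measured user comprehension, both on a 0–1 scale), the 0.25 flag threshold, and the survey figures are hypothetical assumptions, not values from the paper or any regulation.

```python
def transparency_gap(disclosed: float, understood: float) -> float:
    """Gap between what the organization discloses (coverage, 0-1)
    and what users demonstrably comprehend (test score, 0-1)."""
    return max(0.0, disclosed - understood)

def flag_groups(measurements: dict[str, tuple[float, float]],
                threshold: float = 0.25) -> list[str]:
    """Return stakeholder groups whose transparency gap exceeds the threshold."""
    return [group for group, (disclosed, understood) in measurements.items()
            if transparency_gap(disclosed, understood) > threshold]

# Hypothetical survey results: (disclosure coverage, user comprehension).
survey = {
    "young digital citizens": (0.90, 0.45),  # high disclosure, low comprehension
    "parents and educators": (0.90, 0.70),
    "AI professionals": (0.90, 0.85),
}
print(flag_groups(survey))
```

The point of the sketch is the dual test described above: identical disclosure can yield very different comprehension across groups, so the group with the widest gap, not the disclosure document itself, determines where remediation effort goes.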

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant management system within 90 days.

Apply for Free Governance Diagnostic →

Frequently Asked Questions

Our company already has a privacy policy. Why do we need a separate stakeholder-driven AI privacy governance process?
A static privacy policy document is insufficient for AI governance compliance. The PEA-AI research demonstrates that even organizations with formal privacy documentation face fundamental comprehension gaps across stakeholder groups — users do not understand what has been disclosed, and different stakeholder groups hold incompatible expectations that a single document cannot resolve. ISO 42001:2023 Clause 4.2 explicitly requires ongoing identification and response to stakeholder needs, which goes significantly beyond one-time policy publication. AI systems change continuously, and governance frameworks must include dynamic review mechanisms that respond to evolving stakeholder expectations. A traditional privacy policy addresses legal liability; a PEA-AI-aligned governance framework addresses actual governance risk.
Which EU AI Act requirements are Taiwanese companies most likely to fail in a compliance audit?
Based on Winners Consulting Services Co. Ltd.'s diagnostic experience, the most commonly deficient areas are Article 13 transparency and user notification obligations, and Article 9 risk management system documentation. Taiwanese companies frequently assume that technical privacy measures and legal privacy policies constitute sufficient transparency, without testing whether end users actually comprehend the AI system's data practices. The transparency perception gap documented in the PEA-AI research is particularly acute in B2C contexts — e-commerce platforms, digital health applications, and customer service AI deployments — where user literacy varies widely. We recommend user comprehension testing as a mandatory step in EU AI Act compliance preparation, not merely legal text review.
What specific documentation is required for ISO 42001 certification in relation to AI privacy governance?
ISO 42001:2023 requires the following documentation with direct AI privacy governance relevance: Clause 4.2 Interested Parties Analysis (documenting stakeholder privacy expectations across identified groups), Clause 6.1 AI Risk Assessment Report (including privacy risk dimensions), Clause 8.4 AI System Impact Assessment Records, and Clause 9.1 Monitoring and Measurement Procedures. The PEA-AI model's six-dimension framework — data ownership, trust, transparency, mediation channels, educational support, and risk-benefit perception — provides a structured input methodology for Clause 4.2 documentation. EU AI Act Fundamental Rights Impact Assessments (FRIA) can be integrated with ISO 42001 risk assessment documentation to eliminate redundant work. Winners Consulting provides complete certification-ready document templates and guided consultation throughout the process.
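As an illustration of how the clause-to-document mapping above might be tracked during certification preparation, here is a minimal checklist sketch. The clause numbers and document names come from the answer above; the completeness-check helper and example status are our own illustrative assumptions.

```python
# Evidence documents named above, keyed by ISO 42001:2023 clause number.
REQUIRED_EVIDENCE = {
    "4.2": "Interested Parties Analysis (stakeholder privacy expectations)",
    "6.1": "AI Risk Assessment Report (incl. privacy risk dimensions)",
    "8.4": "AI System Impact Assessment Records",
    "9.1": "Monitoring and Measurement Procedures",
}

def missing_evidence(on_file: set[str]) -> list[str]:
    """List clause numbers that still lack documented evidence."""
    return sorted(clause for clause in REQUIRED_EVIDENCE if clause not in on_file)

# Example: an organization partway through certification preparation.
print(missing_evidence({"4.2", "6.1"}))
```

A tracked checklist of this shape also makes it easy to note where an EU AI Act FRIA has been folded into the Clause 6.1 risk assessment, avoiding the duplicated documentation mentioned above.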



Want to apply these insights to your enterprise?

Get a Free Assessment