
Insight: Co-Producing AI: Toward an Augmented, Participatory Lifecycle

=============================================================

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, draws your attention to a critical 2025 finding: ethical guidelines and algorithmic fairness tools alone cannot prevent AI systems from disproportionately harming marginalized communities. Instead, the entire AI production pipeline must be fundamentally re-architected around co-production and diversity, equity, and inclusion (DEI), with five interconnected lifecycle phases that embed participatory governance from the moment a problem is first framed. For Taiwan enterprises pursuing ISO 42001 certification or EU AI Act compliance, this is not an academic debate — it is the next concrete compliance threshold.

Paper Citation: Co-Producing AI: Toward an Augmented, Participatory Lifecycle (Rashid Mushkani, Hugo Berard, Toumadher Ammar, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society — AIES, 2025)
Original Paper: https://doi.org/10.1609/aies.v8i2.36674

Read Original Paper →

About the Authors and This Research

This paper is led by Rashid A. Mushkani, whose h-index of 7 and 124 cumulative citations signal a growing, substantive presence in the fields of participatory AI, design justice, and algorithmic fairness. Co-authors Hugo Berard and Toumadher Ammar contribute multidisciplinary perspectives that push the research beyond pure technical analysis into the realm of organizational learning theory and social science.

The paper was published in 2025 at AIES — the AAAI/ACM Conference on AI, Ethics, and Society — widely regarded as one of the most rigorous peer-reviewed venues for AI governance and social impact research. The empirical grounding of this work is particularly notable: the proposed framework is not derived from theoretical modeling alone, but is informed by four real-world multidisciplinary workshops, lending the five-phase lifecycle a practical credibility that purely abstract frameworks lack. The paper has already received 1 citation since its 2025 publication, marking the beginning of its influence on academic discourse around participatory AI governance.

The Core Insight: Fixing AI Bias Requires Rebuilding the Pipeline, Not Patching It

The central argument of this paper is as clear as it is challenging: the reason AI systems continue to harm culturally marginalized groups — despite years of ethical guidelines, algorithmic fairness research, and responsible AI principles — is that the production pipeline itself is fundamentally exclusionary. You cannot solve a structural inclusion problem with a technical patch applied at the end of the development process.

Finding 1: The Existing AI Lifecycle Systematically Excludes Those Most Affected

Drawing on Design Justice theory — a framework that centers the voices of those most impacted by design decisions — the authors argue that current AI development concentrates authority in the hands of technical teams and a narrow set of organizational decision-makers. The people whose lives are most directly shaped by algorithmic outputs are consistently absent from the rooms where design choices are made. This exclusion is not incidental; it is structural. And because it is structural, it produces harms that are also structural — biases that persist even after post-hoc fairness corrections are applied. For Taiwan enterprises, this finding is directly relevant to ISO 42001's Clause 6.4, which mandates the identification of stakeholder needs across the AI system lifecycle. An AI management system that cannot document who was consulted, how, and at which stage of development, has a governance gap that no technical audit can close.

Finding 2: The Augmented AI Lifecycle — Five Phases That Change Everything

The paper's primary contribution is a concrete, actionable redesign of the AI production pipeline. The Augmented AI Lifecycle consists of five interconnected co-production phases:
Co-Framing: Collaboratively defining the problem scope and ethical boundaries with diverse stakeholders before any technical work begins.
Co-Design: Integrating multidisciplinary perspectives — including social scientists, affected community representatives, ethicists, and domain experts — into the system architecture design process.
Co-Implementation: Extending participation beyond technical teams to include non-technical stakeholders in key implementation decisions.
Co-Deployment: Ensuring that multiple perspectives review the system's contextual fit and potential impact before it goes live.
Co-Maintenance: Establishing ongoing, distributed monitoring and iterative knowledge exchange mechanisms that continuously surface and address emerging harms.
These five phases are grounded in Expansive Learning Theory, which conceptualizes knowledge as something that must flow iteratively across different roles and disciplines — rather than being handed down from technical authorities to passive recipients. The framework was validated through four multidisciplinary workshops, giving it an empirical foundation that distinguishes it from purely theoretical governance models.
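In practice, the five phases can be operationalized as documented governance gates that an AI project must clear in order. The following minimal sketch is our own illustration, not the paper's method: the phase names come from the paper, but the record fields (participants, evidence) and the pass criterion are illustrative assumptions.

```python
from dataclasses import dataclass, field

# The five co-production phases of the Augmented AI Lifecycle, in order.
PHASES = ["Co-Framing", "Co-Design", "Co-Implementation",
          "Co-Deployment", "Co-Maintenance"]

@dataclass
class PhaseGate:
    """A documented participation record for one lifecycle phase."""
    phase: str
    stakeholders: list[str] = field(default_factory=list)  # who took part
    evidence: list[str] = field(default_factory=list)      # e.g. workshop minutes

    def passes(self) -> bool:
        # Illustrative criterion: at least one documented stakeholder
        # and at least one piece of participation evidence.
        return bool(self.stakeholders) and bool(self.evidence)

def lifecycle_ready(gates: list[PhaseGate]) -> list[str]:
    """Return the phases still missing documented participation."""
    done = {g.phase for g in gates if g.passes()}
    return [p for p in PHASES if p not in done]

# Example: only Co-Framing has been documented so far, so four phases remain.
gates = [PhaseGate("Co-Framing",
                   stakeholders=["community representative", "ethicist"],
                   evidence=["workshop minutes, March 2025"])]
print(lifecycle_ready(gates))
```

A record structure like this also doubles as the documented evidence of multi-stakeholder participation that auditors look for.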

Finding 3: Distributed Authority Is the Structural Prerequisite for Sustainable AI Governance

A recurring theme across all four workshops was the concept of distributed authority — the recognition that sustainable AI governance cannot be delegated to a single department, role, or team. Whether it is the IT department, the legal team, or the compliance function, no single organizational unit has the full context, the affected community relationships, or the multidisciplinary knowledge required to govern AI responsibly on its own. This directly challenges the governance model that many Taiwan enterprises currently operate: one where AI compliance is treated as a checkbox exercise owned by a single function. The paper calls for institutionalized cross-functional, cross-disciplinary co-responsibility mechanisms — a structural requirement that aligns closely with ISO 42001's emphasis on top management accountability and cross-organizational AI governance roles.

What This Means for Taiwan Enterprise AI Governance

The implications of this research for Taiwan enterprises are concrete and time-sensitive. Participatory governance is no longer a soft, values-based aspiration — it is rapidly becoming a hard compliance requirement embedded in the frameworks that govern AI deployment globally.

ISO 42001:2023, the world's first international standard for AI management systems, explicitly requires organizations to identify and document stakeholder needs (Clause 6.4) and to manage risks across the full AI system lifecycle (Clause 8.4). The paper's five-phase co-production lifecycle maps directly onto these requirements: Co-Framing addresses Clause 6.4's stakeholder identification mandate; Co-Maintenance addresses Clause 8.4's continuous lifecycle risk management requirement. Taiwan enterprises pursuing ISO 42001 certification that cannot produce documented evidence of multi-stakeholder participation will face audit gaps that cannot be resolved after the fact.

The EU AI Act, which entered into force in 2024 with full compliance requirements for high-risk AI systems taking effect in 2026, embeds participatory governance obligations throughout its risk management framework. Article 9 requires systematic risk management systems that account for differential impacts on different user groups. Article 13 mandates transparency obligations that implicitly require organizations to have engaged with affected communities in order to understand what disclosure is meaningful to them. Taiwan enterprises with EU market exposure — including manufacturers, financial service providers, and technology exporters — should treat the paper's co-production framework as a practical design template for EU AI Act readiness.

Taiwan's AI Basic Act (currently advancing through the Legislative Yuan as of 2024-2025) similarly emphasizes human rights protection and social impact assessment as foundational principles for AI governance. The legislation's spirit aligns closely with the Design Justice framework that underpins this paper, and regulatory guidance under the Act is expected to include more specific requirements for stakeholder consultation in high-impact AI deployments in the 2025-2026 timeframe. Taiwan enterprises that build participatory governance capabilities now will be structurally ahead of this regulatory curve.

How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Act on These Findings

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwan enterprises build AI management systems that meet the requirements of ISO 42001 and the EU AI Act, conduct AI risk classification assessments, and ensure that AI applications align with Taiwan's AI Basic Act. In response to the participatory governance imperatives identified in this research, we recommend the following three concrete actions:

  1. Conduct an AI Stakeholder Mapping Exercise: For each active AI project, systematically identify the internal and external groups who are affected by algorithmic outputs, document their needs and concerns, and establish a formal record of how their input was incorporated into design and deployment decisions. This directly satisfies ISO 42001 Clause 6.4 and builds the evidentiary foundation needed for EU AI Act Article 9 compliance.
  2. Redesign Your AI Development Lifecycle Around Co-Production Checkpoints: Integrate governance checkpoints aligned with the paper's five co-production phases into your existing AI development process — whether agile, waterfall, or hybrid. Embed non-technical stakeholder review at Co-Framing and Co-Design stages, and establish Co-Maintenance feedback loops that continuously surface emerging risks. This does not require replacing your existing SDLC; it requires augmenting it with structured participatory governance gates.
  3. Add Inclusion Dimensions to Your AI Risk Classification Framework: Expand your AI risk assessment tools to explicitly score each AI system on two dimensions currently absent from most frameworks: (a) diversity of affected stakeholder groups, and (b) completeness of participation mechanisms. This ensures your risk classification framework addresses not only technical failure risks but also social equity risks — fully aligning with Taiwan's AI Basic Act human rights protection requirements and EU AI Act differential impact assessment obligations.
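As an illustration of action 3, the two inclusion dimensions can be folded into an existing risk score. In the minimal sketch below, the 0-to-1 scales, equal weighting, and inflation formula are our own illustrative assumptions, not requirements of ISO 42001, the EU AI Act, or the paper.

```python
def inclusion_adjusted_risk(technical_risk: float,
                            groups_engaged: int,
                            groups_identified: int,
                            phases_covered: int,
                            total_phases: int = 5) -> float:
    """Inflate a technical risk score when participation coverage is low.

    Dimension (a): diversity of affected stakeholder groups actually engaged.
    Dimension (b): completeness of participation across lifecycle phases.
    """
    if groups_identified <= 0 or total_phases <= 0:
        raise ValueError("identified groups and total phases must be positive")
    diversity = groups_engaged / groups_identified      # dimension (a), 0..1
    completeness = phases_covered / total_phases        # dimension (b), 0..1
    # The participation gap scales the residual risk upward.
    gap = 1.0 - (diversity + completeness) / 2.0
    return round(technical_risk * (1.0 + gap), 2)

# Example: moderate technical risk (0.4), 2 of 4 groups engaged,
# 1 of 5 lifecycle phases covered.
print(inclusion_adjusted_risk(0.4, 2, 4, 1))  # → 0.66
```

The design choice here is that full participation coverage leaves the technical risk score unchanged, while missing coverage can at most double it — keeping the adjusted score interpretable alongside existing classifications.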

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Mechanism Diagnostic, helping Taiwan enterprises establish an ISO 42001-aligned AI management system within 90 days.

Request Your Free Diagnostic →

Frequently Asked Questions

What does "participatory AI governance" actually require enterprises to change in practice?
Participatory AI governance requires enterprises to shift stakeholder engagement from a post-deployment public relations function to a pre-design structural requirement. In practice, this means three concrete changes: first, establishing a stakeholder identification process at project initiation (not after go-live); second, building non-technical representation into design review processes; and third, creating systematic feedback collection mechanisms in the maintenance phase. These changes do not require dismantling existing development processes — they require adding governance checkpoints at defined lifecycle stages. Most Taiwan enterprises can establish the basic structural framework within the first 90 days of an ISO 42001 implementation engagement, with continuous refinement thereafter.
Does the EU AI Act apply to Taiwan enterprises that do not have offices in Europe?
Yes — the EU AI Act applies on the basis of where AI system outputs are used, not where the developing organization is headquartered. If a Taiwan enterprise's AI system produces outputs that are accessed or relied upon by users or customers in EU member states, the Act's obligations may apply, particularly for high-risk AI system categories. This includes not only software products but also AI-powered services accessed remotely. Taiwan exporters, financial institutions serving international clients, and technology service providers should conduct an EU AI Act applicability assessment as a priority. The participatory co-production framework proposed in this paper directly addresses the differential impact assessment and risk management system requirements of EU AI Act Articles 9 and 13.
How does this paper's framework relate to ISO 42001 certification requirements?
ISO 42001:2023 is the world's first international standard for AI management systems, and its core requirements align closely with the co-production framework proposed in this paper. Clause 6.4 requires organizations to identify the needs and expectations of interested parties across the AI system lifecycle — directly corresponding to the Co-Framing and Co-Design phases. Clause 8.4 requires ongoing risk management across the full system lifecycle — directly corresponding to the Co-Maintenance phase. In effect, organizations that rigorously implement the paper's five-phase co-production lifecycle will generate the documented evidence — stakeholder registers, participation records, feedback loop outputs — that ISO 42001 auditors look for.



Related Services & Further Reading

Want to apply these insights to your enterprise?

Get a Free Assessment