
Insight: Towards the Socio-Algorithmic Construction of Fairness: The Case of Automatic Price-Surging in Ride-Hailing


About the Authors and This Research

Mateusz Dolata is a researcher at the University of Zurich, specializing in the intersection of algorithmic systems and social interaction within information management. With an h-index of 16 and 1,321 cumulative citations, his work carries significant weight in the fields of information systems and AI ethics. Co-author G. Schwabe, also affiliated with the University of Zurich, contributes expertise in digital interaction and information systems design, with 210 cumulative citations.

The study is notable for its methodological grounding in a real-world crisis event: on April 12, 2022, following the Brooklyn subway shooting in New York City, the dynamic pricing algorithms of ride-hailing platforms Uber and Lyft surged fares to approximately five times the normal rate. The public backlash was immediate and substantial. By systematically analyzing the public discourse that followed—across media, social platforms, and public statements—Dolata and Schwabe constructed a theoretical account of how algorithms participate in the social construction of moral norms.

A note on methodological scope: the study's dataset is primarily drawn from English-language, US-centric urban discourse. Taiwanese enterprises should treat this framework as a conceptual starting point rather than a directly transferable prescription. Cultural differences in how fairness is understood, as well as the distinct labor and regulatory context of Taiwan's platform economy, require localized adaptation of the theoretical model when applying it to AI governance practice.

Algorithms Are Not Neutral Executors—They Are Active Co-Constructors of Fairness

The paper's central theoretical contribution is the concept of "socio-algorithmic construction": the process by which societal notions of fairness are no longer shaped solely by human deliberation, but co-produced through continuous interaction between algorithms and social actors. The Brooklyn surge pricing incident provides a vivid empirical case.

Key Finding 1: Algorithms Set the Moral Agenda Without Being Directly Addressed

The research reveals a counterintuitive dynamic: even when the public discourse did not explicitly engage with the technical logic of the algorithm, the algorithm's behavior—a fivefold fare increase during a mass casualty event—functioned as the trigger and frame for an intense moral debate. The algorithm initiated the exchange, shaped the expectations of participants, and became the vehicle through which different groups expressed solidarity or outrage. In other words, the algorithm was not merely discussed; it was a participant in the construction of what counts as fair behavior by a corporation during a crisis.

Key Finding 2: Social Forces Can and Do Adjust Algorithmic Logic

The study also documents the reverse dynamic: social pressure compelled both Uber and Lyft to issue refunds and publicly commit to suspending dynamic pricing during declared emergencies. This establishes the bidirectional nature of socio-algorithmic construction—algorithms shape social norms, and social norms reshape algorithmic design. For enterprises, this finding is a governance imperative: AI systems, particularly those involved in pricing, resource allocation, or eligibility decisions, must be designed with social feedback loops built in from the outset. Treating fairness as a one-time technical specification is insufficient; it must be understood as a continuous, monitored, and adjustable social commitment.
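A built-in social feedback constraint of the kind described above can be made concrete in code. The sketch below is purely illustrative: the names, thresholds, and the idea of an external emergency-alert feed are assumptions for this example, not any platform's actual API. It shows the general pattern of layering a governance-chosen constraint on top of a raw pricing algorithm, in the spirit of the post-incident commitments the study documents.

```python
from dataclasses import dataclass

# Hypothetical sketch: a pricing guard that bounds surge multipliers and
# suspends dynamic pricing during declared emergencies. All names and
# thresholds are illustrative assumptions, not any platform's real API.

EMERGENCY_SURGE_CAP = 1.0   # no surge while an emergency is declared
NORMAL_SURGE_CAP = 3.0      # policy ceiling chosen by governance review

@dataclass
class PricingContext:
    demand_multiplier: float   # raw multiplier proposed by the algorithm
    emergency_declared: bool   # fed by an external emergency-alert feed

def effective_multiplier(ctx: PricingContext) -> float:
    """Apply the social-commitment constraint on top of the raw algorithm."""
    cap = EMERGENCY_SURGE_CAP if ctx.emergency_declared else NORMAL_SURGE_CAP
    return min(ctx.demand_multiplier, cap)

# A raw 5x surge is capped to 1x during a declared emergency, 3x otherwise.
print(effective_multiplier(PricingContext(5.0, emergency_declared=True)))   # 1.0
print(effective_multiplier(PricingContext(5.0, emergency_declared=False)))  # 3.0
```

The design point is that the cap values are not a one-time technical specification: they are the adjustable surface through which social feedback (regulatory, reputational, or contractual) reshapes algorithmic behavior.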

Implications for Taiwan's AI Governance Practice

The theoretical model advanced by Dolata and Schwabe maps directly onto the emerging regulatory requirements that Taiwanese enterprises face across three key frameworks.

Under ISO 42001, Clause 6.1.2 requires enterprises to conduct ongoing risk assessments for AI systems, explicitly including assessments of differential impacts on distinct stakeholder groups. The study's finding that fairness is dynamically constructed—not statically defined—reinforces why this clause demands continuous, not one-time, evaluation. Enterprises that conduct a single pre-deployment fairness review and consider their obligations fulfilled are misreading both the standard and the social reality.

The EU AI Act, under Article 9, mandates comprehensive risk management systems for high-risk AI applications, with explicit requirements for evaluating discriminatory impacts. Dynamic pricing algorithms used in services that affect access to essential transportation—particularly during emergencies—are strong candidates for high-risk classification. Taiwanese enterprises with EU market exposure must ensure their algorithm documentation, impact assessments, and human oversight mechanisms meet this standard.

Taiwan's own AI Basic Act draft enshrines principles of fairness, transparency, and accountability in AI deployment. The socio-algorithmic framework from this study offers a practical interpretive lens: fairness under the AI Basic Act cannot be satisfied by an algorithm that produces technically optimal outcomes if those outcomes generate sustained public perception of injustice. Enterprises need governance mechanisms that can detect, assess, and respond to such perceptions in real time.

The relevance of Algorithmic Impact Assessments (AIAs) is equally clear. The Brooklyn incident illustrates precisely the kind of societal harm that AIAs are designed to anticipate. Taiwanese enterprises that proactively integrate AIA processes into their AI development lifecycle—rather than waiting for regulatory mandates—will be better positioned to avoid both the reputational damage and the compliance costs that reactive adjustments entail.

How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Build Dynamic Fairness Governance

積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.) provides structured support for Taiwanese enterprises seeking to translate the insights of this research into actionable governance mechanisms that satisfy ISO 42001, EU AI Act, and Taiwan AI Basic Act requirements.

  1. Algorithmic Impact Assessment Framework Design: We work with enterprises to develop structured AIA processes for algorithms involved in pricing, recommendations, and eligibility decisions. This includes defining fairness indicators relevant to Taiwan's social context, establishing documentation protocols aligned with ISO 42001 Clause 6.1.2, and ensuring audit-readiness for EU AI Act compliance reviews.
  2. Dynamic Fairness Monitoring System Implementation: Moving beyond static pre-deployment reviews, we help enterprises build continuous monitoring systems that track fairness indicators over time, integrate social signal detection (including complaint analysis and media monitoring), and establish escalation protocols for algorithm behavior anomalies—operationalizing the bidirectional socio-algorithmic dynamic described in the research.
  3. ISO 42001 Certification Pathway: We provide end-to-end certification support over a 7 to 12 month engagement, from gap analysis and policy development through internal audit training and third-party certification preparation, ensuring simultaneous alignment with EU AI Act requirements and Taiwan's regulatory landscape.
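The continuous-monitoring idea in item 2 can be sketched in a few lines: track a fairness indicator (here, a surge multiplier) against a rolling baseline and escalate when it deviates sharply. The window size, threshold, and indicator choice below are illustrative assumptions, not a prescribed methodology.

```python
import statistics

# Illustrative fairness-monitoring loop: compare a new observation of a
# fairness indicator against a rolling window of recent values and flag
# it for escalation when it is an extreme outlier. Window size and the
# z-score threshold are assumptions chosen for this example.

WINDOW = 12          # number of recent observations kept as the baseline
Z_THRESHOLD = 3.0    # escalate when > 3 standard deviations above the mean

def check_and_escalate(history: list[float], new_value: float) -> bool:
    """Return True (escalate for human review) if new_value is anomalous."""
    window = history[-WINDOW:]
    if len(window) < 2:
        return False              # not enough baseline data to judge
    mean = statistics.fmean(window)
    stdev = statistics.stdev(window)
    if stdev == 0:
        return new_value != mean
    return (new_value - mean) / stdev > Z_THRESHOLD

baseline = [1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 1.1]
print(check_and_escalate(baseline, 5.0))   # True:  a 5x surge triggers review
print(check_and_escalate(baseline, 1.2))   # False: within normal variation
```

In practice the escalation path (who reviews, within what timeframe, with what authority to adjust the algorithm) matters as much as the statistical trigger; that is what operationalizes the bidirectional dynamic the research describes.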

Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Diagnostic to help Taiwanese enterprises establish an ISO 42001-aligned AI management system within 7 to 12 months.

Explore AI Governance Services → Request Your Free Diagnostic →

Frequently Asked Questions

Does dynamic pricing by platform algorithms count as a high-risk AI system under EU AI Act and ISO 42001?
Dynamic pricing algorithms that affect access to essential services—particularly transportation during emergencies—are strong candidates for high-risk classification under EU AI Act Article 6 and Annex III. Under ISO 42001, the risk level depends on the severity and reversibility of impacts on affected stakeholder groups. The Brooklyn case study in this research demonstrates that a fivefold surge during a mass casualty event produced documented social harm. Taiwanese enterprises operating ride-hailing, food delivery, or logistics platforms should conduct a formal risk classification review, documenting the assessment methodology and outcomes in line with ISO 42001 Clause 6.1.2. Even systems not formally classified as high-risk benefit from structured fairness monitoring given the reputational exposure the research illustrates.
What are the most common compliance gaps Taiwanese enterprises face when implementing ISO 42001's fairness requirements?
Three gaps consistently appear in our diagnostic work with Taiwanese enterprises. First, static assessment thinking: enterprises conduct a single fairness review at deployment but lack the ongoing monitoring mechanisms that ISO 42001's continuous risk assessment requirement demands. Second, context mismatch: enterprises apply Western fairness metrics without localizing them to Taiwan's social and cultural context, producing assessments that are technically compliant but practically incomplete. Third, siloed governance: fairness assessment requires coordinated input from legal, data science, and business operations teams, but most enterprises lack the cross-functional governance structure to operationalize this. The EU AI Act adds further pressure through its requirements for explainable decision-making documentation on high-risk systems, a technical capability many Taiwanese enterprises are still building.
What does ISO 42001 certification require, and how long does implementation take for a Taiwanese enterprise?
ISO 42001 certification requires enterprises to establish an AI Management System (AIMS) covering: AI governance policy and objectives, risk identification and classification processes (including bias and fairness assessment), stakeholder impact evaluation, internal audit mechanisms, and continual improvement procedures. For Taiwanese enterprises with existing ISO 27001 or ISO 9001 management systems, the implementation timeline is typically 7 to 9 months. Enterprises building from scratch should plan for 10 to 12 months. Key milestones: Months 1–2 for current-state assessment and gap analysis; Months 3–6 for policy documentation and governance structure design; Months 7–9 for internal audit and management review; Months 10–12 for third-party certification audit. Simultaneous alignment with EU AI Act and Taiwan's AI Basic Act requirements can be efficiently achieved within a single ISO 42001 implementation project.
How should Taiwanese enterprises assess the cost-benefit case for AI governance investment?
The financial case for AI governance investment has two dimensions. On the risk side, the EU AI Act's penalty regime is severe: under Article 99, non-compliance with high-risk AI system obligations carries fines of up to €15 million or 3% of global annual turnover (whichever is higher), and prohibited AI practices up to €35 million or 7%. Reputational costs, as the Brooklyn case illustrates, can be equally significant and harder to quantify. On the competitive side, ISO 42001 certification is becoming a supply chain requirement for EU and US enterprise clients, giving early movers a measurable advantage in procurement and partnership evaluations. For a mid-sized Taiwanese enterprise (200–500 employees), a structured ISO 42001 implementation with external advisory support typically requires 3 to 6 months of dedicated internal resource alongside consulting fees—an investment that most enterprises recover within 18 to 24 months through risk reduction, enhanced client confidence, and expanded market access.
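The "whichever is higher" structure of the penalty rule can be sketched as a one-line calculation. The default figures below assume the enacted Article 99 tier for high-risk obligations (€15 million or 3% of global annual turnover); the example turnover is invented for illustration.

```python
# Back-of-envelope sketch of the EU AI Act's "whichever is higher" penalty
# rule. Default figures assume the Article 99 tier for high-risk system
# obligations (EUR 15M or 3% of global annual turnover); actual exposure
# depends on the specific infringement and tier.

def max_penalty_eur(global_annual_turnover_eur: float,
                    fixed_cap_eur: float = 15_000_000,
                    turnover_pct: float = 0.03) -> float:
    """Whichever is higher: the fixed cap or the turnover-based amount."""
    return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)

# For a firm with EUR 800M turnover, 3% (EUR 24M) exceeds the EUR 15M cap.
print(max_penalty_eur(800_000_000))   # 24000000.0
# For a firm with EUR 100M turnover, the EUR 15M fixed cap dominates.
print(max_penalty_eur(100_000_000))   # 15000000.0
```

Even this crude calculation makes the scaling point: for larger enterprises, the turnover-based component dominates, so exposure grows with revenue rather than being capped.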
Why engage Winners Consulting Services Co. Ltd. for AI governance advisory?
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of Taiwan's few advisory firms combining ISO 42001 implementation expertise, EU AI Act compliance advisory, and Taiwan regulatory interpretation in a single integrated practice. Our team continuously monitors developments from NIST, ISO technical committees, and the EU AI Office, ensuring that our advisory recommendations are current with the latest regulatory and academic developments—including research like the Dolata and Schwabe study reviewed here. Unlike generalist management consultancies, we focus exclusively on AI governance, bringing proven methodology rather than adapted frameworks. We provide full-cycle support from policy design and risk assessment methodology through internal audit training and certification preparation, helping Taiwanese enterprises achieve ISO 42001 certification within 7 to 12 months while maintaining dual compliance with EU AI Act and Taiwan AI Basic Act requirements.
---

積穗科研股份有限公司 (Winners Consulting Services Co. Ltd.), as a leading AI governance advisory firm in Taiwan, presents a key insight from an important academic study published in 2023: when an algorithm sets ride-hailing fares during a crisis, it is not merely computing numbers—it is actively participating in the formation of the social concept of "fairness." This finding, supported by a study cited nine times in peer-reviewed work, carries direct and significant implications for Taiwanese enterprises designing AI governance frameworks under ISO 42001, the EU AI Act, and Taiwan's AI Basic Act.

Paper source: Towards the Socio-Algorithmic Construction of Fairness: The Case of Automatic Price-Surging in Ride-Hailing (Mateusz Dolata, G. Schwabe, arXiv, 2023)
Original link: https://doi.org/10.1080/10447318.2023.2210887



Related Services & Further Reading

Want to apply these insights to your enterprise?

Get a Free Assessment