Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, identifies a critical inflection point for enterprise leaders: the emergence of Embodied AI systems operating autonomously in high-stakes medical environments is no longer a future scenario—it is a present governance challenge that demands immediate action under ISO 42001, the EU AI Act, and Taiwan's AI Governance Framework. A landmark 2025 study published on arXiv demonstrates precisely how autonomous AI agents—in this case, UAVs managing real-time medical IoT edge computing—create exactly the kind of self-directed, high-impact decision loops that regulators worldwide are racing to govern.
Paper Citation: Embodied AI-Enhanced IoMT Edge Computing: UAV Trajectory Optimization and Task Offloading with Mobility Prediction (Siqi Mu, Shuo Wen, Yang Lu, arXiv — AI Governance & Ethics, 2025)
Original Paper: http://arxiv.org/abs/2512.20902v1
About the Authors and This Research
This paper is co-authored by three researchers whose combined expertise spans wireless communications, machine learning, and medical IoT systems. Siqi Mu holds an h-index of 3 with 51 cumulative citations—a rapidly rising voice in this emerging interdisciplinary domain. Co-author Shuo Wen brings substantially greater academic weight to the collaboration, with an h-index of 12 and 482 cumulative citations, reflecting an established track record in edge computing and wireless communication systems that has earned recognition from international journals and conferences alike. Yang Lu rounds out the team with focused contributions in IoMT and AI system integration.
Critically, this paper is classified under arXiv's AI Governance & Ethics category—a deliberate editorial choice that signals the academic community's recognition that Embodied AI systems are not merely engineering problems. They are governance problems. This classification reinforces the view held by Winners Consulting Services Co. Ltd. that technical AI deployments and governance frameworks must be co-designed, not treated as sequential afterthoughts.
Autonomous AI in Medical Environments: The Research That Changes the Governance Conversation
The research addresses one of the most technically and ethically complex challenges in modern AI deployment: how should an autonomous AI agent—specifically, a UAV serving as an Embodied AI platform—allocate computing resources, plan flight trajectories, and prioritize medical task completion in real time, while operating under strict energy constraints and serving users with dynamically changing task criticality levels?
Core Finding 1: Hierarchical Multi-Scale Transformer Achieves Superior Mobility Prediction
The research team developed a novel Hierarchical Multi-Scale Transformer-based user trajectory prediction model. By leveraging the UAV as an Embodied AI agent that captures historical movement traces of Wireless Body Area Network (WBAN) users—patients wearing medical monitoring devices—the model achieves prediction accuracy that significantly outperforms existing benchmark approaches. From an AI governance perspective, this finding is directly relevant to ISO 42001's requirement for organizations to assess the reliability and uncertainty bounds of AI system outputs. A trajectory prediction model that informs life-critical medical task allocation must be subject to rigorous accuracy monitoring, version control, and failure mode documentation—all of which are core elements of an ISO 42001-compliant AI Management System.
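The paper's Hierarchical Multi-Scale Transformer is not reproduced here, but the multi-scale intuition can be sketched in a few lines: attend over the movement trace at several temporal strides (fine and coarse views of the same history), then extrapolate from the blended context. All function names, the stride choices, and the attention-pooling shortcut below are our own illustration, not the authors' architecture.

```python
import math

def attention_pool(xs):
    """Pure-Python stand-in for one self-attention head over 2-D positions:
    each step attends to every other step via scaled dot-product weights."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    out = []
    for q in xs:
        scores = [dot(q, k) / math.sqrt(2) for k in xs]
        m = max(scores)                      # subtract max for stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append((sum(wi * v[0] for wi, v in zip(w, xs)) / z,
                    sum(wi * v[1] for wi, v in zip(w, xs)) / z))
    return out

def predict_next(trace, scales=(1, 2, 4)):
    """Hypothetical multi-scale predictor: pool the trace subsampled at
    several strides, then push the last position along the averaged
    displacement between it and each scale's attention context."""
    last = trace[-1]
    dx = dy = 0.0
    for s in scales:
        sub = trace[::s][-8:]                # coarse view at stride s
        ctx = attention_pool(sub)[-1]        # context for the latest step
        dx += (last[0] - ctx[0]) / len(scales)
        dy += (last[1] - ctx[1]) / len(scales)
    return (last[0] + dx, last[1] + dy)
```

For a patient walking in a straight line, the predicted point lands ahead of the last observed position, which is the qualitative behavior a governance audit would verify against documented accuracy bounds.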
Core Finding 2: Prediction-Enhanced Deep Reinforcement Learning Optimizes Dual Objectives Under Real-World Constraints
Building on the prediction model, the team designed a Prediction-Enhanced Deep Reinforcement Learning (DRL) algorithm that simultaneously minimizes weighted average task completion time across all WBAN users while respecting UAV energy consumption constraints. Validation using real-world movement traces—not merely simulated data—demonstrates the superiority of this approach over existing methods across multiple performance benchmarks. For AI governance practitioners, this dual-objective optimization represents a textbook case of what EU AI Act Article 9 (Risk Management System) requires organizations to document: when an AI system must balance competing objectives with real-world consequences, the logic governing those trade-offs must be transparent, auditable, and subject to human oversight mechanisms.
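The trade-off logic Article 9 asks organizations to document can be made concrete as a reward function. This is a minimal sketch under our own assumptions (the paper's actual formulation is not reproduced): completion time is penalized linearly, and the UAV's hard energy constraint is approximated as a steep penalty once the budget is exceeded.

```python
def reward(avg_completion_time, energy_used, energy_budget,
           time_weight=1.0, penalty=10.0):
    """Hypothetical DRL reward shaping for the dual objective: lower weighted
    completion time raises the reward, and exceeding the energy budget
    (a hard constraint, approximated here as a penalty term) lowers it
    sharply in proportion to the overrun."""
    r = -time_weight * avg_completion_time
    if energy_used > energy_budget:
        r -= penalty * (energy_used - energy_budget)
    return r
```

Writing the trade-off down this explicitly is exactly what makes it auditable: the weights `time_weight` and `penalty` (illustrative names) are the knobs a human overseer can inspect, justify, and change.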
Core Finding 3: Embodied AI as the Bridge Between Physical Environment and Digital Decision-Making
Perhaps the most governance-significant contribution of this research is its framing of the UAV not merely as a hardware platform, but as an Embodied AI Agent—a system that perceives, learns from, and acts upon its physical environment in a closed loop. This architecture, where the AI agent both collects the data used to train its prediction models and executes the decisions those models inform, creates what governance experts call an "autonomous feedback loop." EU AI Act Article 14 specifically addresses the human oversight requirements for AI systems with such characteristics, requiring that humans retain the ability to understand, monitor, and intervene in AI decision-making processes at meaningful points in the operational cycle.
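One way to operationalize Article 14's "intervene at meaningful points" requirement is an explicit escalation gate in the decision loop. The thresholds and field names below are our illustration, not language from the regulation: autonomous execution is allowed only for routine, high-confidence decisions, and everything else is routed to a human operator.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float      # model's self-reported confidence, 0.0-1.0
    criticality: str       # "routine" or "high"

def requires_human_review(d: Decision, confidence_floor: float = 0.9) -> bool:
    """Illustrative oversight gate: escalate any high-criticality decision,
    and any decision whose confidence falls below the floor."""
    return d.criticality == "high" or d.confidence < confidence_floor
```

In an embodied system such as the UAV described here, the gate would sit between the policy's chosen action and its physical execution, breaking the otherwise fully closed autonomous feedback loop.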
Implications for Taiwan's AI Governance Practice: Three Frameworks That Apply Right Now
This research has direct, actionable implications for Taiwan enterprises operating in—or planning to enter—the medical IoT, edge computing, or autonomous systems markets. Winners Consulting Services Co. Ltd. analyzes these implications through three regulatory lenses that every Taiwan enterprise leader should understand.
ISO 42001 (AI Management Systems Standard, published 2023): This international standard requires organizations to establish systematic processes for identifying, assessing, and managing AI risks. The Embodied AI system described in this research exemplifies the kind of "AI system with autonomous operation capability" that ISO 42001 Annex A specifically flags as requiring enhanced risk controls. Taiwan enterprises adopting similar architectures—whether in healthcare, logistics, or smart manufacturing—must establish documented risk assessment procedures, define acceptable performance boundaries, and implement ongoing monitoring mechanisms as part of their ISO 42001-compliant management system.
EU AI Act (entered into force August 2024, full enforcement from August 2026): Article 6 of the EU AI Act classifies AI systems used as medical devices or in critical medical infrastructure as High-Risk AI systems. This classification triggers a comprehensive set of obligations including technical documentation requirements, conformity assessment procedures, CE marking, post-market surveillance, and mandatory registration in the EU AI database. Taiwan companies exporting AI-enabled medical products to Europe must begin compliance preparation no later than the end of 2025 to meet the August 2026 enforcement deadline. The maximum penalty for non-compliance reaches 3% of global annual turnover, or €15 million, whichever is higher.
Taiwan AI Fundamental Act (draft legislation in progress, 2024): Taiwan's evolving AI governance legislation adopts a human-centric approach to AI regulation, requiring high-risk AI applications to undergo impact assessments and establish appeal and remedy mechanisms. Autonomous AI systems operating in medical environments—precisely the use case explored in this research—represent the highest priority category for regulation under this emerging framework. Taiwan enterprises should proactively align their AI governance practices with the principles of this legislation to avoid reactive compliance challenges as the regulatory landscape crystallizes.
How Winners Consulting Services Co. Ltd. Helps Taiwan Enterprises Govern Embodied AI
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) helps Taiwan enterprises build AI Management Systems compliant with ISO 42001 and EU AI Act requirements, conduct AI risk classification assessments, and ensure AI applications align with Taiwan's AI Governance principles. Based on the governance insights revealed by this research, we recommend three specific actions for enterprise leaders:
- Conduct an AI Risk Classification Audit Within 30 Days: Map all existing and planned AI deployments against ISO 42001's risk framework, specifically identifying any systems with autonomous decision-making capabilities or real-world physical impacts. For each such system, document the decision logic, failure modes, and human oversight mechanisms currently in place. Winners Consulting provides a standardized AI Risk Classification toolkit that enables enterprises to complete this initial audit within 30 days.
- Establish AI Transparency Documentation Standards for All Predictive-Decisional AI Systems: In direct response to EU AI Act Article 13 transparency requirements, enterprises must require AI vendors and internal development teams to provide algorithm decision logic documentation, training data disclosure, performance boundary specifications, and uncertainty quantification for any AI system that combines prediction with consequential decision-making. Winners Consulting offers AI procurement standards templates and vendor due diligence checklists tailored to Taiwan's regulatory environment.
- Build an ISO 42001-Compliant AI Management System Framework Within 90 Days: This includes establishing an AI governance policy, implementing a risk assessment procedure, defining incident response protocols for AI system failures, and scheduling regular management reviews. Winners Consulting's consulting team has guided multiple Taiwan enterprises through ISO 42001 gap analyses and implementation planning, delivering customized roadmaps that achieve audit-ready governance foundations within 90 days.
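The first action above, mapping AI deployments against a risk framework, can start from something as simple as a structured register. This is a minimal sketch with field names and tiers of our own invention, not wording from ISO 42001 itself; a real Annex A mapping is considerably richer.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI risk register."""
    name: str
    autonomous: bool          # can it execute decisions without human sign-off?
    physical_impact: bool     # do its outputs act on the physical world?
    oversight_mechanism: str  # documented human-in-the-loop control

def risk_tier(rec: AISystemRecord) -> str:
    """Crude triage: autonomous systems with physical impact get the
    enhanced controls flagged for such systems; one attribute alone
    still warrants standard controls."""
    if rec.autonomous and rec.physical_impact:
        return "enhanced-controls"
    if rec.autonomous or rec.physical_impact:
        return "standard-controls"
    return "baseline"
```

A UAV task-offloading agent like the one in the paper would land in the top tier immediately, which is the point of the 30-day audit: surfacing those systems first.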
Winners Consulting Services Co. Ltd. offers a complimentary AI Governance Diagnostic to help Taiwan enterprises establish an ISO 42001-compliant management framework within 90 days.
Request Your Free Governance Diagnostic →
Frequently Asked Questions
- What AI governance assessments are required when an enterprise deploys autonomous drone or edge AI systems?
- Enterprises must immediately conduct an AI Impact Assessment—a joint requirement of ISO 42001 and the EU AI Act. The assessment must address four key dimensions: the scope of autonomous decision-making (can the system execute consequential decisions without human intervention?); data privacy compliance (especially for sensitive medical data processed by WBAN devices); risk level of system failure scenarios (including worst-case medical outcomes); and the adequacy of Human-in-the-Loop mechanisms. Under EU AI Act Article 6, AI systems used in medical device contexts are automatically classified as High-Risk AI, triggering a full conformity assessment requirement. Taiwan's AI Fundamental Act similarly requires high-risk AI applications to establish impact assessment and remedy mechanisms before deployment.
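The four assessment dimensions above lend themselves to a pre-deployment checklist. The dimension keys below are our own shorthand for the four dimensions, not terms from either framework; the function simply reports which dimensions still lack an affirmative answer.

```python
def impact_assessment_gaps(answers: dict) -> list:
    """Hypothetical pre-deployment gate over the four assessment dimensions:
    returns the dimensions not yet answered 'yes', i.e. open gaps."""
    dims = [
        "autonomy_scope_documented",   # scope of autonomous decision-making
        "data_privacy_compliant",      # sensitive medical / WBAN data handling
        "failure_risk_assessed",       # worst-case failure scenarios analyzed
        "human_in_loop_adequate",      # oversight mechanisms in place
    ]
    return [d for d in dims if answers.get(d) != "yes"]
```

An empty return list is the sketch's stand-in for "assessment complete"; any remaining entry blocks deployment until addressed.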
- What specific EU AI Act requirements apply to Taiwan companies exporting AI medical products to Europe?
- Taiwan companies exporting AI-enabled medical products to Europe face a five-step compliance journey under the EU AI Act, which entered into force in August 2024 with full High-Risk AI enforcement beginning August 2026. Step one: confirm High-Risk AI classification under Annex III. Step two: prepare technical documentation including algorithm descriptions, training data records, and testing reports. Step three: complete conformity assessment (third-party Notified Body review required for certain medical AI systems). Step four: apply for CE marking. Step five: establish post-market surveillance and incident reporting systems. Companies should initiate this process no later than Q4 2025 to meet the 2026 deadline. Non-compliance penalties reach 3% of global annual turnover.
- What are the practical benefits of ISO 42001 certification for Taiwan enterprises?
- ISO 42001, published in 2023 as the world's first international standard for AI Management Systems, delivers three concrete benefits for Taiwan enterprises. First, regulatory risk reduction: the framework directly maps to EU AI Act and Taiwan AI Fundamental Act requirements, helping enterprises avoid penalties that can reach 3% of global annual turnover under EU AI Act enforcement. Second, market access advantage: European and North American enterprise customers increasingly require ISO 42001 certification as a supplier qualification criterion, particularly for AI-enabled products in healthcare, finance, and critical infrastructure. Third, internal governance improvement: the standard establishes systematic AI risk management processes that improve decision quality, accountability, and organizational resilience. Winners Consulting provides end-to-end support from gap analysis through certification, ensuring efficient achievement of certification objectives.
- How long does it take to build an ISO 42001-compliant AI Management System, and what are the steps?
- Timeline depends on organizational size and existing management system maturity. Enterprises with existing ISO 27001 or ISO 9001 certifications typically require 3 to 6 months. Organizations starting from scratch should plan for 6 to 12 months. Winners Consulting's standard implementation follows four phases: Phase 1 (Days 1–30): current state assessment and gap analysis against ISO 42001 requirements;
Want to apply these insights to your enterprise?
Get a Free Assessment