
Insight: Artificial Intelligence Risk Management Framework (AI RMF 1.0)

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in Enterprise Risk Management (ERM), urges corporate executives to pay immediate attention to NIST's Artificial Intelligence Risk Management Framework (AI RMF 1.0). Published in 2023, cited 187 times, and already recognized as a global benchmark for AI governance, the framework fundamentally reshapes how organizations must integrate AI-specific risks into their ISO 31000 and COSO ERM structures. Taiwan enterprises that delay adoption risk falling behind in both regulatory compliance and board-level risk governance.

Paper Citation: Elham Tabassi, Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology, 2023
Original Paper: https://doi.org/10.6028/nist.ai.100-1

Read Original Paper →

About the Author and This Research

The AI RMF 1.0 was authored by Elham Tabassi, a senior researcher at the National Institute of Standards and Technology (NIST) in the United States. Tabassi has an academic h-index of 25 and over 2,974 cumulative citations, with deep expertise in AI trustworthiness, biometric evaluation, and machine learning performance assessment. Her standing in the AI governance and standardization community is firmly established across both academic and policy circles.

Critically, the AI RMF 1.0 is not a single academic paper but a policy-grade technical document mandated by the U.S. National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283). This legislation directed NIST to develop a voluntary, non-sector-specific, use-case-agnostic framework for managing AI risks — one built through extensive multi-stakeholder consultation processes spanning government, industry, academia, and civil society. Since its publication in 2023, the framework has accumulated 187 citations with 10 classified as high-impact, rapidly establishing it as the authoritative global reference for AI risk governance and a natural complement to ISO 31000 and COSO ERM frameworks.

Why AI Risks Cannot Be Managed by Intuition Alone: NIST's Systemic Answer

The fundamental insight of the AI RMF 1.0 is deceptively simple but profoundly important: AI systems generate risks that are dynamic, context-dependent, and often invisible until harm has already occurred. Traditional static risk matrices and pre-launch reviews are insufficient. Organizations must instead build a continuous, four-function risk management cycle — Govern, Map, Measure, and Manage — embedded across the full AI lifecycle. This is not a theoretical recommendation; it is a structured operational architecture that directly parallels the risk management process described in ISO 31000 and the enterprise-wide risk oversight philosophy of COSO ERM.
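As a rough illustration only (the AI RMF defines management functions, not code), the contrast between a one-off pre-launch review and a continuous lifecycle cycle can be sketched in Python. The stage and function names follow the framework; everything else here is hypothetical:

```python
# Illustrative sketch: Govern is a standing, cross-cutting function, while
# Map -> Measure -> Manage repeat at every lifecycle stage.
LIFECYCLE_STAGES = ["design", "development", "deployment", "operation"]
CYCLE_FUNCTIONS = ["Map", "Measure", "Manage"]

def risk_activities(stages=LIFECYCLE_STAGES):
    """Yield (stage, function) pairs: every lifecycle stage repeats the
    full cycle, in contrast to a single pre-launch audit."""
    for stage in stages:
        for fn in CYCLE_FUNCTIONS:
            yield stage, fn
```

The point of the sketch is simply that risk activities recur at every stage, so the number of touchpoints is stages × functions, not one.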

Core Finding 1: AI Risk Management Must Span the Full Lifecycle, Not Just Pre-Launch Audits

The AI RMF 1.0 explicitly states that risks associated with AI systems persist across every stage — design, development, deployment, and active use. The framework's "Map" function requires organizations to continuously identify and characterize AI risks relative to their specific context, while the "Measure" function demands the establishment of quantitative and qualitative metrics (analogous to KRIs — Key Risk Indicators) for ongoing monitoring. This is a direct operational extension of ISO 31000's principle of continuous monitoring and review, and aligns precisely with the "Risk Response" and "Review and Revision" components of COSO ERM. For Taiwan enterprises, this means that AI risk oversight cannot be delegated solely to the IT department — it must be embedded in the enterprise-wide ERM cycle.
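To make the "Measure" function concrete, here is a minimal KRI-monitoring sketch. The indicator names and thresholds are illustrative assumptions, not values mandated by NIST, ISO 31000, or COSO:

```python
from dataclasses import dataclass

@dataclass
class KRI:
    """A hypothetical Key Risk Indicator attached to a deployed AI system."""
    name: str
    threshold: float
    higher_is_worse: bool = True  # drift going up is bad; accuracy going down is bad

    def breached(self, observed: float) -> bool:
        if self.higher_is_worse:
            return observed > self.threshold
        return observed < self.threshold

# Example indicators (illustrative names and thresholds)
kris = [
    KRI("prediction_drift_score", threshold=0.15),
    KRI("fairness_disparity_ratio", threshold=1.25),
    KRI("model_accuracy", threshold=0.90, higher_is_worse=False),
]

def breached_kris(observations: dict) -> list:
    """Return the names of KRIs whose latest observed value breaches its threshold."""
    return [k.name for k in kris
            if k.name in observations and k.breached(observations[k.name])]
```

Running this check on every monitoring cycle, rather than at launch only, is the operational meaning of "continuous monitoring and review" in this context.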

Core Finding 2: Trustworthy AI Requires Multi-Stakeholder Risk Assessment

One of the framework's most actionable contributions is its articulation of the seven characteristics of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. The framework insists that AI risk assessments must incorporate the perspectives of external stakeholders — including customers, suppliers, and regulators — not merely internal technical teams. This directly mirrors the context-establishment requirement in ISO 31000 and reinforces the stakeholder engagement obligations embedded in COSO ERM's governance layer. For Taiwan's publicly listed companies, this has immediate implications for board-level risk reporting practices.

Core Finding 3: The Framework Assigns Risk Responsibility Across the AI Supply Chain

The AI RMF 1.0 distinguishes clearly between AI "developers," "deployers," and "users" — and assigns risk management responsibilities to each category. This means that even enterprises that do not build AI systems themselves bear accountability for the risks introduced by third-party AI tools they adopt. This is a landmark clarification that Taiwan enterprises relying on AI-powered ERP, CRM, or supply chain platforms must urgently internalize within their ERM and procurement risk frameworks.

Strategic Implications for Taiwan Enterprise Risk Management (ERM) Practice

Taiwan enterprises must resist the temptation to classify AI RMF 1.0 as "an American standard, not relevant to us." The implications are immediate, structural, and cross-sectoral. Here is what ERM practitioners and board members in Taiwan need to prioritize now.

Implication 1: Expand Your Risk Matrix to Include AI-Specific Risk Categories. Most Taiwan enterprises' COSO ERM risk matrices currently cover financial, compliance, operational, and reputational risks. The AI RMF 1.0 requires the explicit addition of new AI-specific risk categories: algorithmic bias risk, AI explainability failure risk, AI training data contamination risk, and AI supply chain dependency risk. Each of these categories requires dedicated KRIs designed for continuous monitoring — not just periodic audits.
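The expanded matrix can be sketched as follows. The category names mirror this article; the 5×5 likelihood-impact scale is a generic ERM convention, not something AI RMF 1.0 prescribes:

```python
# Illustrative only: AI risks are scored on the SAME scale as existing
# enterprise risks, so they land in the one enterprise matrix rather
# than a parallel IT-only register.
TRADITIONAL_CATEGORIES = {"financial", "compliance", "operational", "reputational"}

AI_CATEGORIES = {
    "algorithmic_bias",
    "ai_explainability_failure",
    "ai_training_data_contamination",
    "ai_supply_chain_dependency",
}

def expanded_matrix(existing: set) -> set:
    """Add the AI-specific categories alongside an existing COSO-style matrix."""
    return existing | AI_CATEGORIES

def risk_score(likelihood: int, impact: int) -> int:
    """Classic 5x5 matrix score, shared by traditional and AI risk categories."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact
```

Keeping a single shared scoring scale is the design choice that makes board-level comparison of AI and non-AI risks possible.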

Implication 2: Board-Level AI Risk Governance Is No Longer Optional. The AI RMF 1.0 places "Govern" as its first and foundational core function, explicitly emphasizing that senior leadership must be accountable for AI risk outcomes. This directly parallels the ISO 31000 mandate for visible leadership commitment to risk management and the COSO ERM emphasis on board-level risk oversight. Taiwan's listed companies should formally incorporate AI risk into their Risk Appetite Statements and board reporting cycles before the end of 2025.

Implication 3: AI Supply Chain Due Diligence Is a New Compliance Obligation. For Taiwan's manufacturing and service sectors — which have rapidly adopted AI-powered tools across procurement, logistics, customer service, and financial operations — the framework's supply chain risk provisions are particularly relevant. Organizations must establish third-party AI risk assessment protocols and embed AI-specific due diligence into supplier onboarding and contract management processes within their COSO ERM and ISO 31000 aligned risk governance structures.

How Winners Consulting Services Translates AI RMF 1.0 Into Actionable ERM Practice for Taiwan Enterprises

Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) specializes in helping Taiwan enterprises implement ISO 31000 and COSO ERM frameworks, design risk matrices and KRI systems, and strengthen board-level risk governance capabilities. In response to the structural demands of AI RMF 1.0, we recommend the following three concrete action steps:

  1. Conduct an AI Application Inventory and Risk Matrix Update: Using the AI RMF 1.0 "Map" function as a guide, systematically inventory all AI systems in use across the enterprise — including embedded AI features in third-party SaaS platforms. Update your existing COSO ERM risk matrix to include AI-specific risk categories and design corresponding KRIs for each, ensuring full integration with your ISO 31000 risk identification and assessment process.
  2. Design a Board-Level AI Risk Governance Agenda: Develop a formal AI Risk Appetite Statement and integrate AI risk reporting into your board's regular ERM review cycle. This operationalizes the AI RMF 1.0's "Govern" core function and fulfills both COSO ERM and ISO 31000 leadership accountability requirements. Winners Consulting Services can facilitate board workshops and provide template governance documentation tailored to Taiwan's regulatory environment.
  3. Establish an AI Supply Chain Due Diligence Process: Develop a standardized AI risk assessment questionnaire for existing and prospective AI technology vendors. Embed AI-specific risk review checkpoints into your procurement approval and supplier management processes. This closes a critical gap in most Taiwan enterprises' current ERM coverage and directly addresses the AI RMF 1.0's treatment of deployer and user risk responsibilities.
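The inventory-first approach in step 1 above can be sketched as a simple register. All field names and the prioritization rule are illustrative assumptions for a first-phase review, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AiApplication:
    """One row of a hypothetical AI application inventory (action step 1)."""
    name: str
    vendor: str           # "internal" for in-house systems
    rmf_role: str         # "developer", "deployer", or "user" per AI RMF 1.0
    decision_impact: str  # "high" / "medium" / "low"
    usage_frequency: str  # "high" / "medium" / "low"

def first_phase(inventory):
    """Select high-frequency, high-decision-impact applications for the
    initial review phase; the rest follow in later cycles."""
    return [a.name for a in inventory
            if a.decision_impact == "high" and a.usage_frequency == "high"]
```

Even a register this simple forces the key question per system: which AI RMF role (developer, deployer, or user) does the enterprise occupy, and therefore which risk responsibilities does it carry?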

Winners Consulting Services Co. Ltd. offers a complimentary ERM Mechanism Diagnostic, helping Taiwan enterprises build an ISO 31000-aligned risk management system — with AI RMF 1.0 integration — within 90 days.

Request Your Free ERM Diagnostic →

Frequently Asked Questions

Where should a Taiwan enterprise practically begin when implementing the AI RMF 1.0?
The most practical starting point is an AI application inventory: list every AI system currently in use across the enterprise, including AI features embedded in third-party SaaS tools, ERP systems, and analytics platforms. Once this inventory is complete, apply the AI RMF 1.0 "Map" function to assess each application's risk type, potential impact scope, and failure scenarios. Integrate the identified risks into your existing ISO 31000 risk assessment process and update your risk matrix accordingly, adding KRIs for ongoing monitoring. Winners Consulting Services recommends prioritizing high-frequency, high-decision-impact AI applications in the initial phase — a preliminary inventory can typically be completed within 30 days.
Do Taiwan enterprises that only use third-party AI tools (not build their own) have compliance obligations under AI RMF 1.0?
Yes — and the scope of that responsibility is broader than most executives realize. The AI RMF 1.0 explicitly categorizes "deployers" and "users" as risk-responsible parties, regardless of whether they developed the AI system themselves. Any enterprise that adopts, configures, or operates an AI-powered tool is accountable for managing the risks that tool introduces — including algorithmic bias, data privacy violations, and opaque decision-making. Taiwan enterprises should embed AI-specific risk clauses into technology vendor contracts, establish third-party AI risk assessment procedures, and review these obligations within the supply chain risk management component of their COSO ERM framework.
How does AI RMF 1.0 relate to ISO 31000? Do enterprises need to implement both?
The two frameworks are complementary, not competing. ISO 31000 provides the universal principles and process architecture for enterprise risk management (ERM) — covering the full cycle of risk identification, assessment, treatment, and monitoring across all risk types. AI RMF 1.0 is a specialized framework focused specifically on AI system risks across the AI lifecycle. The recommended integration approach is to use ISO 31000 as the primary ERM architecture and embed the AI RMF 1.0's four core functions (Govern, Map, Measure, Manage) as a dedicated AI risk sub-process within that structure, with AI risk appetite and reporting reflected in COSO ERM's board governance layer. This "ISO 31000 master framework + AI RMF 1.0 specialist module" dual-layer architecture is the approach most endorsed by international risk governance practitioners.
How long does it take to build an AI risk management mechanism from scratch, and what are the steps?
Based on Winners Consulting Services' advisory experience, a mid-sized Taiwan enterprise with up to 500 employees can build a compliant AI risk management mechanism — integrating both ISO 31000 and AI RMF 1.0 requirements — within 90 to 120 days, across four phases: Phase 1 (Days 1–30): Current state diagnostic and AI application inventory; Phase 2 (Days 31–60): Risk matrix update, KRI design, and governance framework establishment; Phase 3 (Days 61–90): Staff training, process integration, and monitoring dashboard implementation; Phase 4 (Days 91–120): First full risk review cycle, board reporting integration, and mechanism optimization. Larger enterprises or multinational groups with complex AI deployments may require up to 180 days.
Why engage Winners Consulting Services Co. Ltd. for Enterprise Risk Management (ERM) advisory?
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) is one of Taiwan's few consulting firms with demonstrated practical expertise in both ISO 31000 and COSO ERM framework implementation, combined with the capability to integrate emerging AI governance requirements — including AI RMF 1.0 — into enterprise ERM systems. Our consultants bring cross-industry experience in risk matrix design and KRI development, continuously track evolving international risk management standards, and translate complex academic and regulatory frameworks into practical, board-ready action plans. Engaging Winners Consulting Services means gaining a partner that can take your organization from awareness of AI RMF 1.0 to an operational, board-ready AI risk governance mechanism.


Related Services & Further Reading

Want to apply these insights to your enterprise?

Get a Free Assessment