Questions & Answers
What is unacceptable risk?
Unacceptable risk, a concept from general risk management (e.g., ISO 31000), is given a specific, legally binding definition in the EU Artificial Intelligence Act. It refers to AI applications that threaten human safety, fundamental rights, and societal values to an extent that no mitigation measures can render them tolerable. Under Article 5 of the Act, AI systems presenting such risks are prohibited outright. Examples include subliminal manipulation, exploitation of vulnerable groups, government-led social scoring, and real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions). In the Act's risk hierarchy, this is the highest tier, distinct from 'high-risk' AI systems: while high-risk systems are permissible after meeting strict compliance requirements, unacceptable-risk systems are an absolute red line, and their development and deployment are banned.
How is unacceptable risk applied in enterprise risk management?
Enterprises must adopt a zero-tolerance policy toward unacceptable-risk AI applications and integrate it into their AI governance framework. Key steps include:

1. AI inventory and screening: Create a comprehensive inventory of all AI systems and screen them against the prohibited practices in Article 5 of the EU AI Act. For instance, assess whether marketing tools employ subliminal techniques.
2. Confirmation and cessation: When a potential unacceptable risk is identified, have it analyzed by a committee of legal, technical, and ethics experts. If confirmed, initiate a formal termination process immediately to halt the system's development, deployment, and sale, documenting every decision.
3. Preventive governance: Implement standards such as ISO/IEC 42001 (AI management systems) to embed the prohibited-practices list into mandatory ethical reviews during product development, and provide regular training for R&D teams.

Followed rigorously, this process helps an enterprise avoid fines of up to €35 million or 7% of global annual turnover (whichever is higher) and supports successful AI ethics audits.
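The inventory-screening step above can be sketched in code as a simple check of each system's declared use categories against a list of prohibited practices. This is a minimal illustration, not a legal checklist: the inventory format, field names, and category labels are assumptions that paraphrase Article 5 rather than quote it, and any real screening still requires expert legal review.

```python
# Illustrative category labels paraphrasing Article 5 of the EU AI Act.
# These names are assumptions for this sketch, not the Act's wording.
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "social_scoring",
    "realtime_remote_biometric_id_public",
}

def screen_inventory(inventory):
    """Flag inventory entries whose declared use categories overlap the
    prohibited list, for escalation to the expert review committee."""
    flagged = []
    for system in inventory:
        hits = PROHIBITED_CATEGORIES & set(system.get("use_categories", []))
        if hits:
            flagged.append({"name": system["name"], "matched": sorted(hits)})
    return flagged

# Hypothetical inventory entries in an assumed in-house format.
inventory = [
    {"name": "ad-optimizer", "use_categories": ["personalization"]},
    {"name": "engagement-booster", "use_categories": ["subliminal_manipulation"]},
]
```

A flagged entry (here, "engagement-booster") would then enter the confirmation-and-cessation step rather than being terminated automatically, since only the expert committee can confirm that a system actually falls under a prohibition.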
What challenges do Taiwan enterprises face when addressing unacceptable risk?
Taiwanese enterprises face three main challenges in addressing unacceptable AI risk. First, limited awareness of regulatory applicability: many underestimate the extraterritorial reach ('long-arm jurisdiction') of the EU AI Act, which applies whenever their products or services enter the EU market. Second, a shortage of internal expertise to interpret ambiguous legal terms, such as the technical definition of 'subliminal manipulation.' Third, the compliance burden of legacy technical assets: existing AI models may have been trained on data now considered prohibited (e.g., facial images scraped from the internet), making audits and replacement costly. Recommended solutions:

1. Priority action: Immediately conduct an 'EU AI Act Applicability and Impact Assessment' to clarify legal exposure (timeline: 1 month).
2. Governance setup: Establish a cross-functional AI Ethics and Governance Committee and adopt the NIST AI Risk Management Framework (NIST AI 100-1) for structured assessments and training (timeline: 3 months).
3. Technical audit: Launch an 'AI Asset Inventory and Compliance Audit Program,' prioritizing high-impact systems and developing a clear data governance and model retirement plan (timeline: 6-12 months).
Why choose Winners Consulting for unacceptable-risk compliance?
Winners Consulting specializes in unacceptable-risk compliance for Taiwan enterprises, delivering compliant management systems within 90 days. Free consultation: https://winners.com.tw/contact
Related Services
Need help with compliance implementation?
Request Free Assessment