Winners Consulting Services Co. Ltd. (積穗科研股份有限公司), Taiwan's expert in AI Governance, highlights a critical finding for enterprise leaders. A landmark 2024 study, already cited 68 times in the academic community, reveals that none of the four dominant cybersecurity GRC frameworks—NIST CSF 2.0, COBIT 2019, ISO 27001:2022, and ISO 42001:2023—is fully equipped to manage the unique risks of commercializing Large Language Models (LLMs). Of the four, ISO 42001:2023 emerges as the most comprehensive AI governance standard, while COBIT 2019 aligns most closely with the EU AI Act. For Taiwanese enterprises navigating the intersection of EU AI Act compliance, ISO 42001 certification, and Taiwan's own AI Fundamental Act, this research offers both an urgent warning and a clear direction forward.
Paper Citation: From COBIT to ISO 42001: Evaluating cybersecurity frameworks for opportunities, risks, and regulatory compliance in commercializing large language models (Timothy R. McIntosh, Teo Sušnjak, Tong Liu, OpenAlex — AI Governance, 2024)
Original Paper: https://doi.org/10.1016/j.cose.2024.103964
About the Authors and This Research
This paper was co-authored by three researchers affiliated with Massey University in New Zealand, an institution recognized for its interdisciplinary work at the intersection of AI security, information systems governance, and machine learning. Timothy R. McIntosh, the lead author, holds an h-index of 18 with 966 cumulative citations, and has built a focused research agenda around LLM safety, trustworthiness, and regulatory compliance. Teo Sušnjak, the second author, is the most academically influential contributor to this paper, with an h-index of 27 and 2,880 cumulative citations, establishing him as a high-impact researcher in machine learning risk and AI systems evaluation. Tong Liu contributes expertise spanning AI governance and cybersecurity policy.
The paper was published in 2024 in a peer-reviewed journal indexed under AI Governance (OpenAlex) and has since accumulated 68 citations, including 4 high-impact citations—a notably rapid citation velocity for a governance-focused study, indicating that it has quickly become a foundational reference within the AI governance academic community. The research methodology combines qualitative content analysis with a dual-validation process involving both large language models and human domain experts—a rigorous and fittingly self-referential design, in which LLMs help evaluate the very frameworks meant to govern LLMs, that strengthens the credibility of its findings.
The Hidden Governance Gap No Enterprise Can Afford to Ignore
The central question driving this research is deceptively simple: Are today's dominant cybersecurity governance frameworks actually ready for the age of Large Language Models? The research team systematically evaluated four frameworks across four dimensions—integration readiness, opportunity facilitation, risk oversight, and regulatory compliance—using comparative gap analysis. The results are both clarifying and alarming for enterprise decision-makers.
Finding One: ISO 42001:2023 Leads as the Most Comprehensive AI Governance Framework
Among the four frameworks evaluated, ISO 42001:2023 stands alone as the only international standard specifically designed for Artificial Intelligence Management Systems (AIMS). The research found it to be the most comprehensive in facilitating LLM integration opportunities, addressing issues that other frameworks simply were not designed to handle: algorithmic transparency, bias and fairness risk, data governance for training pipelines, ongoing monitoring of AI model behavior, and accountability structures for AI-generated outputs. For Taiwanese enterprises seeking internationally recognized proof of AI governance maturity—whether for supply chain requirements, investor due diligence, or regulatory demonstration—ISO 42001 certification represents the most credible available benchmark. Notably, ISO 42001 aligns strongly with the principles embedded in Taiwan's AI Fundamental Act, which mandates risk-based AI management, transparency, and human oversight mechanisms.
Finding Two: COBIT 2019 Aligns Most Closely with EU AI Act, Yet Gaps Remain
In the regulatory compliance dimension, COBIT 2019 demonstrated the strongest alignment with the EU AI Act among the four frameworks evaluated. This alignment is logical: COBIT's emphasis on governance accountability, risk categorization, and audit traceability mirrors the EU AI Act's regulatory philosophy of proportionate risk management. However, the research is explicit that even COBIT 2019 falls short of fully addressing the multifaceted risks specific to LLM deployment—including hallucination outputs, prompt injection attacks, training data contamination, and model drift. For Taiwanese enterprises that export to EU markets or operate within EU-adjacent supply chains, relying on COBIT alone as an EU AI Act compliance strategy will leave material regulatory gaps. A layered approach combining COBIT's governance accountability structures with ISO 42001's AI-specific risk controls is the research-supported path forward.
Finding Three: Human-Expert-in-the-Loop Validation Is a Universal Requirement
Perhaps the most operationally significant finding of this research is its consistent, cross-framework recommendation: regardless of which framework an enterprise adopts, integrating human-expert-in-the-loop validation processes is essential for any credible AI governance architecture. This principle is not merely a technical safeguard—it reflects a governance philosophy that AI systems, particularly LLMs operating in high-stakes contexts, must not automate consequential decisions without structured human accountability checkpoints. This finding directly mirrors requirements articulated in both the EU AI Act's provisions on human oversight for high-risk AI systems and Taiwan's AI Fundamental Act's emphasis on human-centered AI development.
What This Research Means for Taiwanese Enterprise Leaders
The implications of this research for Taiwan-based enterprises extend well beyond academic interest. Taiwanese companies are currently navigating a convergence of three distinct regulatory pressures, and this paper provides the analytical scaffolding needed to respond strategically rather than reactively.
The EU AI Act's Extraterritorial Reach: The EU AI Act, which entered into force in 2024, applies to any enterprise placing AI-enabled products or services on the EU market, regardless of where the company is headquartered. For Taiwan's export-oriented technology, manufacturing, and financial services sectors, this creates immediate compliance obligations. The Act's four-tier risk classification—unacceptable risk, high risk, limited risk, and minimal risk—requires enterprises to maintain documented AI risk registers, technical documentation, conformity assessments, and human oversight mechanisms for high-risk applications. The research finding that no current framework fully addresses LLM risks means that Taiwanese enterprises cannot simply adopt an existing framework and declare compliance; active gap analysis and framework enhancement are required.
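The four-tier classification described above lends itself to a simple data model that enterprises can adopt as the seed of a documented AI risk register. The following Python sketch is illustrative only: the application names are hypothetical, and the obligation lists are heavily simplified stand-ins for the Act's actual, far more detailed requirements per tier.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four-tier risk taxonomy."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no mandatory obligations

# Simplified obligation mapping, for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk register entry", "technical documentation",
                    "conformity assessment", "human oversight mechanism"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}


@dataclass
class AIApplication:
    """One entry in an enterprise AI risk register."""
    name: str
    tier: RiskTier
    owner: str  # accountable role, as the Act's documentation duties expect

    def obligations(self) -> list[str]:
        return OBLIGATIONS[self.tier]


# Hypothetical register entries
register = [
    AIApplication("customer-service LLM chatbot", RiskTier.LIMITED, "CX Lead"),
    AIApplication("automated credit scoring", RiskTier.HIGH, "Risk Officer"),
]
for app in register:
    print(f"{app.name}: {app.tier.value} -> {app.obligations()}")
```

Even a skeleton like this makes the paper's point concrete: classification alone is easy, but the high-risk tier immediately pulls in documentation, conformity assessment, and human oversight duties that no single existing framework fully operationalizes for LLMs.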
ISO 42001 as a Market Entry Requirement: Beyond regulatory compliance, ISO 42001 certification is rapidly becoming a de facto market access criterion in global supply chains. Government procurement processes in multiple jurisdictions and enterprise vendor qualification programs are beginning to require demonstrable AI governance maturity. For Taiwanese manufacturers and technology service providers integrated into global value chains, ISO 42001 certification is transitioning from a "nice to have" to a strategic necessity. The research's finding that ISO 42001 provides the most comprehensive foundation for LLM governance reinforces the urgency of pursuing certification proactively.
Taiwan's AI Fundamental Act and Domestic Compliance: Taiwan's AI Fundamental Act establishes foundational principles for AI development and application within Taiwan, emphasizing risk-based management, algorithmic transparency, human oversight, and accountability. The Act's principles are closely aligned with both ISO 42001's design philosophy and the EU AI Act's governance requirements, creating an opportunity for Taiwanese enterprises to pursue an integrated compliance architecture that satisfies all three regulatory frameworks simultaneously—rather than building separate compliance programs for each.
How Winners Consulting Services Helps Taiwanese Enterprises Build AI Governance That Works
Winners Consulting Services Co. Ltd. (積穗科研股份有限公司) partners with Taiwanese enterprises to design, implement, and certify AI governance systems that satisfy the convergent requirements of ISO 42001, the EU AI Act, and Taiwan's AI Fundamental Act. Our approach is grounded in the research-validated insight that no single framework is sufficient on its own, and that human-expert oversight must be embedded into the governance architecture from day one.
- AI Governance Framework Gap Analysis Against ISO 42001: We conduct a systematic assessment of your current AI governance posture, mapping existing controls against the specific requirements of ISO 42001:2023. For enterprises deploying LLMs in customer-facing applications, automated decision systems, or generative content pipelines, we perform a dedicated LLM risk assessment covering hallucination risk, data privacy compliance, prompt injection vulnerability, and human oversight adequacy. This diagnostic phase is designed to be completed within 30 days and produces a prioritized gap remediation roadmap aligned with both EU AI Act risk tiers and Taiwan's AI Fundamental Act requirements.
- AI Risk Classification Matrix Aligned with EU AI Act's Four-Tier Structure: We help you build an enterprise AI Risk Register that classifies all current and planned AI applications according to the EU AI Act's four-tier risk taxonomy—unacceptable, high, limited, and minimal risk. Each classification entry is documented with the applicable regulatory obligations, required controls, and responsible ownership. This register becomes the operational backbone of your ISO 42001 management system and the primary artifact for demonstrating compliance to both EU regulators and Taiwanese regulatory authorities under the AI Fundamental Act.
- Human-in-the-Loop Decision Gate Design and Implementation: For all high-risk AI applications identified in your risk register, we design and implement mandatory human review checkpoints within your operational workflows. These decision gates ensure that AI system outputs—particularly those from LLMs—are subject to structured human expert validation before influencing consequential business decisions. This is not only the most consistently recommended enhancement across all four frameworks evaluated in the research, but also a direct requirement under both the EU AI Act's human oversight provisions and the principles of Taiwan's AI Fundamental Act.
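A decision gate of the kind described above can be expressed as a thin wrapper around the AI output. The sketch below is a minimal illustration, not a production design: the function and callback names (`human_in_the_loop_gate`, `review_fn`) are hypothetical stand-ins for an enterprise's actual model call and expert-review workflow.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class GateDecision:
    """Auditable record of one pass through the decision gate."""
    output: str               # the AI-generated recommendation
    approved: bool            # whether the output may influence the decision
    reviewer: Optional[str]   # named accountable expert (None if auto-passed)


def human_in_the_loop_gate(
    ai_output: str,
    is_high_risk: bool,
    review_fn: Callable[[str], tuple[bool, str]],
) -> GateDecision:
    """Route high-risk AI outputs through a mandatory human checkpoint.

    Low-risk outputs pass through automatically; high-risk outputs are
    held until a named expert approves or rejects them, preserving the
    structured accountability trail the frameworks call for.
    """
    if not is_high_risk:
        return GateDecision(ai_output, approved=True, reviewer=None)
    approved, reviewer = review_fn(ai_output)
    return GateDecision(ai_output, approved=approved, reviewer=reviewer)


# Hypothetical usage: a stub reviewer standing in for a real review queue
decision = human_in_the_loop_gate(
    "Deny loan application #1234",
    is_high_risk=True,
    review_fn=lambda out: (False, "credit-risk-officer"),
)
print(decision.approved, decision.reviewer)
```

The design choice worth noting is that the gate returns a record, not just a verdict: capturing who approved what is precisely what turns a technical safeguard into the documented human oversight that both the EU AI Act and ISO 42001 expect.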
Winners Consulting Services Co. Ltd. offers a free AI Governance Mechanism Diagnostic, helping Taiwanese enterprises establish an ISO 42001-compliant AI management system within 90 days.
Apply for Free AI Governance Diagnostic →

Frequently Asked Questions
- Is ISO 27001 sufficient for managing LLM risks in our enterprise?
- No. The 2024 research by McIntosh et al. is explicit that ISO 27001:2022, while an excellent information security management framework, contains fundamental gaps in LLM risk oversight. ISO 27001 was designed around the CIA triad—confidentiality, integrity, and availability of information assets—and does not address AI-specific risks such as hallucination outputs, algorithmic bias, training data contamination, or the governance of autonomous decision-making. For enterprises deploying Large Language Models, ISO 42001:2023 is the appropriate primary framework, with ISO 27001 continuing to serve its intended purpose as the information security layer. Winners Consulting Services can help you design a dual-framework architecture where both standards operate in a complementary, non-duplicative manner.
Want to apply these insights to your enterprise?
Get a Free Assessment