
Can AI Tools Replace Legal Experts for EU AI Act Technical Documentation? ISO 42001 Compliance Insights


A new research analysis from Winners Consulting Services Co., Ltd. highlights a critical challenge: while the EU AI Act requires complete technical documentation for high-risk AI systems, small and medium-sized enterprises (SMEs) often lack combined legal and technical expertise. A 2025 experimental study by Sovrano and colleagues provides the first quantitative evidence that AI-assisted tools can achieve a statistically significant, moderate correlation with expert judgment when identifying initial documentation gaps. These tools cannot yet replace human experts, however, which presents both an opportunity and a warning for Taiwanese companies pursuing ISO 42001 certification.

Source Paper: Simplifying software compliance: AI technologies in drafting technical documentation for the AI Act (Francesco Sovrano, Emmie Hine, Stefano Anzolut, Empirical Software Engineering, 2025)
Original Link: https://doi.org/10.1007/s10664-025-10645-x

Read Original Paper →

About the Authors and This Study

This paper was co-authored by three researchers. Francesco Sovrano and Emmie Hine, both from the European academic community, focus on the intersection of AI ethics, legal compliance, and explainability; Hine also has a degree of visibility in AI ethics policy circles. Co-author Stefano Anzolut contributes a practical perspective from the legal tech field, specifically compliance tool development.

Notably, since its publication in 2025, this study has been cited 12 times, with one citation identified as high-impact, indicating sustained academic interest in whether AI tools can assist in drafting regulatory documents. The study itself uses an open-source high-risk AI system as a real-world case and brings in legal experts for a parallel evaluation, lending considerable practical credibility to its research design.

The core question of this research is highly pragmatic: under the EU AI Act's technical documentation requirements, can developers leverage ChatGPT 3.5, ChatGPT 4, and a specialized compliance tool, DoXpert, to identify documentation gaps and reduce compliance costs?

AI-Assisted Technical Documentation Compliance: Quantifying Partial Feasibility

The study's main conclusion is "partially feasible, but with significant limitations." Using rank-biserial correlation as the statistical measure, the research team compared the assessments generated by the tools against the opinions of legal experts. DoXpert's judgments showed a moderate, statistically significant correlation with expert opinions, whereas ChatGPT 3.5 and ChatGPT 4 exhibited several critical shortcomings.
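As an illustration of the effect size the study reports, rank-biserial correlation can be computed from pairwise comparisons between two score sets. This is a minimal sketch in plain Python, not the authors' code, and the variable names and toy scores below are invented for the example:

```python
def rank_biserial(group_a, group_b):
    """Rank-biserial correlation: effect size for two independent samples.

    r = (favorable pairs - unfavorable pairs) / total pairs, where a pair
    (a, b) counts as favorable if a > b; ties are split evenly between the
    two counts. r lies in [-1, 1]; |r| around 0.3-0.5 is conventionally
    read as a "moderate" effect.
    """
    favorable = unfavorable = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                favorable += 1
            elif a < b:
                unfavorable += 1
            else:  # tie: split between both counts
                favorable += 0.5
                unfavorable += 0.5
    total = len(group_a) * len(group_b)
    return (favorable - unfavorable) / total

# Toy example: tool-assigned completeness scores vs. expert scores
tool_scores = [3, 4, 4, 5]
expert_scores = [2, 3, 4, 4]
print(round(rank_biserial(tool_scores, expert_scores), 3))  # → 0.438
```

A value near the middle of the scale, as in this toy run, is what "moderate correlation" means in practice: the tool's rankings track the experts' rankings noticeably better than chance, but far from perfectly.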

Key Finding 1: The "Overconfidence" Risk of ChatGPT

The research clearly indicates that ChatGPT (both the 3.5 and 4 versions) tends to be overconfident when assessing the compliance of technical documentation, making definitive judgments without sufficient legal basis. This is particularly dangerous in the context of EU AI Act compliance: if a company relies on ChatGPT's output to deem its documentation complete when it is in fact only superficially compliant, it could face enormous legal risk during regulatory audits. The European Commission's draft guidance on AI regulation explicitly requires that technical documentation for high-risk AI systems be verifiable, not merely tool-generated.

Key Finding 2: Specialized Compliance Tools Offer Statistically Significant Reference Value

Compared to general-purpose large language models, DoXpert, a tool designed specifically around the articles of the EU AI Act, demonstrated a moderate, statistically significant correlation with legal expert opinions. Domain-specific tools can therefore provide a meaningful starting point for developers conducting an initial inventory and high-risk classification assessment. A "moderate correlation," however, also implies that review by people with dual legal and technical expertise remains essential before documents are submitted to regulatory authorities or put through conformity assessment procedures.

Key Finding 3: A Path Forward for SMEs' Compliance Cost Dilemma

The study emphasizes that the technical documentation requirements of the EU AI Act impose a disproportionate burden on SMEs. This is because hybrid talent with both legal and technical skills is extremely rare, and external legal counsel is expensive. The value of AI-assisted tools lies in their ability to quickly provide a "preliminary gap list," allowing companies to identify the scope of issues before formally engaging legal experts, thereby reducing the time and cost of legal consultation.

Strategic Implications for AI Governance Practices in Taiwan

Taiwanese companies are at the intersection of a triple regulatory pressure: the ISO 42001 international AI management system standard, the mandatory impact of the EU AI Act on exports to the EU market, and the domestic regulatory framework established by Taiwan's AI Basic Act, passed in 2024. The findings of this study directly influence the strategic choices Taiwanese companies make when establishing their technical documentation systems.

First, from a statutory interpretation perspective, Annex IV of the EU AI Act lists the required items for the technical documentation of high-risk AI systems, Taiwan's AI Basic Act emphasizes transparency and accountability, and ISO 42001 Clause 8.4 requires companies to establish a verifiable explanation mechanism. All three sets of requirements converge on the same core principle: documentation must genuinely reflect system risks, not merely exist as a formality.

Second, the study's warning is particularly relevant for Taiwanese companies. In the initial stages of AI governance, many firms tend to use general-purpose tools like ChatGPT to quickly generate policy documents, believing they have completed their compliance obligations. However, the research data shows that this approach has systemic judgment biases and may create an illusion of superficial compliance. The Institute for Information Industry's Science & Technology Law Institute (STLI) has also clearly stated that establishing substantive compliance mechanisms, rather than just accumulating documents, is the only effective way to avoid the high fines associated with the EU AI Act.

Third, for Taiwanese tech companies planning to enter or already operating in the EU market, the EU AI Act comes into effect in stages between 2025 and 2026, leaving a very limited compliance window for high-risk AI systems. Establishing a dual-layer document review mechanism of "AI tool-assisted screening + human expert review" is a pragmatic way to balance efficiency with compliance quality.

How Winners Consulting Services Helps Taiwanese Companies Establish a Dual-Layer Document Review Process

Winners Consulting Services Co., Ltd. assists Taiwanese companies in establishing AI management systems that comply with ISO 42001 and the EU AI Act, conducting AI risk classification assessments, and ensuring that artificial intelligence applications adhere to Taiwan's AI Basic Act. In response to the technical documentation compliance challenges revealed in this paper, we offer the following three-tiered action plan:

  1. Establish a Comprehensive AI System Inventory and Risk Classification Baseline (Months 1-2): Systematically inventory all AI applications within the enterprise according to Annex III of the EU AI Act and Clause 6.1.2 of ISO 42001, classifying them as prohibited, high-risk, or general-risk. This step determines the scope and depth of subsequent technical documentation and is a critical prerequisite to avoid misallocating compliance resources.
  2. Implement a "Tool-Assisted + Expert Review" Technical Documentation Workflow (Months 3-6): Based on the findings of this study, we do not recommend relying solely on general-purpose LLMs to generate technical documents. Instead, we suggest using compliance assistance tools designed for the EU AI Act for initial gap identification, followed by a manual review by consultants with dual legal and technical expertise to ensure the content substantively corresponds to the requirements of Annex IV of the EU AI Act and Clause 8.4 of ISO 42001.
  3. Establish a Regular Document Update and Internal Audit Mechanism (Months 7-12): AI systems iterate frequently, and static, one-time documents cannot meet regulatory expectations for continuous compliance. A version control and periodic review process should be established, combined with the continuous improvement requirements of ISO 42001, to ensure that technical documentation is updated in sync with system versions, creating a traceable compliance record.
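Step 1 above can be sketched as structured inventory records tagged with an EU AI Act risk tier. This is an illustrative data-model sketch only: the class names, example systems, and category strings are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH_RISK = "high-risk"     # use cases listed in Annex III
    GENERAL = "general-risk"    # everything else

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    annex_iii_category: Optional[str]  # e.g. "employment"; None if unlisted
    tier: RiskTier

# Hypothetical inventory entries for illustration
inventory = [
    AISystemRecord("resume-screener", "ranks job applicants",
                   "employment", RiskTier.HIGH_RISK),
    AISystemRecord("faq-chatbot", "answers product questions",
                   None, RiskTier.GENERAL),
]

# High-risk systems determine the scope of Annex IV technical documentation.
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH_RISK]
print(high_risk)  # → ['resume-screener']
```

Keeping the inventory as structured records rather than free-form prose makes the later steps mechanical: the documentation workload for months 3-6 is simply the set of records carrying the high-risk tier.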

Winners Consulting Services Co., Ltd. offers a free AI governance diagnostic to help Taiwanese companies establish an ISO 42001-compliant management system within 7 to 12 months.

Learn About Our AI Governance Services → Apply for a Free Diagnostic Now →

Frequently Asked Questions

Can AI tools like ChatGPT be used directly to draft technical documentation required by the EU AI Act?
It is not recommended to rely solely on general-purpose AI tools for drafting technical documentation. A 2025 experimental study (Sovrano et al.), using Rank Biserial Correlation analysis, revealed that ChatGPT 3.5 and 4 exhibit a systemic "overconfidence" issue, making overly definitive judgments without sufficient legal basis. This bias is particularly dangerous for the high-risk AI system documentation required by Annex IV of the EU AI Act. The best practice is to adopt a dual-layer mechanism: using specialized compliance tools for initial screening, followed by a thorough review by human legal and technical experts. This ensures verifiable compliance quality and avoids the pitfalls of superficial compliance.
What is the most common challenge for Taiwanese companies regarding technical documentation when implementing ISO 42001?
The most common challenge is having "documents that exist but fail to address substantive risks." Many companies prioritize fulfilling the formal requirements of policy documents during ISO 42001 implementation but neglect to substantively link the content to the high-risk classification criteria in Annex III of the EU AI Act, the verifiable explanation requirements of ISO 42001 Clause 8.4, and the transparency obligations of Taiwan's AI Basic Act. This results in a pile of documents rather than an accountable governance record. It is advisable to start with an inventory and high-risk classification assessment to identify which AI systems require EU-level documentation standards, then allocate resources accordingly to avoid misallocation.
What core documents are required for ISO 42001 certification, and what is the approximate timeline?
Core documents for ISO 42001 certification include an AI Policy Statement, AI Risk Assessment Report, AI System Inventory and Classification, AI Impact Assessment Records, AI Supply Chain Risk Management Procedures, and operational records for continuous monitoring and improvement mechanisms. For a medium-sized enterprise with an existing governance framework, the process from gap analysis to certification typically takes 7 to 12 months. A recommended three-phase approach is: complete the AI system inventory and risk classification in months 1-2; establish the document drafting and dual-layer review process in months 3-6; and conduct the certification audit and application in months 7-12. The timeline may vary based on the complexity of the company's AI systems and the maturity of its existing documentation.
Is implementing an EU AI Act compliance mechanism costly? Can AI-assisted tools effectively reduce costs?
Compliance costs can be significant, but they can be optimized through a structured approach. If drafting technical documentation for a high-risk AI system under the EU AI Act relies entirely on external legal counsel, the cost for a single system can range from thousands to tens of thousands of US dollars. According to the 2025 study by Sovrano et al., specialized AI tools designed for the EU AI Act can quickly identify documentation gaps in the initial phase, narrowing the scope of work for legal advisors and effectively reducing external consulting fees. However, the study notes these tools currently only have a "moderate correlation" and cannot fully replace expert judgment. The most cost-effective path is using AI tools for initial screening to lower preparation costs, then precisely engaging consultants to review high-risk gaps.
Why choose Winners Consulting Services for AI governance issues?
Winners Consulting Services Co., Ltd. offers three key advantages in AI governance. First, we are proficient in ISO 42001 international standards, EU AI Act cross-border compliance requirements, and Taiwan's domestic AI Basic Act, enabling us to help companies establish an integrated, multi-jurisdictional governance framework to avoid redundant costs. Second, our team of consultants possesses dual expertise in law and technology, providing integrated services in technical documentation, risk classification, and compliance mechanism design. Third, we offer comprehensive support, from a free initial diagnostic to full ISO 42001 certification guidance, helping businesses establish an internationally compliant AI management system within 7 to 12 months. This mitigates compliance risks and enhances trust capital in the EU market.


Related Services & Further Reading

Want to apply these insights to your enterprise?

Get a Free Assessment