A timeline that has become unstable

The EU AI Act entered into force on 1 August 2024, and its application is staggered over several years. Prohibited practices (Article 5) and the AI literacy obligation (Article 4) have applied since 2 February 2025. The obligations on providers of general-purpose AI (GPAI) models have applied since 2 August 2025. The next major milestone, covering the "high-risk" systems listed in Annex III, was set for 2 August 2026.

That deadline is no longer entirely settled. On 19 November 2025, the European Commission published the "Digital Omnibus on AI", a legislative package proposing to defer the application of the high-risk obligations to 2 December 2027 for stand-alone systems and to 2 August 2028 for systems embedded in products. The official rationale is the absence of harmonised standards and operational guidance, which is preventing operators from achieving compliance on schedule. The second trilogue between the Parliament, the Council and the Commission took place on 28 April 2026 without an agreement, and a further round was scheduled for 13 May 2026. If the Omnibus is not formally adopted before 2 August 2026, the original timeline applies as written.

For a private credit fund deploying or evaluating legal AI tools, this uncertainty has only one pragmatic reading: prepare as if the August 2026 deadline holds, and treat any postponement as a reprieve, not a dispensation.

The logic of the regulation: risk, not technology

The AI Act does not regulate artificial intelligence in the abstract. It classifies systems by their level of risk to fundamental rights and safety, and applies obligations proportionate to that risk. Four levels are defined: unacceptable risk (prohibited practices), high risk (Annex III and certain regulated products), limited risk (transparency obligations) and minimal risk (no specific obligations).

This logic matters because it shifts the question. A legal AI tool is not high-risk because it uses a powerful language model, or because it automates an analyst's task. It is high-risk if it falls within one of the use-case categories listed in Annex III, or if it is integrated as a safety component into a product covered by the Union harmonisation legislation listed in Annex I.

What makes a system "high-risk" in finance

Annex III lists eight high-risk use-case domains. In finance, the central case is unambiguous: AI systems intended "to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud" are classified as high-risk. The wording turns on natural persons: that single qualification draws most of the perimeter for private credit.

Private credit funds lend to legal entities — sponsor-backed companies, growth-stage businesses, acquisition holdings. The tools used to analyse those transactions — credit agreement extraction, legal due diligence, term sheet comparison, covenant monitoring, NDA review — do not assess the creditworthiness of natural persons. They process corporate contractual data. On that basis, they do not fall into the high-risk category by default.

This reading is supported by commentary from several specialist firms. Goodwin noted in summer 2024 that the high-risk classification in finance primarily targets retail credit-scoring systems and tools assessing eligibility for financial services, not the document analysis tools used in B2B contexts. Several borderline cases remain to watch, however: transactions backed by personal guarantees, structures involving natural persons as co-borrowers, or the rare loans made to sole traders. In those scenarios, a tool whose output directly feeds a creditworthiness decision on a natural person could shift into the Annex III perimeter.

The self-assessment obligation, and its trap

The regulation does not only require providers to comply with high-risk obligations where they apply. It also requires them to document their classification, whether the conclusion is high-risk or not. Article 6(3) introduces a useful derogation: a system that falls within the Annex III use cases may be deemed not high-risk if it does not pose a significant risk of harm to health, safety or fundamental rights — for instance because it performs a narrow procedural task, improves the result of a previously completed human activity, or carries out a preparatory task to an assessment without making the decision itself. The derogation must be supported by a documented assessment, retained and made available to supervisory authorities on request.

For a fund using a legal tech tool on transactions marginally connected to natural persons, the issue is less about proving the complete absence of any link to Annex III than about producing a reasoned assessment of the preparatory or non-decisional nature of the tool. Documentation is part of compliance itself, even when the conclusion is that no high-risk obligation applies.
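By way of illustration, that assessment can be kept as a structured record alongside the tool's documentation. The sketch below is hypothetical Python: the Act mandates the assessment and its retention, not any particular format, and the field names simply mirror the Article 6(3) conditions cited above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article63Assessment:
    """Record of a documented Article 6(3) derogation assessment.

    Hypothetical schema: the AI Act mandates the assessment and its
    retention, not this format.
    """
    system_name: str
    annex_iii_use_case: str                # nearest Annex III scenario, e.g. point 5(b)
    narrow_procedural_task: bool           # performs a narrow procedural task
    improves_prior_human_activity: bool    # improves a previously completed human activity
    preparatory_task_only: bool            # prepares an assessment without deciding it
    rationale: str                         # written reasoning kept for supervisory authorities
    assessed_on: date = field(default_factory=date.today)

    def derogation_applies(self) -> bool:
        # Meeting any one condition can support the derogation, provided
        # the written rationale documents how and why.
        return any([
            self.narrow_procedural_task,
            self.improves_prior_human_activity,
            self.preparatory_task_only,
        ])
```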

Obligations that apply regardless of the high-risk classification

The attention focused on 2 August 2026 often obscures the fact that a significant portion of the regulation already applies to all professional users of AI, regardless of high-risk status.

Since 2 February 2025, Article 4 has imposed an AI literacy obligation: providers and deployers must take measures to ensure a sufficient level of AI literacy among their staff, taking into account their roles, their prior training and the context of use. For a private credit fund deploying a data extraction or NDA analysis tool to its analysts and legal team, this implies a documented training programme covering capabilities, limitations, sources of error and the human validation framework.

Article 5 lists the prohibited practices, applicable to all since February 2025: subliminal manipulation, exploitation of vulnerabilities, social scoring, prediction of crime based solely on profiling, certain uses of biometric data. None of these correspond to a normal use case in private credit, but vendor due diligence must check this formally.

Since 2 August 2025, providers of GPAI models have had obligations of their own: technical documentation, a copyright compliance policy, a summary of the content used for training. These do not directly bind funds that consume those models through third-party tools, but they cascade into due diligence: a legal tech provider relying on a GPAI model must be able to demonstrate that its upstream supplier is itself compliant.

Transparency obligations for limited-risk systems complete the picture from the regulation's general application date of 2 August 2026, again regardless of high-risk status: labelling of AI-generated content, clear notification when a user is interacting with a chatbot, marking of deepfakes. These obligations may apply marginally to certain features of legal tech tools, for instance an embedded conversational assistant or assisted clause generation.

Extraterritorial reach, an often-overlooked point

The AI Act applies to providers and deployers established in the Union, but also to actors established outside the EU whose AI outputs are used within the Union. For a European fund using an American or British legal tech tool, this means that the foreign provider is legally bound to comply as soon as its tool is deployed in the European market. A growing body of legal commentary — including a note published in April 2026 by Holland & Knight — observes that non-European legal tech players still largely underestimate this exposure.

In practice, this reinforces a point already central to vendor due diligence: a European fund must obtain from each provider a compliance attestation, documentation of risk assessments, and a contractual clause covering any non-compliance.

The AI Act does not ask private credit funds to stop using legal AI. It asks them to know what they use, why, with which safeguards, and to be able to demonstrate it.

A four-step roadmap

For a fund preparing for the August 2026 deadline — or December 2027 if the Omnibus is adopted — the structure is clear and replicable.

The first step is the inventory of AI systems in use. This includes dedicated tools (credit agreement extraction, term sheet comparison, covenant monitoring, NDA review, legal due diligence) but also the AI features embedded in broader tools (Microsoft Copilot, AI features in productivity suites, assistants embedded in ERPs). Tools such as MyClauze, Ontra, Kira or Definely belong in this inventory alongside the AI components of portfolio management platforms.
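A minimal sketch of what one inventory entry might capture, assuming a simple internal register; the schema, field names and example values below are illustrative, not drawn from the Act or from any vendor:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry of the AI system inventory (illustrative schema)."""
    name: str                        # tool or embedded feature
    vendor: str
    function: str                    # e.g. covenant monitoring, NDA review
    embedded_in: str | None          # parent product if an embedded AI feature
    underlying_gpai: str | None      # upstream model, where the vendor discloses it
    touches_natural_persons: bool    # flags borderline Annex III exposure early

# A generic example entry; names are placeholders, not vendor facts.
inventory = [
    AISystemRecord(
        name="contract-extraction-tool",
        vendor="ExampleVendor",
        function="credit agreement extraction",
        embedded_in=None,
        underlying_gpai="undisclosed",
        touches_natural_persons=False,
    ),
]
```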

The second step is the documented classification of each system against Annex III. The vast majority of tools used in B2B private credit will end up classified as non-high-risk, but that conclusion must be supported in writing, drawing on the Article 6(3) derogation when the use case sits close to an Annex III scenario.
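The decision flow behind that written classification can be summarised in a few lines. This is a deliberate simplification of Annex III point 5(b) combined with the Article 6(3) derogation; the function and its wording are illustrative only:

```python
def classify(touches_natural_persons: bool, derogation_supported: bool) -> str:
    """Simplified decision flow for the written classification of one tool.

    Inputs and return strings are illustrative; the real test is Annex III
    point 5(b) read together with the Article 6(3) derogation.
    """
    if not touches_natural_persons:
        # Pure B2B corporate document analysis sits outside Annex III 5(b)
        # by default, but the conclusion must still be reasoned in writing.
        return "not high-risk: document the Annex III reasoning"
    if derogation_supported:
        # Close to an Annex III scenario, yet preparatory or non-decisional.
        return "not high-risk: retain the Article 6(3) written assessment"
    return "high-risk: full high-risk obligations apply"
```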

The third step is updating vendor contracts. Compliance attestations, commitments to notify material changes to the system, clauses on training data and on the subcontracting of GPAI models must appear explicitly in contracts. For existing tools, this means a renegotiation cycle or the signing of amendments.
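One way to track that renegotiation cycle is a simple gap check against the clauses named above. The identifiers and helper below are a hypothetical sketch, not a legal template:

```python
# Hypothetical shorthand for the contractual items listed above;
# these identifiers are illustrative, not contractual language.
REQUIRED_CLAUSES = {
    "ai_act_compliance_attestation",
    "material_change_notification",
    "training_data_disclosure",
    "gpai_subcontracting_disclosure",
    "non_compliance_remedies",
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Clauses still absent from an existing contract, i.e. the agenda
    for the renegotiation cycle or amendment."""
    return REQUIRED_CLAUSES - contract_clauses

# Example: an older contract covering only attestation and remedies.
print(missing_clauses({"ai_act_compliance_attestation", "non_compliance_remedies"}))
```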

The fourth step is training and governance. AI literacy is not a formality: a documented user training programme, an internal incident-reporting procedure, an identified AI lead, and documentation of validated use cases form the minimum expected baseline.

A framework, not a brake

Read as a whole, the AI Act does not introduce a disproportionate barrier to the use of legal tech tools by private credit funds. It introduces a documentary discipline that formalises good practices that serious players already follow: knowing the tools in use, knowing where the line falls between assisted task and automated decision, maintaining a human validation loop, contractualising provider commitments.

The risk for a fund is not so much being caught short on a high-risk obligation that probably does not apply to its tools. The risk is reaching August 2026 — or December 2027 — with no inventory, no documented classification, no training rolled out. That risk is all the more acute given that the regulation's penalties reach up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited practices, and up to €15 million or 3% for other infringements.

The timeline is long enough for these steps to be taken without rushing. Even at the current pace of the trilogues, it is short enough that a fund which has not started is already behind.

Legal AI built for B2B private debt

MyClauze helps private credit funds automate document review within an AI-Act-compatible framework.