EU AI Act: August 2026 deadline pushed to December 2027 - what CFOs still need to do right now
EU AI Act: the May 7, 2026 deal postpones high-risk obligations to December 2027. What CFOs still need to do right now to stay ahead.

On May 7, 2026, the European Parliament and the Council of the EU reached a provisional political agreement on the Digital Omnibus, postponing the application of "high-risk" obligations under the EU AI Act (Regulation 2024/1689) from August 2, 2026 to December 2, 2027 for standalone Annex III systems (credit scoring, HR, essential services), and to August 2, 2028 for systems embedded in regulated products (Annex I). Three key points for CFOs:
- The agreement must still be formally adopted before August 2, 2026; otherwise the original dates apply by default.
- Obligations already in force (prohibited practices, AI literacy, GPAI, deployer transparency under Article 50) remain unchanged.
- The high-risk classification itself is not modified.
Operational consequence: mapping, human oversight and the audit trail must still be built starting now, targeting December 2027 as the likely date and August 2026 as the fallback.
⚠️ Important legal warning - read this before going further
The May 7, 2026 deal is a provisional political agreement, not law yet. Until it is formally adopted by Parliament and Council and published in the EU Official Journal before August 2, 2026, the original dates of Regulation (EU) 2024/1689 remain legally binding. If adoption is delayed, high-risk obligations automatically apply from August 2, 2026, with penalties.
Furthermore, even if the Omnibus is formally adopted, several obligations remain applicable on August 2, 2026 and are not postponed: Article 50 (deployer transparency), penalties, governance, and the prohibited practices already in force since February 2025.
For CFOs: the August 2026 roadmap still has to be executed for the "not postponed" block; the December 2027 roadmap is added on top for the "high-risk postponed" block.
Updated timeline (post-Digital Omnibus, May 2026)
| Obligation | Applicable date | Legal status |
|---|---|---|
| 🟢 Prohibited practices (Art. 5), AI literacy (Art. 4) | February 2, 2025 | Already in force — not affected by the Omnibus |
| 🟢 GPAI model obligations (OpenAI, Anthropic, Mistral, Meta) | August 2, 2025 | Already in force — not affected by the Omnibus |
| 🟢 Deployer transparency (Art. 50), penalties, governance | August 2, 2026 | Maintained — not postponed by the Omnibus |
| 🟡 Watermarking of AI content (Art. 50(2)) | December 2, 2026 | Brought forward by the Omnibus (if adopted) |
| 🔴 Standalone high-risk systems (Annex III) | December 2, 2027 | Postponed from 2026 — subject to formal Omnibus adoption before August 2, 2026 |
| 🔴 High-risk systems in regulated products (Annex I) | August 2, 2028 | Postponed from 2027 — subject to formal Omnibus adoption |
Legend: 🟢 Confirmed and not postponed · 🟡 Modified by the Omnibus · 🔴 Postponed, subject to formal adoption
Source: Council of the EU, European Parliament, EU Digital Strategy (provisional political agreement, May 7, 2026).
What is the AI Act Digital Omnibus?
The Digital Omnibus on AI is a legislative package proposed by the European Commission on November 19, 2025 to amend the application timeline of EU Regulation 2024/1689 (AI Act). On May 7, 2026, a provisional political agreement was reached between Parliament and Council. It postpones high-risk obligations but does not modify the substance of the regulation: the four-tier classification (prohibited / high / limited / minimal), penalties and overall architecture remain unchanged.
The postponement was decided in response to a pragmatic observation: the CEN-CENELEC harmonized standards needed for compliance are not ready, and national competent authorities have not all been designated.
Warning: the May 7 deal is not yet law
The May 7, 2026 agreement is a provisional political agreement. For it to apply:
- It must be formally adopted by Parliament and Council.
- It must be published in the EU Official Journal.
- Adoption must occur before August 2, 2026, otherwise the original dates apply automatically.
Practical takeaway for CFOs: do not let off the gas. The safest strategy is to prepare as if August 2026 were real, and to plan for December 2027 as the likely date.
Why CFOs remain on the front line despite the postponement
The finance function is one of the heaviest internal AI consumers: B2B and B2C credit scoring, fraud detection, automated payment decisions, collection chatbots, expense classification. Several of these uses fall under the "high-risk" Annex III category.
Three reasons not to ease up:
- Transparency obligations (Article 50) still apply from August 2, 2026. This covers informing customers of automated decisions, which is directly relevant to AI dunning, credit refusals and delivery suspensions.
- Watermarking arrives earlier than expected: December 2, 2026 for AI-generated content (instead of February 2027 in the initial proposal).
- Nineteen months go fast. From May 2026 to December 2027, mapping + governance + audit trail alone takes 6 to 9 months for a mid-market company. Waiting until 2027 means scrambling to comply, with penalties for any slippage.
Which finance AI uses fall under "high risk"?
Annex III classification is unchanged by the Digital Omnibus. Three frequent cases in finance:
- Creditworthiness assessment / credit scoring of natural persons. Covers B2C fintech and B2B scoring coupled with a personal guarantee.
- Management of and access to essential services: automated systems deciding payment plans, delivery suspension or formal notices may fall here per ACPR interpretation.
- HR decision support: CV screening, performance evaluation, bonus calculation. Concerns CFOs piloting AI payroll / bonus tools.
Note:
- Pure fraud detection (AML, KYB) is not automatically high-risk, but it is if it triggers customer exclusion without human recourse.
- Pure B2B credit scoring (entity-only) is not high-risk, but remains subject to Article 50 transparency.
The four obligations to start building now
Obligation 1 - Map AI uses
The first step is documentary: produce a register of deployed AI systems. For each use, specify the purpose, vendor, data processed, risk category (prohibited, high, limited, minimal) and the company's role (provider or deployer).
A mid-market CFO typically identifies 8 to 15 finance AI systems: credit scoring, dunning agents, invoice OCR, expense classification, payroll HR chatbot, Excel copilot, email copilot, AP fraud detection, ChatGPT prompts for reporting.
This step does not depend on harmonized standards: it can start now.
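As a starting point, the register above can be a simple structured list. The entry fields and example systems below are illustrative assumptions, not a format mandated by the AI Act:

```python
from dataclasses import dataclass, asdict

# Hypothetical register entry; field names are illustrative, not regulatory.
@dataclass
class AISystemEntry:
    name: str
    purpose: str
    vendor: str
    data_processed: str
    risk_category: str   # "prohibited" | "high" | "limited" | "minimal"
    company_role: str    # "provider" | "deployer"

register = [
    AISystemEntry("credit-scoring", "B2C creditworthiness", "AcmeScore",
                  "payment history, income", "high", "deployer"),
    AISystemEntry("invoice-ocr", "AP document capture", "OcrVendor",
                  "supplier invoices", "minimal", "deployer"),
]

# The high-risk subset is what drives the December 2027 workload.
high_risk = [asdict(e) for e in register if e.risk_category == "high"]
print(len(high_risk))  # 1
```

Even a spreadsheet works at this stage; the point is that every system has the same five attributes filled in, reconciled with the GDPR inventory.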
Obligation 2 - Organize human oversight
Article 14 of the AI Act requires effective human oversight of high-risk systems. Concretely: for any decision with material impact (credit refusal, formal notice, delivery suspension), a qualified human must be able to understand the decision, intervene and overturn it.
Two models coexist:
- Human in the loop: systematic human validation before action.
- Human on the loop: sample-based post-validation.
ACPR recommends at minimum an "on-the-loop" model with a minimum review rate, traceability of overturned decisions and continuous training.
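A minimal sketch of what a sample-based "on the loop" policy could look like in code. The 10% review rate, function names and decision fields are illustrative assumptions, not ACPR figures:

```python
import random

# Illustrative review rate; the actual minimum should come from your own policy.
MIN_REVIEW_RATE = 0.10

def select_for_review(decisions, rate=MIN_REVIEW_RATE, seed=42):
    """Draw a reproducible sample of automated decisions for human review."""
    rng = random.Random(seed)  # fixed seed so the audit sample can be replayed
    k = max(1, round(len(decisions) * rate))
    return rng.sample(decisions, k)

decisions = [{"id": i, "action": "dunning_email"} for i in range(100)]
review_queue = select_for_review(decisions)
print(len(review_queue))  # 10

# Traceability of human overrides, kept as part of the audit trail.
overturned_log = []
def record_override(decision, reviewer, reason):
    overturned_log.append({**decision, "reviewer": reviewer, "reason": reason})
```

The design choice to highlight: the sample must be reproducible (seeded) and the overrides logged, otherwise you cannot demonstrate to a supervisor that the review rate was actually met.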
Obligation 3 - Document the technical audit trail
The AI Act requires technical documentation (Annex IV) covering: system architecture, training data (provenance, detected biases, representativeness), measured performance, robustness metrics, risk management plan, log files.
For a "deployer" CFO (using a system without building it), documentation is the provider's responsibility, but the deployer must keep its own usage logs and AI Act impact analysis (extended DPIA).
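On the deployer side, the usage log can start as structured JSON entries. The field names below are assumptions shaped on the items this section lists, not an official Annex IV template:

```python
import json
import datetime

def log_ai_decision(system, decision, human_reviewed):
    """Serialize one deployer-side usage log entry as JSON."""
    return json.dumps({
        # Timezone-aware timestamp, so entries order correctly across systems.
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "decision": decision,
        "human_reviewed": human_reviewed,
    })

entry = log_ai_decision("credit-scoring", "payment_plan_refused", True)
```

Append-only storage (or at least tamper-evident storage) is what turns these entries into a defensible audit trail rather than just application logs.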
Obligation 4 - Ensure transparency toward affected persons
Article 50 applies from August 2, 2026 and is not postponed by the Omnibus. Any person subject to an automated decision must be informed. In practice:
- A customer receiving an AI-generated dunning email must be able to know AI is involved.
- A company refused a payment extension by an automated system must be able to request human review.
- AI-generated content (reports, emails) must be identifiable as such (watermarking by December 2, 2026).
This is the obligation most immediately operational for CFOs, and the one not postponed.
What penalties apply?
AI Act penalties dwarf those of the GDPR and are not modified by the Omnibus:
- €35M or 7% of global revenue (whichever is higher) for use of a prohibited system.
- €15M or 3% of global revenue for breaches of high-risk obligations.
- €7.5M or 1% of global revenue for supplying inaccurate information to the authority.
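The "whichever is higher" rule is easy to underestimate: for a large group, the percentage arm dominates the fixed cap. A two-line calculation makes the exposure concrete (the €2bn revenue figure is purely illustrative):

```python
def max_fine(revenue_eur, fixed_cap_eur, pct):
    """AI Act fine rule: the greater of the fixed cap and a share of revenue."""
    return max(fixed_cap_eur, revenue_eur * pct)

revenue = 2_000_000_000  # €2bn global revenue (illustrative)
print(max_fine(revenue, 35_000_000, 0.07))  # 140000000.0 -> the 7% arm dominates
print(max_fine(revenue, 15_000_000, 0.03))  # 60000000.0
```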
Omnibus novelty: the reduced thresholds previously reserved for SMEs are extended to small mid-cap enterprises (SMCs), companies with under 750 employees, bringing simplified documentation and proportional penalties.
Competent authorities in France: CNIL (data), ACPR (finance), DGCCRF (consumer), Arcom (media).
Revised roadmap: 90 days to lay foundations, 18 months to execute
The postponement does not change the initial roadmap; it stretches it. Recommended split for CFOs starting in May 2026:
Phase 1 - Foundations (May → August 2026, 90 days)
- D+0 to D+30: internal audit of AI uses. Exhaustive listing, high-risk vs limited-risk classification. Identify role (provider or deployer). Reconcile with existing GDPR inventory.
- D+30 to D+60: governance. Appoint an AI lead (often within DPO or compliance). AI usage policy for finance teams. Human oversight charter. Update DPIAs and sign vendor contracts.
- D+60 to D+90: operations. Article 50 compliance (customer transparency, applicable August 2, 2026). Roll out a shared log register. Train finance managers.
Phase 2 - Ramp-up (Sep 2026 → Dec 2026)
- Prepare watermarking of AI-generated content (December 2, 2026 deadline).
- Begin technical documentation of high-risk systems (Annex IV).
- Test human intervention procedures.
Phase 3 - High-risk compliance (2027)
- Finalize Annex IV documentation based on published CEN-CENELEC harmonized standards.
- High-risk system conformity assessment.
- Registration in the EU database (for providers).
- Final deadline: December 2, 2027 for standalone Annex III systems.
Four traps to avoid
Trap 1 - Believing the postponement cancels obligations. The May 7 deal is not yet law. Transparency (Article 50) and prohibited practices (Article 5) remain applicable, and harmonized standards will be published between 2026 and 2027.
Trap 2 - Underestimating the "deployer" role. Many companies assume the AI Act only applies to providers (OpenAI, Mistral, Anthropic). False. The deployer (you, as a user of ChatGPT Enterprise or Cleavr) has obligations of its own on oversight and transparency.
Trap 3 - Stacking frameworks. AI Act, GDPR, DORA, CSRD, NIS 2. Don't create 5 separate committees: a single compliance committee with AI sub-tracks works better.
Trap 4 - Confusing "limited risk" with "no risk". Customer chatbots and content generators are "limited risk", but they still require transparency and opt-out.
FAQ - The questions CFOs ask after the May 7 deal
Is the AI Act suspended until 2027?
No. Only the obligations specific to high-risk systems (Annex IV technical documentation, conformity assessment, registration) are postponed from August 2, 2026 to December 2, 2027. Prohibited practices (Article 5), AI literacy (Article 4), GPAI obligations and deployer transparency (Article 50) remain in force or applicable on August 2, 2026.
Is the postponement final?
No. The May 7, 2026 agreement is a provisional political agreement that must still be formally adopted and published in the EU Official Journal before August 2, 2026. If adoption is delayed, the original dates apply.
What are the new dates after the May 7 deal?
Standalone high-risk systems in Annex III (credit scoring, HR, essential services, biometrics, etc.) shift from August 2, 2026 to December 2, 2027. Systems embedded in regulated products in Annex I (medical devices, machinery, toys) shift from August 2, 2027 to August 2, 2028. Watermarking of AI content is brought forward to December 2, 2026.
What risk classification does the AI Act use?
The AI Act classifies systems into 4 tiers: prohibited (social scoring, real-time biometric ID), high-risk (Annex III: HR, credit, essential services, biometrics, etc.), limited risk (chatbots, content generators), minimal risk (spam filters, games). Classification is not modified by the Digital Omnibus.
Does pure B2B scoring fall under high risk?
No, unless it targets a natural person (personal guarantee). Entity-only scoring remains subject to Article 50 transparency.
Does ChatGPT Enterprise make us compliant?
No. ChatGPT Enterprise is compliant on the provider side. You remain a deployer and must document your own usage, use cases and oversight.
Is our AI code of conduct enough?
No. A code of conduct does not replace per-system risk analysis or Annex IV documentation.
Do we need an AI DPO?
Not mandatory but recommended. Often attached to the existing DPO or compliance function.
Which authorities are competent in France?
Four authorities share supervision: CNIL (data), ACPR (financial services), DGCCRF (consumer), Arcom (media).
What if we're a scale-up < 50 people?
Obligations apply regardless of size. Omnibus novelty: relaxed thresholds extend to companies with under 750 employees ("small mid-cap enterprises" / SMC). Simplified documentation and proportional penalties.
Conclusion: the timeline changes, the stakes don't
The AI Act is not Y2K. It is a permanent governance framework reshaping how the finance function deploys AI. The postponement from August 2026 to December 2027 is operational breathing room, not a course change. The winners will not be the companies that waited for the deadline, but those that built workable AI governance starting now: an up-to-date inventory, defensible classification, documented human oversight and effective customer transparency.
Official sources and references
- Regulation (EU) 2024/1689 — AI Act, consolidated text
- Council of the EU — May 7, 2026 press release on AI Act provisional agreement
- EU Digital Strategy — Navigating the AI Act (official FAQ)
- EU Digital Strategy — AI regulatory framework (updated May 2026)
- Service Public Entreprendre — AI Act and businesses
- ACPR — AI in financial services
- CNIL — AI and compliance
Article updated May 15, 2026. For any post-adoption changes, refer to the EU Official Journal.