
AI Use Presumption: What the French Senate Bill Means for Creators' Priority Evidence

On 8 April 2026, the French Senate adopted in first reading a bill creating a presumption that an AI provider used a protected work whenever a bundle of indicators makes that use plausible. A breakdown of the text, the reversal of the burden of proof, and what it means for creators who want to gather these indicators in advance.


An author tests a generative AI model and discovers it produces stylistic passages strikingly close to her latest novel, published two years earlier. Not exact wording, but sentence structures, lexical choices, a tone. She wants to take legal action. First obstacle: how does she prove that her book is part of the model's training data, when the provider has never published its list?

This asymmetry is at the heart of the bill adopted on 8 April 2026 by the French Senate in first reading. The text creates a presumption of use of a work by an AI provider whenever a bundle of indicators makes that use plausible. If the presumption applies, the burden falls on the AI provider to demonstrate that it did not use the work, not on the creator to prove that it did.

For the first time in France, a legislative text tackles head-on the evidentiary lock that has protected AI models in copyright disputes. The text is not yet law, but it clearly signals the direction the legislator is taking.

Why this bill, and why now?

Since the explosion of generative AI models (text, image, code, voice), one major legal question has lacked a simple answer: how can a creator prove that a specific work of theirs was used to train a model, when the provider discloses neither the composition nor the origin of its training datasets?

Three factors fuel the legislative pressure:

  • Model opacity. Providers of LLMs, image generators, and voice generators do not, with rare exceptions, disclose the list of sources used for training. This opacity makes proof of use almost impossible for an isolated creator.
  • European pressure. The AI Act (Regulation (EU) 2024/1689) now requires general-purpose AI model providers to publish a sufficiently detailed summary of training data (Article 53). This first step is not sufficient to ground evidence in an individual dispute.
  • Mounting litigation. In the United States, several lawsuits have pitted media outlets (New York Times, Getty Images), authors and artists against AI providers. In France and Europe, the first cases are emerging and hitting this evidentiary wall.

What the Senate text says

The bill (text adopted No. 85, 2025-2026 session) rests on a central mechanism: a rebuttable presumption.

  • Scope: works and subject matter protected by the French Intellectual Property Code (copyright and related rights) used by an AI system provider
  • Trigger of the presumption: the existence of one or more indicators making use plausible
  • Effect: reversal of the burden of proof; the AI provider must demonstrate that it did not use the work, or that it had the right to do so
  • Nature of the presumption: rebuttable, i.e. capable of being overturned by contrary evidence
  • Articulation with the AI Act: a national complement, with no derogation from the European framework

The full text is available on the Senate website: Texte adopté n° 85 (TA 25-085).

Rebuttable, not absolute

A rebuttable presumption can be overturned by the opposing party. The AI provider therefore retains the possibility of demonstrating that it did not use the work — for instance by producing the actual list of its training sources, or by demonstrating that the work was not included. What changes is who must provide the evidence.

Why reversing the burden changes everything

Under French law, Article 1353 of the Civil Code lays down the general rule: "anyone who claims the performance of an obligation must prove it." In a classic copyright dispute, the creator must therefore prove the unauthorized use of their work. With a closed AI model, this burden is in practice impossible to bear.

The reversal does not eliminate the need to prove — it shifts it. The creator must still bring an initial plausible bundle of indicators. But once that threshold is crossed, it is up to the AI provider to produce the elements that only it possesses.

What indicators can trigger the presumption?

The text does not draw up an exhaustive list — it will be for the judges to qualify what constitutes an indicator "making use plausible." Several categories are already discussed in legal scholarship and the first disputes:

  • Stylistic or structural similarities between the model's outputs and the original work, established by expert analysis
  • Regurgitation of passages close enough that use is plausible (verbatim or near-verbatim)
  • Metadata and signatures that the model sometimes lets through in its outputs (artifacts, watermarks)
  • Documented scraping traces (server logs showing massive accesses from IP ranges identified as belonging to AI crawler operators)
  • Publication date earlier than the model's release, cross-referenced with the public accessibility of the work at training time
  • Presence of a unique identifier (token, invented word, graphic signature) deliberately added by the creator and produced by the model

No single indicator will suffice on its own. It is the convergence of several that creates plausibility: exactly the "faisceau d'indices" (bundle of indicators) logic well known in French evidence law.
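The last indicator in the list above, a deliberately planted identifier, lends itself to a simple check. A minimal sketch of the idea (the token `zelvatrine-9x` and the helper `canary_leaked` are hypothetical; a real dispute would rely on expert analysis, not a bare string match):

```python
# Hypothetical sketch: the creator invented a token and embedded it
# in the published work before any scraping could have occurred.
CANARY = "zelvatrine-9x"  # assumed unique to this work

def canary_leaked(model_output: str) -> bool:
    """Return True if the model reproduces the planted token,
    a strong single indicator that the work entered the training data."""
    return CANARY.lower() in model_output.lower()

print(canary_leaked("...the scent of Zelvatrine-9X hung in the air..."))  # True
print(canary_leaked("an unrelated output"))                               # False
```

In practice the creator would document when and where the token was published, so the check feeds into the bundle rather than standing alone.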

What this changes concretely for creators

For authors, photographers, journalists, illustrators, screenwriters, musicians, and developers, the message is clear: the priority and authorship evidence you build today may become tomorrow the indicator that triggers the presumption.

Three reflexes become strategic:

  1. Timestamp the work upon creation (not only at publication). An electronic timestamp proves that on a precise date, a file identical to your current work existed. This is the foundation of a bundle of indicators: demonstrating that at the time the model was trained, your work already existed and was accessible.
  2. Keep original metadata and intermediate versions. RAW files for photos, source files for designs and code, version history for texts (Git, Word, Drive). These elements document the creation chain and reinforce the credibility of the bundle.
  3. Document the publication chain and public accessibility. Capture the publication date, the URLs under which the work was accessible, and any presence in public datasets. This makes it possible to demonstrate that the work was technically scrapable when the model was trained.
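The first reflex rests on a hashing principle that can be sketched in a few lines. This is an illustration only, not the LegalStamp API: `fingerprint` is a hypothetical helper, and a real timestamp additionally anchors the digest in an independent ledger (a blockchain or a trusted timestamping service).

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(data: bytes) -> dict:
    """Pair a SHA-256 digest of the work with a capture date.
    Only the digest needs to be anchored publicly; the work
    itself never leaves the creator's hands."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Fingerprint the manuscript as it stands today; repeat at each milestone.
draft = "Chapter 1. The rain had not stopped for three days.".encode()
record = fingerprint(draft)
print(record["sha256"][:16], record["recorded_at"])
```

Because the digest changes if a single byte of the file changes, each anchored fingerprint pins one exact version of the work to one date.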

Typical use case: a published author

Imagine an author whose novel was published in 2023 and remained partially available in open access on the publisher's site. In 2026, an AI model released to the market in 2025 produces passages stylistically very close to it.

  • Manuscript creation date: electronic timestamping of the original file (2022-2023)
  • Official publication date: legal deposit, ISBN, press releases (2023)
  • Accessibility at training time: Wayback Machine captures of the publisher's site (2023-2024)
  • Stylistic similarities: expert analysis comparing the original manuscript and the AI outputs
  • Documented scraping traces: publisher server logs showing suspicious accesses

No single indicator alone proves use. But their convergence makes use plausible — which, under the proposed regime, would suffice to activate the presumption and shift the burden of proof to the AI provider.

And what about LegalStamp?

Blockchain timestamping is not a magic answer to the evidentiary problem in AI disputes. But within the bundle of indicators it constitutes a solid, difficult-to-contest element: cryptographic proof that this work, in this precise format, existed on this date, independently verifiable and at marginal cost.

Concretely, LegalStamp allows a creator to massively timestamp their works as they are produced:

  • At creation (manuscript in progress, photo series straight from the shoot, initial source code)
  • At each milestone (major revisions, submission to a publisher, delivery to a client)
  • Before publication (priority date before going live, ability to demonstrate public accessibility during a model's training window)

It is one tool among others: it does not replace metadata documentation, expert similarity analysis, or scraping traces. But it provides the decentralized temporal anchor that makes the entire bundle of indicators more credible. And its marginal cost (cents per timestamp at volume) makes it compatible with a systematic strategy, where a bailiff deposit or a Soleau envelope costs dozens of euros per work.
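The "independently verifiable" property rests on a mechanism simple enough to sketch: anyone holding the work can recompute its digest and compare it with the anchored value. A minimal illustration (`verify` is a hypothetical helper; real verification also checks the blockchain transaction that carries the digest and its confirmation date):

```python
import hashlib

def verify(data: bytes, anchored_sha256: str) -> bool:
    """True if the file is byte-for-byte identical to the one
    whose digest was anchored on the recorded date."""
    return hashlib.sha256(data).hexdigest() == anchored_sha256

work = b"Chapter 1. The rain had not stopped for three days."
anchor = hashlib.sha256(work).hexdigest()  # value read back from the ledger

print(verify(work, anchor))         # True: the work is intact
print(verify(work + b".", anchor))  # False: any change breaks the match
```

This is why the check needs no trusted intermediary: the judge, the opposing party, or an expert can all run the same computation and reach the same result.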

To understand how timestamping fits into a global priority evidence strategy, see our article Proof of Priority: 8 Methods Compared.

What remains unclear (and what to watch)

Several points remain to be clarified:

  • Parliamentary timeline. The text has only been adopted in first reading by the Senate. It must still be examined by the National Assembly, possibly with back-and-forth between chambers, or even a joint committee (commission mixte paritaire). The timeline depends on the parliamentary agenda and political trade-offs.
  • Standard of the bundle of indicators. How many indicators, of what quality, will be needed to activate the presumption? The first rulings will give the answer.
  • Articulation with the AI Act. The European AI Act already requires providers to publish a summary of training data (Article 53). The French text complements, does not replace. Overall coherence will be tested by the judges.
  • Extraterritorial reach. Can an AI provider established outside the EU be sued before a French court? This classic question of private international law will grow in importance.
  • Role of opt-outs. EU Directive 2019/790 allows rights holders to oppose text and data mining (TDM, Article 4). The proposed presumption could interact with these opt-outs — a creator who has expressed their refusal would have an additional argument.

Conclusion

The Senate vote of 8 April 2026 does not immediately change applicable law, but it raises a simple political question: in a world of opaque AI, who must prove what? The text decides in favor of creators and shifts the burden onto providers, as soon as a bundle of indicators makes use plausible.

For authors, photographers, journalists, and creators in general, the practical message is immediate: the priority evidence you build today is the indicator you will activate tomorrow. The older, the more technical, and the more independently verifiable the trace, the more weight it carries in the bundle.

LegalStamp does not claim to solve the whole problem, but it provides the missing building block: massive, low-cost, independently verifiable timestamping that fits into a global evidence strategy. The rest (metadata, expert analysis, publication chain documentation) remains your responsibility. But a solid temporal base is exactly what a blockchain timestamp is designed to produce.

FAQ

What does the bill adopted by the Senate actually provide?
The text (TA No. 85, 2025-2026 session) creates a presumption of use of a work or protected subject matter by an AI system provider whenever a bundle of indicators makes that use plausible. The burden of proof then shifts to the AI provider, who must demonstrate the contrary or justify its use. The text has been adopted only in first reading by the Senate; it still has to be examined by the National Assembly.

Does the presumption apply automatically?
No. The presumption does not trigger automatically; it requires a bundle of indicators that makes use plausible. The rights holder must bring this initial bundle (similarities, regurgitations, metadata, documented scraping traces). Once that threshold is met, it is up to the AI provider to prove that it did not use the work, or that it had the right to do so.

Why is it so hard to prove that a work was used to train a model?
Generative AI providers generally do not publish the exact list of works used for training. For an author, journalist, or photographer, it is currently almost impossible to formally prove that a specific work was used to train a model. The text aims to rebalance this asymmetry by transferring the burden of proof to the provider, who alone knows its training datasets.

Can a blockchain timestamp help trigger the presumption?
Yes, indirectly. A timestamp proves that on a given date, a specific file existed with specific content. Combined with other indicators (date of publication prior to the model's release, similarities observed in the model's outputs, accessibility of the content on the web), it contributes to building the bundle of indicators the text requires the rights holder to provide.

Is the text already applicable law?
No. As of 4 May 2026, the text has been adopted only in first reading by the Senate. It must now be examined by the National Assembly, possibly with back-and-forth between chambers, and promulgated to become law. The exact timeline depends on the parliamentary agenda. In the meantime, it is a strong political signal, not an applicable text.

Which works are covered by the presumption?
The text covers "works or protected subject matter" within the meaning of the French Intellectual Property Code, which broadly covers copyright and related rights: texts, images, photographs, videos, musical scores, software, original designs, etc. The precise nature of each work will of course be subject to case-by-case qualification by the judges.

What should creators do while the bill moves through Parliament?
Three reflexes: (1) systematically timestamp your original files with their exact creation date, (2) keep the original metadata (RAW for photos, source files for designs and code, manuscripts with version history for texts), (3) document the publication chain (where, when, under which license) to be able to demonstrate both the priority and the accessibility of the work to an AI crawler.

What happens if the bill is never adopted?
The text can be amended, rejected, or its adoption postponed. Even in the event of deadlock, the debate will have publicly raised the question of the burden of proof in AI and copyright disputes. Several court rulings (notably in the United States) already follow convergent logics; the underlying movement does not depend on the French vote alone.

Disclaimer (general information): this article is provided for educational purposes and does not constitute legal advice. The bill discussed has, as of the publication date, only been adopted in first reading by the French Senate; its content may evolve during the parliamentary process. To bring an action based on the presumed use of a work by an AI system, have your strategy validated by a lawyer specialized in intellectual property.

Jeremy

Founder of LegalStamp, passionate about blockchain and the protection of creations.

