Report | Doc. 16374 | 01 April 2026
Copyright enforcement in the artificial intelligence environment
Committee on Culture, Science, Education and Media
A. Draft resolution
1. The Parliamentary Assembly
stresses that intellectual property rights are a fundamental catalyst
for innovation and investment across diverse industry sectors.
2. The Assembly values the fact that cultural and creative industries
represent a significant economic force in Europe, with a workforce
numbering in the millions. These industries depend on copyright
law for the protection of the rights and interests of authors and
other rightsholders, as well as for the remuneration of their creative
works and contributions.
3. The Assembly acknowledges that the emergence of the artificial
intelligence (AI) era has given rise to a particularly challenging
set of problems for the creative sector.
4. In order to feed their data-hungry systems, AI companies are
scraping the internet without prior permission and without remunerating
content creators on the basis of legislative provisions that are
neither clear-cut nor fit for purpose.
5. AI training requires the making of copies at different stages:
from the initial web scraping and the creation, online publication
and downloading of datasets, to the actual AI training with those
datasets and the use of the resulting model. Where the copied material
is copyright-protected, those copies are acts of reproduction and
would require the authorisation of the relevant rightsholders unless
they are covered by a copyright exception or limitation.
6. The Assembly notes that current European Union legislation
does not offer a solution to this problem. The so-called “text and
data mining (TDM) exceptions”, which were adopted before the advent
of generative AI, place the onus on copyright holders to opt out
of these exceptions and do not include any remuneration provisions.
7. Moreover, there are doubts regarding the applicability of
the text and data mining exceptions to generative AI, in particular
since these exceptions must comply with Article 5(5) of Directive
2001/29/EC which states that limitations or exceptions apply only
to “certain special cases which do not conflict with a normal exploitation
of the work or other subject-matter and do not unreasonably prejudice
the legitimate interests of the rightsholder”.
8. The Assembly underlines that such a legal environment is particularly
advantageous for US and Chinese companies. Without a level playing
field, innovation and competition in Europe will suffer.
In the absence of fairness, existing disparities in wealth and power
will be exacerbated. Unfortunately, the present legal system is
incapable of rectifying market failure, as regulators and smaller
competitors lack the financial resources to match the billion-dollar
legal expenditure of tech giants. Furthermore, judicial proceedings themselves
are inadequate for addressing public goods, such as trustworthy
information and digital infrastructure, or externalities, including
disinformation and environmental damage.
9. In particular, the sustainability of the news media ecosystem
may be at risk due to the immediate, short-lived economic value
of news content. It is important to note that lengthy litigation
is not an effective solution to loss of revenue when platforms use
that content without fair payment. During protracted legal proceedings, platforms
can generate advertising revenue and consolidate their market position,
while publishers forfeit revenue that is irrecoverable in the long
term.
10. Moreover, the impressive capabilities of generative AI tools
in generating new content give rise to other copyright-related issues.
11. There is an ongoing legal debate about the copyrightability
of works created using AI tools, and about who would hold the rights
resulting thereof. Whereas it seems obvious that an AI tool cannot
be a holder of rights, a case-by-case analysis might be required
to determine whether a work created with the intervention of an
AI tool can have a physical person as an author.
12. In any event, it is important to note that content generated
by AI systems on the basis of copyrighted material has the potential
to infringe copyright holders' rights of reproduction, communication
to the public and making available to the public.
13. GenAI content aimed at misleading people (so-called deepfakes)
is a subject of widespread concern. Deepfakes are not inherently
harmful and can be used for legal purposes, such as parody. However,
if misused, they can be used for disinformation purposes and to
manipulate public opinion in electoral processes, and can violate
personality rights by misusing a person’s image and voice. This
violation of personality rights can be particularly harmful when
it comes to the image of minors.
14. In light of all these considerations, the Assembly calls on
the Council of Europe member States to adopt a regulatory approach
that balances the rights and interests of AI providers and rightsholders
so that innovation is not achieved at the expense of creators, and
to protect citizens and democracy at large against the abuse of
AI tools. In this respect, they should in particular:
14.1. clarify in their national legislation
that copyright exceptions such as the text and data mining exceptions
introduced by the European Union’s Directive (EU) 2019/790 on copyright
and related rights in the Digital Single Market are not applicable
to the training of AI systems;
14.2. sign and ratify the Council of Europe Framework Convention
on Artificial Intelligence and Human Rights, Democracy and the Rule
of Law (CETS No. 225) and adopt or maintain measures to ensure that adequate
transparency and oversight requirements are in place to facilitate
the possibility for parties with legitimate interests, including
copyright holders, to exercise and enforce their intellectual property
rights;
14.3. require that providers of AI systems disclose training
data so that rightsholders can assert their rights and provide substantiation
in a court of law for any unauthorised utilisation of their content;
14.4. introduce a rule in their national legislation whereby
it would be presumed that commercial AI systems have been trained
on copyright-protected material in cases where transparency requirements are
not met;
14.5. introduce, in their national legislation, fair remuneration
rules based on an independent valuation for cases where access to
data is not possible, and support collective licensing schemes in
this regard;
14.6. introduce a compulsory final offer arbitration model that
allows a negotiating party to make a request for binding arbitration
to the relevant national ministry when the other party has
broken off negotiations, refused a request for negotiations, or
when negotiations do not seem likely to lead to a result;
14.7. require that content generated by AI systems is disclosed
as such by appropriate labelling that is machine-readable, interoperable,
and easily identified by human beings;
14.8. ensure that the unauthorised distribution of realistic
digitally generated imitations of personal characteristics is
illegal under their national legislation;
14.9. require that performers and artists are protected against
the unauthorised distribution of realistic digitally generated imitations
of their performances or artistic achievements;
14.10. promote media and information literacy, and invest in
media and civic education programmes to uphold critical thinking,
especially with regard to AI tools and the understanding of
their output.
15. The Assembly calls on AI providers to provide transparency
of data used for AI training, and show openness to dialogue and
good will in negotiations with rightsholders.
B. Explanatory memorandum
by Mr Mogens Jensen, rapporteur 
1. Introduction
1. Since Gutenberg's invention
of the printing press, new technologies have challenged and transformed the
production and exploitation of creative content. These include the
phonogram, the tape recorder, CDs, DVDs, Blu-ray discs, streaming
services, and the internet in general. Each of these developments
has required legislative adaptation to ensure that the rights of
those involved in the creative process are not disadvantaged or
simply ignored. Depending on the historical period and the technology
in question, this has led either to the reinforcement of exclusive
rights or to the introduction of remuneration rights, whichever
better balanced the rights and interests of creators and users.
2. The advent of the artificial intelligence (AI) era has brought
a particularly challenging set of problems. Intellectual property
owners no longer control their content, nor can they protect it.
In order to feed their data-hungry systems, AI companies are scraping
the internet without any permission and without remunerating content
creators. Consequently, these companies leverage their dominant
position to exert undue influence over access to information through
content moderation, censorship, algorithmic filtering, and model
training biases.
3. This unregulated environment is particularly advantageous
for US and Chinese companies. Without a level playing field, innovation
and competition in Europe will suffer. In the absence of fairness,
existing disparities in wealth and power will be exacerbated. Unfortunately,
the present legal system is incapable of rectifying market failure,
as regulators and smaller competitors lack the financial resources
to match the billion-dollar legal expenditure of tech giants. Furthermore,
judicial proceedings themselves are inadequate for addressing public
goods, such as trustworthy information and digital infrastructure,
or externalities, including disinformation and environmental damage.
This situation thus demands a new approach.
4. In line with the motion for a resolution on “Upholding our
diverse culture, freedom of expression and information by effective
enforcement of copyright” (Doc. 16165), which was referred to the Committee on Culture, Science,
Education and Media, my report examines how intellectual property
rights can be effectively enforced and proposes concrete lines of
action for strengthening freedom of expression and information.
2. Opportunities and risks of the use of AI systems in the cultural field and in protecting freedom of expression and information
5. Generative AI (GenAI) systems
are able to create original content (text, audio and video) after
being given instructions (so-called “prompts”) by a user of the
system. In order to achieve these results, GenAI systems must be
trained with a vast amount of data, including copyrighted material.
6. The applications of GenAI for the creative sector are numerous
and raise relevant legal questions not only in the field of copyright,
but also regarding personality rights, labour law and disinformation,
and could have an enormous impact on the media ecosystem. This report,
however, focuses on copyright-relevant aspects of AI training and
content generation.
7. Copyright protection is intended to safeguard the rights of
authors and other rightsholders in their works or subject-matter
for a limited period of time. However, there are certain exceptions
or limitations to this protection that do not conflict with a normal
exploitation of the work and do not unreasonably prejudice the legitimate
interests of the author.
8. Regarding the applicability of copyright to AI, there are
two sets of problems: those related to AI training and those related
to AI output.
9. In a nutshell, AI training requires the making of copies at
different stages: from the initial web scraping and the creation,
online publication and downloading of datasets, to the actual AI
training with those datasets and the use of the resulting model.
Those copies are copyright-relevant acts of reproduction and require
the authorisation of the author unless they are covered by a copyright
exception or limitation.
As I explain below, so-called exceptions for text and data mining
may apply to instances of copying in the framework of AI training.
10. The creation of works with the help of AI tools raises legal
questions regarding the authorship of the work and the liability
incurred when AI output amounts to plagiarism.
3. International legislation regulating AI and intellectual property rights
3.1. Standard-setting of the Council of Europe
11. The Council of Europe Framework
Convention on Artificial Intelligence and Human Rights, Democracy and
the Rule of Law (CETS No. 225) aims to ensure that activities within
the lifecycle of AI systems are fully consistent with human rights,
democracy and the rule of law, while being conducive to technological
progress and innovation. The Framework Convention does not explicitly
regulate intellectual property rights, although its explanatory
report mentions that AI requires “appropriate safeguards in the
form of transparency and oversight mechanisms” and that such transparency
could “facilitate the possibility for parties with legitimate interests, including
copyright holders, to exercise and enforce their intellectual property
rights”.
12. The Framework Convention is complemented by sector-specific
work throughout the Council of Europe. Most Council of Europe committees, intergovernmental
bodies and specialised bodies, as well as monitoring structures,
are considering the impact of AI on their respective field of activity.
13. In its Guidelines given the latest technological developments,
such as AI, complementing Council of Europe standards in the fields
of culture, creativity and cultural heritage, adopted by the Steering
Committee for Culture, Heritage and Landscape (CDCPP), the Council
of Europe states that our understanding of what constitutes creativity
and our existing mechanisms for nurturing, protecting, and rewarding
it are being challenged by GenAI. The ongoing economic viability
of the creative industries and media sectors may also be impacted
by the use of copyrighted data for AI model training, with many
creators calling for fair remuneration for their work, control over
where it is used by AI firms, and transparency over how data is
collected and processed.
14. According to the guidelines, a number of issues would require
clarification by policy makers, specifically:
- What should constitute an infringement when data protected by copyright is used without authorisation? Should such use be subject to a copyright exception, and in what circumstances?
- Should copyright protection be granted to AI-generated works, or should a human creator be required? In whom should the copyright be vested if copyright is attributed to AI-generated works?
- What information should be made public by suppliers of AI to enable rightsholders to exercise their rights when their content is being used? How can AI suppliers and developers look to enhance their transparency over AI model inputs (training datasets) and outputs (e.g. watermarking AI generated content)?
- Which tools or appropriate labelling should be adopted to inform the public about the use of AI systems in order to avoid deepfakes and manipulation of reality?
15. According to the Guidelines, in order to build trust in the
use of AI, member States should safeguard the interest of authors
of copyright-protected works and other rightsholders by:
- ensuring that copyright rules protect rightsholders' interests; this includes, but is not limited to, mechanisms to ensure that rightsholders can exercise their rights when copyright-protected works are used to train AI systems, while encouraging AI providers to fulfil transparency obligations towards rightsholders;
- strengthening the role of libraries in safeguarding copyright in the age of AI;
- providing exceptions to copyright for educational and research purposes to facilitate access to data for non-commercial purposes.
3.2. UNESCO
16. In November 2021, the United
Nations Educational, Scientific and Cultural Organization (UNESCO) adopted
a Recommendation
on the Ethics of Artificial Intelligence, in which it recommended that member States foster new
research at the intersection between AI and intellectual property,
for example to determine whether or how to protect with intellectual
property rights the works created by means of Al technologies. Moreover,
member States should also assess how AI technologies are affecting
the rights or interests of intellectual property owners, whose works
are used to research, develop, train or implement AI applications.
3.3. European Union Law
17. Unlike the legal instruments described above, EU legislation
contains hard-law instruments that regulate the relationship between
AI and copyright, albeit in an unsatisfactory manner.
18. The EU Directive on copyright and related rights in the Digital
Single Market (CDSM) (Directive (EU) 2019/790) aims to harmonise
EU law applicable
to copyright and related rights in the framework of the internal
market, taking into account, in particular, digital and cross-border
uses of protected content. It also lays down rules on exceptions
and limitations to copyright and related rights, on the facilitation
of licences, as well as rules which aim to ensure a well-functioning
marketplace for the exploitation of works and other subject matter.
19. It is important to note that the CDSM was adopted before the
advent of genAI and therefore its rules, and notably its provisions
relating to text and data mining, were ill-suited to the current
AI environment from the start.
20. “Text and data mining” (TDM) is defined in the CDSM as “any
automated analytical technique aimed at analysing text and data
in digital form in order to generate information which includes
but is not limited to patterns, trends and correlations” (Article
2(2) CDSM).
21. The CDSM contains two exceptions for TDM:
- Article 3 provides for an exception to database rights, reproduction rights, and press publications rights for reproductions and extractions made by research organisations and cultural heritage institutions in order to carry out, for the purposes of scientific research, TDM of works or other subject matter to which they have lawful access;
- Article 4 provides for an exception or limitation to the rights mentioned above and to distribution rights for reproductions and extractions of lawfully accessible works and other subject matter for the purposes of TDM. This exception or limitation applies on condition that the use of the protected works has not been expressly reserved by their rightsholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online (so-called “opt-out right”).
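In practice, the "machine-readable means" condition of Article 4 has given rise to technical conventions for expressing an opt-out, such as the W3C community specification known as the TDM Reservation Protocol (TDMRep), which signals a reservation through a "tdm-reservation" HTTP header or HTML meta tag. The following is a purely illustrative sketch, assuming the TDMRep signal names, of how a crawler could check for such a reservation before mining a page; it is not a complete or authoritative implementation:

```python
# Illustrative sketch: checking a machine-readable TDM opt-out signal
# before using a page for text and data mining. The header and meta-tag
# names follow the W3C TDM Reservation Protocol (TDMRep) draft.
from html.parser import HTMLParser


def tdm_reserved_by_header(headers: dict) -> bool:
    """True if the HTTP response headers reserve TDM rights."""
    # TDMRep: a "tdm-reservation: 1" header means rights are reserved.
    return headers.get("tdm-reservation", "").strip() == "1"


class _MetaScanner(HTMLParser):
    """Scans an HTML document for a tdm-reservation meta tag."""

    def __init__(self):
        super().__init__()
        self.reserved = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "tdm-reservation":
            self.reserved = (a.get("content") or "").strip() == "1"


def tdm_reserved_by_meta(html: str) -> bool:
    """True if an HTML page reserves TDM rights via a meta tag."""
    scanner = _MetaScanner()
    scanner.feed(html)
    return scanner.reserved


page = '<html><head><meta name="tdm-reservation" content="1"></head></html>'
print(tdm_reserved_by_header({"tdm-reservation": "1"}))  # True
print(tdm_reserved_by_meta(page))                        # True
```

Under this sketch, a compliant miner would skip any page where either signal is set; the check deliberately omits the protocol's other mechanisms, such as the site-wide /.well-known/tdmrep.json file.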
22. There is an academic discussion about the applicability of
the TDM exceptions to GenAI, in particular since both exceptions
must comply with Article 5(5) of Directive 2001/29/EC, which states
that limitations or exceptions apply only to “certain special cases
which do not conflict with a normal exploitation of the work or other
subject-matter and do not unreasonably prejudice the legitimate
interests of the rightsholder”. 
23. Furthermore, Article 4 CDSM does not provide clear guidance
as to what constitutes "lawfully accessible works" and how an express
reservation in an appropriate manner is to be made.
24. In any case, if these exceptions were to apply to GenAI, they
would certainly not provide adequate protection for intellectual
property rights holders, particularly given that the TDM rules place
the onus on copyright holders to opt out of the TDM exception and
do not include any remuneration provisions.
25. A newer legal instrument, the EU's AI Act, does not deal
directly with intellectual property rights,
but makes reference to the relevant EU copyright rules (see above),
and highlights the importance of providing transparency on the data
that is used in the pre-training and training of general-purpose
AI models “to facilitate parties with legitimate interests, including
copyright holders, to exercise and enforce their rights under Union law”.
26. On 18 July 2025, the European Commission published its Guidelines
on the scope of obligations for providers of general-purpose AI
models under the AI Act. These guidelines aim to increase legal clarity and
to provide insights into the Commission’s interpretation of the
provisions regarding general-purpose AI systems, in light of their entry
into application on 2 August 2025. They are part of the package
preparing the application of the rules for providers of general-purpose
AI models. The package comprises these guidelines, the General-Purpose
AI Code of Practice and the adequacy assessment by the Commission
and the AI Board, the Template for general-purpose AI model providers
to summarise their training content, and further elements, such as
a template for the notifications that providers of general-purpose
AI models with systemic risk must submit to the AI Office.
27. The AI Act and its implementation package have been accepted
by most major AI companies (with the notable exception of Meta) but have been harshly criticised
by copyright holders for not taking sufficiently into account the rights
and interests of the creative sector.
28. On 30 July 2025, a broad coalition of rightsholders active
across the EU’s cultural and creative sectors released a joint
statement regarding the AI Act implementation measures adopted
by the European Commission. According to this joint statement, the
final outcomes failed to address the core concerns raised by their
sectors, being a “missed opportunity to provide meaningful protection
of intellectual property rights in the context of GenAI” which did
not deliver on the promise of the EU AI Act itself. The signatories
called on the European Commission to revisit the implementation
package and enforce the obligation to disclose a sufficiently detailed
summary of training data (Article 53(1)(d) AI Act) in a meaningful
way, and the European Parliament and member States, as co-legislators,
to challenge the “unsatisfactory process” of this exercise, “which
will further weaken the situation of the creative and cultural sectors
across Europe and do nothing to tackle ongoing violations of EU
laws”.
29. Regarding the Template for general-purpose AI model providers
to summarise their training content, News Media Europe (NME) criticised it for being “alarmingly superficial”, lacking
“the specificity and granularity necessary for rightsholders to
verify whether their copyright-protected content has been exploited
— let alone to enforce their rights effectively”. NME called on
the European Commission to urgently revise the template and establish
an enforcement mechanism including:
- mandatory disclosure of all scraped domains, not just a curated sample;
- itemised identification of licensed versus unlicensed datasets;
- a binding “upon request” mechanism with deadlines and penalties;
- clear consequences for providers who fail to comply in good faith.
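To illustrate what an "itemised" disclosure along the lines demanded above might contain, the following sketch builds a hypothetical machine-readable summary record. Every field name here is invented for illustration; no official template defines such a format:

```python
# Hypothetical sketch of an itemised training-data disclosure record,
# distinguishing licensed from unlicensed datasets and listing all
# scraped domains rather than a curated sample. All field names and
# values are invented for illustration.
disclosure = {
    "model": "example-model",            # hypothetical model name
    "scraped_domains": [                 # full list, not a sample
        "news.example.org",
        "archive.example.net",
    ],
    "datasets": [
        {"name": "licensed-news-2024", "licensed": True,
         "licence": "collective-licensing-agreement"},
        {"name": "common-web-crawl", "licensed": False,
         "licence": None},
    ],
}

# With such a record, rightsholders could filter for unlicensed sources:
unlicensed = [d["name"] for d in disclosure["datasets"] if not d["licensed"]]
print(unlicensed)  # ['common-web-crawl']
```

The point of the sketch is the level of granularity: a per-dataset licensed/unlicensed flag and an exhaustive domain list are what would let a rightsholder verify whether their content was used, which a narrative summary does not.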
4. Current legislative proposals
4.1. European Union
30. In light of the criticism surrounding
EU legislation on these matters, a number of legislative proposals have
emerged.
31. A report commissioned by the European Parliament’s Policy Department
for Justice, Civil Liberties and Institutional Affairs at the request
of the Committee on Legal Affairs already called for “clear rules
on input/output distinctions, harmonised opt-out mechanisms, transparency
obligations, and equitable licensing models”.
32. Moreover, MEP Axel Voss, concerned that fundamental rights
such as copyright, but also personal rights and protection against
discrimination are coming under increasing pressure and can hardly
be enforced effectively anymore, presented an own-initiative
report in the European Parliament that sets out practical and fair
solutions to the tensions between AI development and copyright.
33. MEP Voss proposes
a long-term solution in the form of a copyright framework regulation similar
to the General
Data Protection Regulation (GDPR). His report supports the following measures:
- An urgent and thorough assessment of the EU copyright acquis as regards the legal uncertainty and competitive effects associated with the use of creative works for the training of genAI systems, as well as the dissemination of AI-generated content that may substitute human-created expression. Such assessment must aim to uphold a framework in which fair remuneration mechanisms enable European artistic and creative production to thrive in the context of AI-driven global transformation.
- A transitional remuneration obligation on providers of general-purpose AI models immediately imposed until the reforms envisaged in Mr Voss’s report are enacted, given that content created by genAI systems trained with copyrighted content may result in the provision of products and services that directly compete with those of the rights holders.
- A clarification of the TDM exception under Article 4 CDSM as regards the main flaws and ambiguities detected thus far in its application, especially as concerns the establishment of a clear machine-readable standard for the opt-out and the concept of “lawful access”.
- A legal framework for GenAI compatible with the three-step test of Article 5(5) of the InfoSoc Directive. This framework should be created in one of the following ways:
- through the introduction of a dedicated exception to the exclusive rights to reproduction and extraction, or
- by expanding the scope of the provision for TDM under Article 4 of the CDSM Directive to cover the training of GenAI.
- In both cases, rightsholders should have the right to opt out through a standardised, machine-readable mechanism.
- The European Union Intellectual Property Office (EUIPO) should be responsible for setting up and managing a central register of opt-outs and for mediating licensing processes. Both opt-out declarations and licence offers should be recorded in machine-readable form in the same register.
- Providers and deployers of general-purpose AI models and systems should provide full and actionable transparency and source documentation with regard to the use of copyright-protected works for any purpose, including for inferencing, retrieval-augmented generation, or finetuning, taking into due account the need to protect trade secrets and confidential business information.
- The establishment of an irrebuttable presumption that copyrighted works have been used for the training of GenAI where the statutory transparency obligations set out in this resolution have not been fully complied with.
- AI-generated content should remain ineligible for copyright protection, and the public domain status of such works be clearly determined.
- Finally, Mr Voss calls on the European Commission to explore measures to counter the infringement of the rights of reproduction, of making available to the public and of communication to the public through the production of GenAI outputs.
34. The Voss report was widely commented on and, to some extent,
criticised. The Society of Audiovisual Authors
(SAA) welcomed Mr Voss’s call
for an immediate remuneration obligation
on providers of AI models and systems and for full, actionable transparency
and source documentation by providers and deployers of models and
systems in respect of their use of protected works. However, the
report's proposals regarding the TDM exception were considered
disappointing because they consisted only of clarifying that the
exception applies to GenAI while retaining the opt-out logic. According
to the SAA, opt-out regimes do not trigger licensing and remuneration,
so a central registry would only be a waste of time and money. A
joint statement by EWC, EFJ and CEATL also rejected the expansion
of the TDM exception to cover GenAI use, or any addition of a new
"AI exemption". The majority of writers, journalists and translators
remained strictly opposed to the use of their works for the development
of GenAI, as they did not want to support a system developed to
replace them and harm freedom of speech and expression.
35. On 28 January 2026, the Committee on Legal Affairs of the
European Parliament adopted Voss’s report.
The
text as adopted contains the following proposals:
- Remuneration for use of protected work: EU copyright law must apply to all GenAI systems available on the EU market, regardless of where the training takes place. MEPs want full transparency about the use of copyrighted content, and failure to comply with transparency requirements could be tantamount to infringement of copyright, for which AI providers could bear legal consequences. MEPs also demand fair remuneration for the use of copyrighted content by AI, and call on the European Commission to examine whether such remuneration could also apply to past use, while rejecting the idea of a global licence allowing providers to train their GenAI systems in exchange for a flat-rate payment;
- Protecting the news media sector and individual rights: MEPs call on the European Commission and member States to protect media pluralism. The news media sector must have full control over the use of its content for training AI systems, including the possibility to refuse such use. MEPs also urge the European Commission to ensure adequate remuneration for this use. Content fully generated by AI should not be protected by copyright. MEPs call for measures to protect individuals against the dissemination of manipulated and AI-generated content and for an obligation on digital service providers to act against such illegal use;
- Possibility for rightsholders to prevent their work from being used by AI: MEPs request new rules to address the licensing of copyrighted material for use by GenAI. They call on the European Commission to facilitate the establishment of voluntary collective licensing agreements per sector accessible to all, and ask the European Commission to explore tools allowing rightsholders to prevent their work from being used by general-purpose AI systems.
4.2. France
36. In December 2025, some French senators introduced a draft
bill establishing a presumption of exploitation of cultural content
by providers of AI.
This bill would introduce an Article
L. 331-4-1 to the French
Code of Intellectual Property, which would read as follows: “Unless proven otherwise,
the object
protected
by copyright or a related right, within the meaning of this code,
shall be presumed to have been exploited by the artificial intelligence
system, where evidence relating to the development or deployment of
that system or to the result generated by it makes such exploitation
likely.”
4.3. United Kingdom
37. In December 2024, the UK Government launched a consultation
on the UK's legal framework for copyright, exploring solutions that
support both the creative industries and the AI sector; it attracted
11 500 responses. According to the consultation document,
the current legal framework does not meet the needs of UK’s creative
industries or AI sectors due to its lack of clarity, which leads
AI developers to train their models in jurisdictions with clearer
or more permissive rules, disadvantaging UK-based SMEs who cannot
train overseas. The consultation proposed an approach that aimed
to:
- enhance rightsholders’ control of their material and their ability to be remunerated for its use;
- support wide access to high-quality material to drive development of leading AI models in the UK;
- secure greater transparency from AI developers to build trust with creators, creative industries, and consumers.
38. According to the UK government, the introduction of an exception
to copyright law for TDM similar to that introduced by the EU should
enhance the ability of rightsholders to protect their material and
seek remuneration for its use through increased licensing, while
motivating AI developers to train leading models in the UK. The consultation
sought views on the following issues:
- transparency;
- technical standards;
- contracts and licensing;
- labelling;
- computer-generated works;
- digital replicas;
- emerging issues.
39. In July 2025, new expert working groups bringing together key
figures from the creative and AI sectors gathered
in London for the first in a series of planned regular meetings. These
talks will inform next steps following the conclusion of the government’s
consultation.
5. Case law
40. The jurisprudence of international
and national courts may clarify the current rules and provide food
for thought for the drafting of future ones. The following paragraphs
present current jurisprudence and ongoing lawsuits in both the United
States and the European Union.
5.1. United States
41. Given that the AI revolution
originated in the United States, it is unsurprising that court litigation
has grown exponentially there. While some important cases are still
ongoing, there are already two decisions holding
that the use of copyrighted books to train GenAI tools did not infringe
the underlying copyrights. In both cases, the courts found that
use of works to train GenAI tools was highly transformative, and
these tools did not make any meaningful amount of the original works
available to the public.
However, the cases are not identical,
and the judges’ analyses differed on key legal issues.
42. Kadrey
v Meta was a clear win for Facebook’s parent company but it
might not be very relevant for the future, as it was made on the
basis of lack of evidence rather than on substantive legal grounds.
In this case, the US District Court for the Northern District of
California stated that no previous case had involved a use that is
both “as transformative and as capable of diluting the market for
the original works as Large Language Model (LLM) training is”. Therefore,
the court could not refer to previous case law in the matter at
hand and had to apply the fair use factors flexibly, considering
Meta’s copying “in light of the purpose of copyright and fair use:
protecting the incentive to create by preventing copiers from creating
works that substitute for the originals in the marketplace.”
43. According to section
107 of the Copyright Law of the United States (Title 17), the fair use of a copyrighted work for purposes such
as criticism, comment, news reporting, teaching (including multiple
copies for classroom use), scholarship, or research, is not an infringement
of copyright. In determining whether the use made of a work in any
particular case is a fair use the factors to be considered include:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole;
- the effect of the use upon the potential market for or value of the copyrighted work.
44. As regards the fourth factor, “the effect of the use upon
the potential market for or value of the copyrighted work”, the
court stated that “the plaintiffs [had] presented no meaningful
evidence on market dilution at all” and therefore Meta was entitled
to summary judgment on its fair use defence.
45. In Bartz
v Anthropic, the defendant (Anthropic, an AI company) downloaded
for free millions of copyrighted books in digital form from pirate
sites on the internet. The firm also purchased copyrighted books, tore
off the bindings, scanned the text, and stored them in digitised,
searchable files. From this central library, Anthropic selected
various sets and subsets of digitised books to train various large
language models under development to power its AI services. Some
of these books were written by the plaintiffs, who sued for copyright infringement.
This case was closed with a financial settlement
after the District Court for the
Northern District of California issued an order on 23 June 2025,
which granted summary judgment for Anthropic that the training use
of copyrighted content and the print-to-digital format change was
a fair use, but it denied summary judgment for Anthropic that the
pirated library copies must be treated as training copies.
46. Of particular interest will be the lawsuit initiated by Walt
Disney Co. and Comcast’s Universal Pictures (joined later by Warner
Bros. Discovery) against GenAI company Midjourney. The big entertainment companies
are suing Midjourney because its GenAI tools allow users to create image-based
works based on their intellectual property. As stated in the complaint,
“if a Midjourney subscriber submits a simple text prompt requesting
an image of the character Darth Vader in a particular setting or
doing a particular action, Midjourney obliges by generating and
displaying a high quality, downloadable image featuring Disney’s
copyrighted Darth Vader character”. 
47. Another development to follow will be the consequences of
the cease-and-desist letter sent by Disney to Google, accusing the
tech giant of copyright infringement on a “massive scale”. Disney
claims that Google has used AI models and services to commercially
distribute unauthorised images and videos. According to the letter,
“many of the infringing images generated by Google’s AI services
are branded with Google’s Gemini logo, falsely implying that Google’s
exploitation of Disney’s intellectual property is authorized and
endorsed by Disney.” It is interesting to note that this move by
Disney coincides with the signing of a USD 1 billion, three-year
deal with OpenAI. This deal will see Disney's characters integrated
into the Sora AI video generator. 
5.2. European Union
48. Copyright is regulated differently
in the US and in the EU. In a nutshell, the US doctrine of fair
use (as explained above) provides general principles that require
more of a case-by-case analysis whereas the European authors’ rights
tradition of exceptions and limitations is more concrete and precise
but, in some people’s view,
less flexible and adaptable to change.
49. In the EU, there are two distinct jurisdictional levels to
take into consideration. First, national courts are sovereign to
apply the copyright law of the land, which is to a great extent
the product of the implementation of the EU acquis into national
law. Then, according to Article 267 of the Treaty on the Functioning
of the European Union (TFEU), a national court or tribunal may request
the Court of Justice of the European Union (CJEU)
to give a preliminary ruling in a concrete case concerning
the interpretation of the Treaties or the validity and interpretation
of acts of the institutions, bodies, offices or agencies of the
Union.
50. The CJEU will thus deal for the first time with
AI and copyright issues in the referred Case
C-250/25, Like Company. The case, referred by the Budapest Környéki Törvényszék
(Hungary) on 3 April 2025, concerns the application of EU copyright
rules to the display, in the responses of an AI chatbot, of a text
partially identical to the content of web pages of press publishers,
where the text is protected under the provisions of the Directive on
Copyright in the Digital Single Market (CDSM Directive) on the protection
of press publications concerning online uses (Article 15 CDSM).
On top of this issue, the court will have to answer whether the
process of training an AI chatbot constitutes an instance of reproduction,
and whether such reproduction falls within the TDM exception. Finally,
the court will have to rule on the legality of a situation where
a user gives an AI chatbot an instruction which matches the text
contained in a press publication, or which refers to that text,
and the chatbot then generates its response based on the instruction
given by the user.
51. While waiting for this important CJEU ruling, in Germany,
the Regional Court (Landgericht) of Hamburg has taken the first
decision in the EU regarding the applicability of the TDM exception
to the training of AI tools. In the LAION case,
the court decided inter alia that the reproduction
of works for the purpose of creating URL lists that can be used
for AI training constitutes text and data mining within the meaning
of Section 44b(1) of the German Copyright Act and falls under the
exception for scientific research where the works are examined for correlations during
the data preprocessing stage. Also, the court decided that the term
“scientific research” as defined in Section 60d of the German Copyright
Act also includes preparatory work aimed at obtaining knowledge
at a later date.
This judgment, which has been appealed
by the complainant,
is of particular importance because
it rejects the notion that the TDM exception should not apply to
the training of GenAI tools because, when adopting the CDSM Directive
in 2019, the EU legislator “simply did not yet have the AI problem
on its radar”. The Hamburg court explained that, since 2019, technical
developments in the field of artificial intelligence have been less concerned
with the type and scope of data mining (which is the subject of
the dispute) for the purpose of obtaining training data, and more
with the performance of artificial neural networks trained with
the data. Moreover, according to the Hamburg court, the EU legislator
unambiguously stated in the EU AI Act that the creation of data
sets intended for training artificial neural networks also falls
under the restrictions of Article 4 CDSM. This is because, according
to Article 53(1)(c) of the AI Act, providers of general-purpose
AI models must put in place a policy to comply with EU law on copyright
and related rights, and in particular to identify and comply with,
including through state-of-the-art technologies, a reservation of
rights expressed pursuant to Article 4(3) CDSM.
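Article 4(3) CDSM requires that, for content made publicly available online, the reservation of rights be expressed by machine-readable means. One widely used, though not legally mandated, way of expressing such a reservation in practice is through crawler directives in a website’s robots.txt file. The following is a purely illustrative sketch; the user-agent tokens shown are publicly documented examples, and actual coverage would need to be verified against each operator’s documentation:

```text
# robots.txt – illustrative rights-reservation directives
User-agent: GPTBot            # OpenAI's web crawler used to collect training data
Disallow: /

User-agent: Google-Extended   # token controlling use of content for Google AI training
Disallow: /
```

Whether such directives amount to an “appropriate” machine-readable reservation within the meaning of Article 4(3) remains debated, which is one reason dedicated opt-out vocabularies, such as the W3C TDM Reservation Protocol community work, have emerged.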
52. Another German case is particularly relevant. In GEMA v. OpenAI, the Regional Court
(Landgericht) of Munich upheld the claims for injunctive relief,
information and damages asserted by the collecting society GEMA
against two companies of the OpenAI group.
According to the court, both the
memorisation in the language models and the reproduction of song
lyrics in the chatbot's outputs constitute infringements of copyright
exploitation rights. These are not covered by any copyright exceptions
(including the TDM exception).
5.3. United Kingdom
53. In the recent case of Getty Images v. Stability AI, the
UK's High Court delivered a ruling
dismissing Getty Images' secondary
copyright infringement claim against Stability AI. Among
other issues, the court ruled that an AI model such as Stable Diffusion,
which does not store or reproduce any copyright works (and has never
done so), is not an “infringing copy”, such that there is no infringement
under sections 22 and 23 of the Copyright,
Designs and Patents Act 1988 (CDPA). 
6. A competition law perspective
54. Beyond copyright, the use of
third-party content for AI training purposes can have competition
law implications, for example if the AI developer distorts competition by
imposing unfair terms and conditions on publishers and content creators,
or by granting itself privileged access to such content, thereby
placing developers of rival AI models at a disadvantage.
55. On 9 December 2025, the European Commission announced the
opening of a formal antitrust investigation
to assess whether Google had breached
EU competition rules by using the content of web publishers, as
well as content uploaded on the online video-sharing platform YouTube,
for AI purposes. 
56. If proven, the following practices under investigation may
breach EU competition rules prohibiting the abuse of a dominant
position (Article 102 of the TFEU and Article 54 of the European
Economic Area (EEA) Agreement):
- The use of web publishers’ content to provide genAI-powered services (“AI Overviews” and “AI Mode”) on its search results pages without appropriate compensation to publishers and without offering them the possibility to refuse such use of their content other than by losing access to Google Search;
- The use of video and other content uploaded on YouTube to train Google's genAI models without appropriate compensation to creators and without offering them the possibility to refuse such use of their content. Google does not remunerate YouTube content creators for their content, nor does it allow them to upload their content on YouTube without allowing Google to use such data. At the same time, rival developers of AI models are barred by YouTube policies from using YouTube content to train their own AI models.
57. In the wake of this antitrust investigation, the European
Publishers Council submitted a formal complaint to the European
Commission in February 2026,
alleging that Google LLC and Alphabet
Inc. were abusing their dominant position in general search, in
breach of Article 102 TFEU, through the deployment of AI Overviews
and AI Mode in Google Search.
58. The complaint aims to demonstrate that “Google’s integration
of generative AI into its dominant search interface represents a
structural shift from a referral-based search service to an answer
engine that systematically substitutes publishers’ original journalistic
content”. According to the complaint, Google would “extract and
monetise publishers’ content without effective control by publishers,
and without fair remuneration, while simultaneously displacing traffic,
audiences, and revenues that are essential to the sustainability
of professional journalism.”
59. According to the European Publishers Council, the core problem
in this case is competition. In their view, Google is an unavoidable
trading partner because of its dominance in general search, and
it uses that dominant position to impose conditions under which
publishers must accept the use of their content for AI purposes
in order to remain visible. While copyright law is central to the
facts of the case, it cannot address this “coercive imbalance” or
“restore competitive conditions” alone. This explains the recourse
to a complaint based on EU competition law.
60. In this regard, the complaint highlights an important issue
for publishers: visibility. According to them, “AI Overviews significantly
reduce click-throughs by answering queries directly on the top of
the search results page”. Moreover, the technical tools cited by
Google for opting out of their AI service “either do not prevent
AI use or require publishers to accept severe losses in search visibility”.
61. Regarding solutions, the European Publishers Council invites
the European Commission to consider remedies capable of restoring
competition, including meaningful publisher control over AI use,
transparency on how content is used and its impact, and a fair licensing
and remuneration framework that reflects the scale and value of
publishers’ content.
7. Problems and possible solutions
7.1. Training AI with copyrighted content
62. In the LAION case, the Hamburg
court highlights the main problem surrounding the training of GenAI tools
with copyrighted content. While it could be argued that the TDM
exception was included in the CDSM in 2019 because the EU legislator
“simply did not yet have the AI problem on its radar”, this does
not mean that it should not apply to the training of GenAI tools.
But, as I mentioned before, the TDM exception is not a good solution
for this problem, since it deprives copyright holders of both
control over and remuneration for their works. As such, this provision
should be amended in a way that balances their rights and interests
so that innovation is not achieved at the expense of creators.
63. During the Committee on Culture, Science, Education and Media’s
meetings of 1 December 2025 and 27 January 2026, we had the opportunity
of hearing the views of Ms Karen Rønde, CEO, Danish Press Publications'
Collective Management Organisation (DPCMO), and Ms Eleonora Rosati,
Professor of Intellectual Property Law, Stockholm University (Sweden).
64. The exchanges of views showed that any legislation that is
adopted having a specific technological reality in mind might run
the risk of rapid obsolescence, and warned against legislation that
is adopted out of fear that existing laws are not enough. It was
possible to apply existing legislation by analogy, teleologically and dynamically.
The other point made was that of fragmentation. The protection afforded
to one's persona and personal attributes was very different from
country to country, and there had been calls for greater harmonisation.
The Council of Europe might consider whether to promote a more level
playing field in this area than is currently the case.
Regarding gaps in existing legislation, this was a global discussion in which
the winner may take all, and early adopters might become leaders
in the global regulation of AI. Regarding exceptions to copyright,
it was stressed that exceptions were not exclusions, but they existed
within the copyright system, and they were framed within very specific
requirements. The training of AI models required access to billions
of data items, and it was virtually impossible to clear the rights for
everything. Therefore, there had been proposals to reduce legal
risk in different ways and through different mechanisms. Permission should
be secured even though it might be burdensome in some cases. A licensing
scheme was not something impossible to achieve, and exceptions to
copyright were not the only way through which AI could develop.
65. There is a need to go back to the foundational copyright principle
whereby content creators own and control their work. There
is a need to focus much more on transparency and accountability.
The reluctance of AI companies to disclose training data has significant
legal implications for rightsholders. In the absence of such disclosure,
rightsholders are unable to substantiate in a court of
law the unauthorised use of their content.
66. In order to solve this problem, a recommendation would be
to introduce a legal presumption rule, which would shift the burden
of proof to AI companies. According to this rule, it would be presumed
that commercial AI systems have been trained on copyright-protected
material in cases where the transparency requirement is not met.
As mentioned above, there is a draft bill currently being discussed
in France that contains a similar proposal.
67. Furthermore, tech companies should not be allowed to invoke
any copyright exception, such as the TDM exceptions introduced by
the EU’s Copyright Directive (see above).
68. Another recommendation stemming from the experts’ input was
to introduce fair remuneration rules based on an independent valuation,
because creators cannot get access to the data used by AI systems. Collective licensing
would be important in this regard because it would support all publishers,
not just the biggest ones but also local and regional publishers and small
startups.
69. Furthermore, it would be imperative to identify a solution
for instances where online services block news content, as this
practice would exempt them from the obligation to pay fair remuneration
and to share data. In this regard, possible tools would be must-carry/must-offer
obligations, cultural contributions and other kinds of incentives.
70. Finally, the enforcement tools available in the current legal
toolbox are not efficient, so there was a suggestion to introduce a compulsory
final offer arbitration (FOA) model as a kind of fast-track litigation
process. This Danish FOA model is based on the Australian FOA model
(which had the effect that Meta and Google came to the negotiation
table in Australia) and functions as follows: a request for binding
arbitration can be addressed to the Minister of Culture by one of
the negotiating parties when one of the parties has broken off negotiations, refused
a request for negotiations, or when negotiations do not seem likely
to lead to a result. A demand for the initiation of binding arbitration
does not require agreement between the parties but they are obliged
to participate. The Minister of Culture appoints the chairperson
and two expert co-arbitrators if the case is of significant economic
or societal importance. The arbitrator shall review the proposals
submitted by the parties and shall choose between them in their
entirety. The arbitrator may not change or propose other solutions. When
selecting proposals, the arbitrator shall emphasise, among other
things, the value of the content for the platform, the cost of producing
the content, general societal considerations and competition law
considerations. The decision may be compulsorily enforced.
7.2. AI as rightsholder
71. Despite the common use of the
term “copyright”, in Europe we rather speak of “author's rights”.
Indeed, the concept of “authorship” being attached to a physical
person is fundamental to the regulation of rights relating to creative
works as we conceive it. Nevertheless, there is ongoing legal debate
about the copyrightability of works created using AI tools, and
about who would hold the rights. Whereas it seems obvious that an
AI tool cannot be a holder of rights, a case-by-case analysis might
be required to determine whether a work created with the intervention
of an AI tool can have a physical person as an author. 
72. In what has been hailed as a first ruling
from a European court on the copyrightability
of content created by a GenAI system, the Municipal Court of Prague
stated
that “artificial intelligence by
itself cannot be the author (…) when only a natural person can be
the author, which artificial intelligence certainly is not.” Moreover,
in the case at hand, the image did not even constitute a work of
authorship according to Section 2 of the Czech
Copyright Act, as it was not “a unique result of the creative activity
of a physical person – the author. The plaintiff himself did not
personally create the work, it was created with the help of artificial
intelligence”.
7.3. The case of deepfakes
73. Deepfakes are, as defined by
the EU’s AI Act, AI-generated or manipulated images, audio or video content
that “appreciably resembles existing persons, objects, places, entities
or events and would falsely appear to a person to be authentic or
truthful”. Deepfakes are not inherently harmful and can be used
for legitimate purposes, such as parody. However, if misused, they can
affect a number of fundamental rights, notably freedom of expression
and information, since deepfakes can be used for disinformation
purposes and to manipulate public opinion in electoral processes.
Furthermore, deepfakes can violate personality rights by misusing
a person’s image (e.g. in pornography) and voice. This violation
of personality rights can be particularly harmful when it comes
to the image of minors, as highlighted recently by the French
and British
investigations into GrokAI’s production
of sexualised deepfakes of children.
74. One potential tool to counteract these perils is AI literacy,
which provides users with the skills to identify AI-generated content.
However, other measures should be implemented, and the role and
responsibilities of internet operators may require clearer identification.
75. Article 8 of the Council of Europe Framework Convention on
Artificial Intelligence and Human Rights, Democracy and the Rule
of Law, prescribes measures to ensure that adequate transparency
and oversight requirements tailored to the specific contexts and
risks are in place in respect of activities within the lifecycle of
artificial intelligence systems, including with regard to the identification
of content generated by artificial intelligence systems.
76. Article 50 of the EU’s AI Act goes into more detail and contains
transparency obligations regarding AI systems, including
general-purpose AI systems, that generate synthetic audio, image, video
or text content. The providers of these systems must ensure that their outputs are
marked in a machine-readable format and detectable as artificially
generated or manipulated. In the case of deepfakes, those deploying
such content must disclose that it has been artificially generated or manipulated.
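The AI Act does not prescribe one particular marking technology. As a purely illustrative sketch of what “machine-readable marking” can mean at the byte level, the following Python snippet (all function names are hypothetical) embeds a provenance label in a PNG tEXt metadata chunk and reads it back; real-world deployments would instead rely on standards such as C2PA content credentials or robust watermarking:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk is: 4-byte big-endian length, 4-byte type,
    # the data, then a CRC32 computed over type + data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png() -> bytes:
    # A valid 1x1 truecolor PNG: signature, IHDR, IDAT, IEND.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
    raw = b"\x00" + bytes([128, 128, 128])  # filter byte + one grey RGB pixel
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

def mark_ai_generated(png: bytes, note: str = "AI-generated") -> bytes:
    # Insert a tEXt metadata chunk just before the final IEND chunk,
    # so any standard PNG parser can read the provenance label.
    iend = png.rindex(b"IEND") - 4  # back up over the length field
    text = b"Provenance\x00" + note.encode("latin-1")
    return png[:iend] + chunk(b"tEXt", text) + png[iend:]

def read_text_chunks(png: bytes) -> dict:
    # Walk the chunk stream and collect tEXt keyword/value pairs.
    out, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return out

marked = mark_ai_generated(minimal_png())
print(read_text_chunks(marked))  # {'Provenance': 'AI-generated'}
```

Such plain metadata labels are trivially strippable, which illustrates why the debate around Article 50 also concerns robust, tamper-resistant marking techniques rather than metadata alone.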
77. Despite the existence of these rules, the phenomenon of deepfakes
continues to proliferate online, with some instances even evading
recognition by the very software used in their creation. 
78. Recently, it has been suggested that a solution to the deepfake
problem would be to give individuals a “copyright” (actually,
a neighbouring right) over their physical likeness and voice.
79. A broad majority in the Danish Parliament recently reached
a political agreement
on a legislative proposal
to amend the Copyright Act. This
amendment will make it illegal to share deepfakes and other digital imitations
of personal characteristics. Performing artists will also receive
better protection, so that in future it will be illegal to share
realistic digital imitations of their performances. 
80. This proposal, if adopted, will introduce two new forms of
protection into the Danish Copyright Act:
- a general protection against the unauthorised making available of realistic digitally generated imitations of personal characteristics, cf. Section 1(11) of the bill (Section 73a of the Copyright Act);
- a protection of performers and artists against the making available to the public of realistic digitally generated imitations of their performances or artistic achievements without consent, cf. Section 1(9) of the bill (Section 65a of the Copyright Act).
81. The question is whether the many issues surrounding deepfakes
should be regulated via copyright legislation. It could be argued
that deepfakes should rather be regulated
through privacy law and personality rights, as the main concerns
for individuals are precisely their right to privacy and their personal
reputation. They can also be regulated by media law or election
law if preserving trust in the media or safeguarding democracy are
the main aims. And there are many legal remedies already existing,
including image rights, data protection, tort law, unfair competition
law, rules on unlawful advertising, and criminal law (fraud, identity
theft and “revenge porn”).
82. Italy, for instance, has recently adopted Law
132/2025, which came into force on 10 October 2025. This law
amends the Italian
Criminal Code by introducing a new Article 612-quater, which makes
it a criminal offence that can result in a prison sentence of between
one and five years for “anyone that causes unjust damage to a person
by transferring, publishing or otherwise disseminating, without
their consent, images, videos or voices that have been falsified
or altered using artificial intelligence systems and are likely
to mislead as to their authenticity”. The offence is punishable
upon complaint by the injured party, but proceedings shall be brought ex officio if the offence is connected
with another offence for which proceedings must be brought ex officio or if it is committed
against a person who is incapable, due to age or infirmity, or against
a public authority because of the functions exercised.
8. Conclusions
83. The advent of the AI era has
brought with it a particularly challenging set of problems. In a
sense, it could be argued that AI poses an existential threat to
Europe's creative sector and European culture as a whole. The current
legislative solutions will not solve this problem. We need solutions
that balance the competing rights and interests so that innovation
does not come at the expense of creators and freedom of expression
does not breach third parties’ personal rights.
84. Drawing upon these conclusions, I hereby propose a set of
concrete measures in the draft resolution.
