The start of 2025 marks a turning point for the world of artificial intelligence, with significant legal and strategic developments reshaping the landscape. From new regulations and guidelines issued by the European Commission to intense investment battles between global powers, and even potential copyright precedents, the rules of the game are changing, along with the risks and opportunities for developers, deployers, and investors.
What's going on in the EU: New rules and unexpected twists
The withdrawal of the proposal for the AI Liability Directive highlights just how difficult it is to reach consensus within the EU. While the original proposal was, in our opinion, overly protective and disconnected from the technological realities of AI, the fact remains: current legal frameworks in the Czech Republic, and likely in other Member States, are ill-equipped to handle liability for AI-related damage. This makes it all the more crucial for future regulation of such liability to strike the right balance, ensuring it is both practical and swiftly adopted.
With the Directive stuck in a deadlock, attention is shifting to the General-Purpose AI Code of Practice, which complements the requirements of the AI Act. While not legally binding, compliance with the Code will establish a presumption of conformity with the obligations outlined in Articles 53 and 55 of the AI Act. The second draft of the Code places a strong focus on copyright issues: it introduces requirements to verify training datasets and to use only legally sourced data, and it recommends implementing safeguards against AI model memorization, a phenomenon where AI generates outputs that closely resemble, or even replicate, the copyrighted underlying data.
Although still in its second draft, with a final version expected this summer, the Code has already sparked significant controversy, particularly among Big Tech companies, and doubts are growing over whether major players will even sign it. Meta has criticized the Code as unnecessarily extensive and exceeding the scope of the AI Act, while Google has warned that its current form is too burdensome and could weaken the EU’s global competitiveness.
In addition to the Code, the Commission has published the new Guidelines on AI system definition, which help companies determine whether their products fall within the scope of the AI Act, and the Guidelines on prohibited AI practices, which define what is prohibited in the AI context in the EU. The latter focus in particular on the prohibition of manipulative algorithms, social scoring, and the misuse of real-time biometric data.
Amid these regulatory shifts, companies should assess how the new requirements will impact their AI projects. From our perspective, verifying the origin of training data, establishing internal processes for copyright compliance, and determining whether AI systems are subject to the AI Act are now essential steps. In this rapidly evolving landscape, thorough AI due diligence is no longer just a safeguard against legal risks; it is a strategic investment in the future.
Highlights from the world: First major ruling and AI collective agreement
The past few weeks have seen significant copyright decisions that could shape the future protection of AI-generated works. In a key development, the U.S. Copyright Office (USCO) set out its position that AI-generated works can be protected by copyright, provided they contain sufficient human creative input. This principle stems from the earlier case of the comic book “Zarya of the Dawn”, where the USCO granted copyright only to the work as a whole, not to the individual AI-generated images, which, in its view, lacked human originality.
Shortly after, however, the USCO granted copyright for the AI-generated artwork “A Single Piece of American Cheese” by Kent Keirsey, created using the Invoke tool. The decisive factor was that Keirsey demonstrated his creative input by submitting detailed records of over 35 manual modifications made through inpainting, deliberately altering, refining, and combining individual parts of the artwork to align with his creative vision. The USCO recognized that these modifications, taken as a whole, reflected a sufficient degree of human creativity to qualify for copyright protection.
This decision builds on similar cases from China and the Czech Republic, about which we have written previously, where courts assessed the degree of human involvement in AI-generated works. The Beijing Internet Court granted copyright protection to a user of Stable Diffusion, citing significant human input, including over 150 iterations and manual modifications. In contrast, the Prague Municipal Court denied copyright protection, ruling that the author had failed to demonstrate a sufficient creative contribution; however, it indirectly acknowledged that a different outcome might be possible if clear human input were proven. These cases highlight a key trend: the greater the human creative input in an AI-assisted work, the better the creator’s chances of securing legal protection for the result.
Germany has also made a major move in AI-related legal discussions this year, finalizing Europe’s first collective agreement on AI use in film production (AI-CBA). Negotiated between the German Producers Alliance and the unions BFFS and ver.di, the agreement sets rules for creating digital doubles of actors and introduces a compensation model based on “fictional shooting days”, an estimate of the time actors would have spent on set. It also gives producers flexibility in negotiating rights for future projects, establishing, alongside SAG-AFTRA’s agreement for Hollywood talent, a new standard for balancing creators’ rights with technological advancement.
Simultaneously, a landmark dispute over AI training data has dominated headlines in recent days: Thomson Reuters vs. ROSS Intelligence. A U.S. court ruled that using legal summaries from the Westlaw database to train the AI-powered legal research tool ROSS does not qualify as fair use (the U.S. equivalent of Czech statutory licenses, which allow limited use of copyrighted content without the owner’s explicit consent, such as for educational or critical purposes). Although the summaries were not directly reproduced in ROSS’s final product, the court found that the AI training process benefited from the creative work of Westlaw’s editors. In effect, the court described ROSS’s conduct as an unfair competitive practice, as the company leveraged a rival’s product to enhance its own tool without holding the corresponding rights. The court also rejected ROSS’s argument of transformative use (which may apply when a new work gives the original a different meaning, context, message, or purpose), ruling that ROSS’s AI search engine served a similar function to Westlaw, providing answers to legal queries, without creating a new context or purpose.
In this context, it is important to note that this is only a first-instance ruling by a U.S. federal court, not a final verdict, and it does not directly concern generative AI, which is not at issue in the case. A more indicative case in this regard may be The New York Times vs. OpenAI. Hence, precedent-setting significance will likely arise only from rulings of higher courts, most likely the U.S. Supreme Court. That said, even this initial ruling suggests that courts may take a strict stance on the use of copyrighted content for AI training unless a clearly transformative use can be demonstrated.
For AI developers, even outside the U.S. context, this case should serve as a strong warning to thoroughly verify their datasets and ensure they hold the proper rights to both the data and its content. Verifying the origin and legal basis of training data is crucial to avoiding disputes and third-party claims. This is particularly relevant at a time when the European Union, through the AI Act and the General-Purpose AI Code of Practice, is ramping up regulatory pressure, emphasizing the legality of data sources and requiring responsible handling of copyrighted content, as highlighted in the previous section.
AI Summit and investments in AI: Europe vs. the U.S.
The recent Paris AI Summit, intended to showcase Europe’s ambitions in artificial intelligence, ended on a mixed note. The most striking development was the refusal of the U.S. and the U.K. to sign the summit’s final declaration, which critics dismissed as overly vague and filled with generic statements about inclusion, diversity, and sustainability.
It was during the summit that the European Commission announced InvestAI, a program aimed at mobilizing €200 billion for AI projects, including the construction of a “gigafactory” for large European language models. European Commission President Ursula von der Leyen likened the initiative to a “CERN for AI,” symbolizing European scientific collaboration. However, the program still lacks a clear strategy on how these investments will translate into real global competitiveness.
While Europe has only just unveiled its investment plans, the United States has already launched its own project. In January, U.S. President Donald Trump announced the Stargate initiative, a $500 billion program designed to accelerate the development of artificial general intelligence (AGI) and cement U.S. leadership in the AI race. Backed by OpenAI, Oracle, and SoftBank, the initiative prioritizes building large-scale computational infrastructure. At the AI Summit, U.S. Vice President JD Vance warned that excessive regulation could stifle innovation and leave Europe technologically dependent on others.
The contrasting approaches of the U.S. and the EU highlight not just different investment strategies, but also fundamentally different roles of the state in fostering AI ecosystems. While the United States is betting on the strength of the private sector and rapid infrastructure expansion, Europe is relying on coordinated public investments—though questions remain about their efficiency and real impact.
Conclusion
So far, 2025 has proven that the AI landscape is highly turbulent and filled with legal and strategic uncertainties. Ongoing copyright disputes, new regulatory frameworks in the EU, ambitious investment plans, and the first collective agreements in AI suggest that we are on the brink of transformative changes that could shape the industry's future. One thing is certain: this year will be exceptionally dynamic for developers, investors, and legal professionals in the tech world—bringing both new challenges and opportunities.
If your AI projects involve legal complexities—whether in copyright, investment structures, or regulatory strategies—we are here to help. We work with developers, deployers, and investors to ensure their ventures remain legally sound. To stay ahead of evolving regulations and make the most of AI’s opportunities, feel free to reach out to us.