Imagine a team of skilled programmers who deliver a working system on time and on budget. The customer is satisfied, the contract fulfilled. Then it turns out that much of the code was not written by them at all but by an AI-based tool – and no one has asked who owns that code, whether it carries third-party licences, or whether it will pass the security audit required by the regulator. This scenario now plays out in dozens of projects every week. While developers make extensive use of smart assistants to generate code, contracts for work in most cases do not reflect this reality at all.
The silent revolution in code
In projects where programmers use tools that autocomplete and generate code, AI now produces a significant part of the resulting software. For routine work – repetitive pieces of code, standard tests, the connecting “glue” between systems – it is often the larger part. Yet the vast majority of contracts for work or master agreements for software development ignore this reality. The text of the contract describes the work as the result of the developer’s creative effort, refers to the supplier’s copyright, and guarantees the legal flawlessness of the code without anyone examining what exactly was written by a human and what was generated by a tool.
So who is responsible when a security vulnerability is discovered in such a project and traces back to an automatically generated piece of code? Or when it turns out that the code contains passages from open source software distributed under a licence that, under certain conditions, requires the full source code to be made public? The contract often leaves these questions unanswered, because the situations they describe simply did not exist a few years or even months ago.
Regulation has responded. Contracts have not.
The new European regulation on artificial intelligence (the AI Act) classifies AI systems according to their level of risk and imposes different obligations on those who develop them and those who deploy them in their processes. In practice, this means that a customer who deploys software with AI elements – whether a client credit scoring system, a recruitment tool, or another solution with automated decision-making – bears specific regulatory obligations as a deployer (operator).
On top of that sit cyber security requirements (the Cyber Security Act and the European Network and Information Security Directive, NIS 2) and, in the financial sector, the DORA rules for relations with ICT service providers. All of these layers meet at the point where code generated with the assistance of AI touches the customer’s production system. The topic then ceases to be a mere business risk and becomes a regulatory issue with very specific consequences in the event of an inspection or a security incident. The question shifts from “something might happen” to “when will something happen and who will pay for it”.
Code without an author and with double risk
Copyright law protects a computer program as a work only if it is the result of its creator’s own intellectual effort. Purely machine-generated code – a situation where the programmer enters an instruction (prompt) but does not creatively develop the output or integrate it into the architecture – usually does not meet this condition. Czech case law on the authorship of AI-generated outputs is still taking shape. The trend, however, is clear: without a sufficient human creative contribution, copyright protection will not arise.
What does this mean for the classic contractual model? The supplier promises to transfer the copyright in the work – but in what, exactly? Part of the code may not fall under copyright law at all. The customer then does not formally acquire the rights it believes it is buying. From a legal perspective, such code is often “abandoned” material that anyone else can take and use without permission.
At the same time, code generation tools have been trained on a huge amount of publicly available code, including projects under so-called copyleft licences – licences that, in certain situations, require derivative code to be redistributed as open source. If the tool has partially reproduced or drawn on such code, the customer may unknowingly be running software that infringes third-party rights. Often the supplier does not even know what exactly the code generator “used”; it only sees the result. And only a small number of AI tool vendors currently offer genuine contractual indemnification for potential IP disputes – most, logically, pass the risk on to the user through exclusions and limitations of liability.
The result is a double risk. On the one hand, code that may belong to no one and can be copied by anyone. On the other, code that may belong to someone else, so that merely running it infringes third-party rights. Both risks can exist in the same codebase at the same time – and the contract usually mentions neither.
A triangle where things falter
From our point of view, the biggest blind spot in today’s practice is the relationship between the developer, the supplier, and the customer – or, more precisely, the three layers of contractual relationships that are not aligned with one another, because no layer actually “looks over the others’ shoulder”.
The contract between the supplier and the customer is usually silent on the use of AI tools. The supplier guarantees functional and legally flawless code but does not state which tools it uses in development or what rules govern their use. The customer then does not even know what exactly it is buying.
The internal relationship between the supplier and its programmers – whether employees or subcontractors – typically does not cover this area at all. Developers use code generators at their own discretion: no mandatory two-step code review, no logging of inputs and outputs, no systematic licence checking. On top of that, the terms of the code generation tool itself contain limitations of liability and disclaimers that the supplier either did not read, or ignored, and almost certainly did not align with what the customer is promised in the main contract.
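For illustration, this is roughly what the missing “logging of inputs and outputs” could look like in practice. It is a minimal sketch in Python: the log location, the record fields and the generate_fn hook are our own illustrative assumptions, not the interface of any particular assistant.

```python
# Minimal sketch of an audit trail for AI-assisted coding.
# All names (AUDIT_LOG, the record fields, generate_fn) are illustrative
# assumptions; a real assistant would be wired in behind generate_fn.
import hashlib
import json
import time
from pathlib import Path
from typing import Callable

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical log location


def logged_generation(tool_name: str, prompt: str,
                      generate_fn: Callable[[str], str]) -> str:
    """Call an AI code generator and record the exchange for later audits."""
    output = generate_fn(prompt)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tool": tool_name,
        "prompt": prompt,
        # Storing a hash instead of the full output keeps the log small
        # while still letting an auditor match a code fragment to the
        # generation event that produced it.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return output
```

Even a log this simple later lets a supplier show which parts of a delivery came from a tool – exactly the kind of audit trail discussed below.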
This raises a set of questions with no simple answers. Who owns code created with the help of artificial intelligence, and who may develop it further? Who is liable for third-party licence violations and for security defects? Is the supplier obliged to tell the customer that part of the code was generated in this way? And how can a warranty of legal flawlessness realistically be set up when even the provider of the code generation tool does not offer full protection?
What to consider in your next contract
Waiting for settled case law, or for the market itself to produce a uniform standard, is a convenient but probably costly strategy. Instead, we consider it preferable to start thinking about a number of points – ideally today:
1. Open declaration of tools used
The contract should explicitly oblige the supplier to disclose which AI-based tools it uses in development, including a basic description of their licensing terms and the extent of IP protection they offer. The customer then at least knows which risks it is working with and can reflect them in its own internal policy on the use of AI.
2. Mandatory processes – security and licensing as part of development
Alongside standard development management, there should be a process focused on security (often referred to as DevSecOps) and on licensing (sometimes called DevLicOps – systematic licence checking for all parts of the code, including the parts generated by a tool). The contract should describe these processes and make them a binding commitment: logging of inputs and outputs, scanning of code for potential licence conflicts, mandatory review of critical parts of the system. Without an audit trail, any contractual assurance is difficult to defend in the event of a dispute.
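What a basic licence check could look like as a pre-merge step is sketched below. It is a deliberately simplified illustration – the keyword list and file selection are our own assumptions, and a real project would use a dedicated scanner rather than a script like this – but it shows the kind of automated control a contract can refer to.

```python
# Simplified sketch of a "DevLicOps" check: scan source files for markers
# of copyleft licences before code is merged. The marker list and the
# *.py file filter are simplifying assumptions for illustration only.
from pathlib import Path

COPYLEFT_MARKERS = (
    "GNU General Public License",  # GPL family
    "GNU Affero",                  # AGPL
    "Mozilla Public License",      # MPL (file-level copyleft)
)


def scan_for_copyleft(repo_root: str) -> list[tuple[Path, str]]:
    """Return (file, marker) pairs where a copyleft notice was found."""
    hits = []
    for path in Path(repo_root).rglob("*.py"):  # extend to other suffixes
        try:
            # Licence headers sit at the top of a file, so the first few
            # kilobytes are enough for this rough check.
            head = path.read_text(encoding="utf-8", errors="ignore")[:4000]
        except OSError:
            continue
        for marker in COPYLEFT_MARKERS:
            if marker in head:
                hits.append((path, marker))
    return hits


if __name__ == "__main__":
    for path, marker in scan_for_copyleft("."):
        print(f"LICENCE REVIEW NEEDED: {path} ({marker})")
```

The point is not this particular script but the principle: the check runs on every change, its results are recorded, and the contract can point to that record.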
3. Clear definition of rights in outputs
The contract should explicitly stipulate who holds the rights in the resulting code, regardless of how it was written (by a human or by a tool), and who indemnifies whom against third-party infringement claims. The way the code was created must not become an excuse for narrowing the scope of the originally agreed licence.
4. Unification of the three layers of relationships
The terms of the AI tool, the contract between the supplier and the customer, and the customer’s internal policy on the use of AI must be consistent with one another. The supplier must not promise the customer more rights than the tool provider allows the supplier to pass on; the customer must check that its internal rules correspond to what it requires from the supplier in the contract.
5. Guaranteed processes instead of the illusion of a perfect result
An absolute guarantee of error-free and legally flawless code is unrealistic in an era of mass use of code generation tools. A more sensible approach is to guarantee standards and procedures: security testing, regular audits, transparent management of the use of AI in development. Liability for adherence to processes is enforceable; liability for the perfection of a tool’s output, in practice, is not.
If this topic concerns you – whether as a customer drafting new IT contracts, as a supplier looking for a reasonable definition of its own liability, or as an in-house lawyer whose firm is just setting the rules for using AI in development – we will be happy to help. We have experience in combining the legal, regulatory and technical perspectives: from reviewing contractual documentation and setting up AI governance to aligning internal policies with European regulatory, cybersecurity and sectoral requirements. The cheapest litigation is the one that never happens – and the first step towards it is to look at your actual contracts and processes before they become evidence in court proceedings.