AI from a lawyer’s perspective: What issues are we currently tackling?

Artificial intelligence tools and systems now feature on the innovation agenda of almost every major company. Whether a company is developing these technologies, implementing them for its customers, preparing an adoption strategy, or simply planning to use them, it is always prudent to discuss such plans with legal experts. Faced with the nearly endless list of opportunities that AI offers, these experts are tackling a variety of legal issues. Let’s take a look at the most common ones.

Internal guidelines

If your organisation allows employees to use AI tools, it’s common to draft or review internal guidelines for their use. Unless access to certain websites is strictly blocked on work devices, it’s highly likely that your employees are already using AI tools as part of their work. These tools can include generative AI assistants (like ChatGPT, Claude, or Copilot), translation or text editing tools (such as DeepL, Grammarly, or QuillBot), or graphic design tools (like Midjourney, Canva, or DALL-E).

Particularly abroad, an internal AI Policy is often one of the first documents organisations create as part of their innovation agenda. The goal is to use AI sustainably and in a controlled manner. Organisations aim to protect their trade secrets and the personal data under their control, while giving employees not only access to these tools but also sufficient training in their use. Where employees breach these obligations, the AI Policy serves as a crucial supporting document.

Supplier arrangements

Larger companies often have relatively strict rules for sharing information and documents with their suppliers, significantly limiting the ways in which suppliers can use the documents and information provided to them.

Especially in Western countries, it is becoming increasingly common for such companies to set clear rules for the use of AI by their suppliers (typically in terms and conditions or contract annexes). These rules reflect the expectation that suppliers use, or plan to use, AI tools and systems just as their customers do. We have already encountered requests to prepare such provisions and subsequently incorporate them into a legal services agreement with a major client. Reviewing a supplier’s AI arrangements is also becoming a standard part of pre-transaction legal due diligence.

Adjusting GDPR documentation

The use of AI tools naturally involves the protection of personal data. In this context, every organisation should consider the following questions:

  • Are we properly informing data subjects (our customers, users, or employees) about processing involving AI?
  • Do we have an obligation to conduct a data protection impact assessment or appoint a data protection officer?
  • On what legal basis are we processing personal data, and are any transfers of data to third countries adequately safeguarded?
  • Are AI systems and tools using the personal data we hold for their own training (“learning”)? If so, are we able to prohibit this?
  • Do we have control over whether, how long, and in what form data is stored in AI systems?

When introducing a new AI tool or system, it’s crucial to assess its impact on existing processing activities. This includes determining what personal data will be processed by the AI, for how long, and to whom it will be disclosed. These considerations must then be reflected in the current “GDPR” documentation, particularly in records of processing activities and data protection policies.

Simultaneously, it is usually necessary to enter into a data processing agreement with the tool provider. Potential transfer of data to third countries must be adequately addressed, for example, through standard contractual clauses. For more extensive processing, it is also advisable to evaluate the need for a Data Protection Impact Assessment (DPIA), which is generally recommended when using AI, and to consider appointing a Data Protection Officer.

Impact of the AI Act

The vast majority of organisations have taken note of the adoption of the AI Act,[1] which aims to regulate the use of artificial intelligence, particularly in areas with high risks to human rights, safety, and fundamental freedoms.

Organisations using AI tools and systems are increasingly (and rightly) asking whether their systems might be classified as high-risk, what obligations the new regulation imposes on them, and how to implement and manage these processes in practice. Answering these questions typically requires a combination of legal and technical expertise, and integrating these procedures into the daily operations of an organisation can often be time-consuming. It is therefore advisable to address compliance with this regulation well in advance.

Training data

One of the key legal aspects of using AI is handling training data. Organisations working with data from various sources often invest significant funds in purchasing or licensing this data. The question of whether this data can be used to train their own AI models is becoming increasingly pressing.

Even when companies transparently disclose that they use specific sources to develop their AI tools and emphasise that these are merely AI-assisted solutions, caution is necessary. The legal framework varies depending on the type of data and how it is used. It is therefore crucial for organisations to consider the legal restrictions (both contractual and statutory) and licensing conditions attached to the data, especially where the data will be further used to train AI models. Failure to comply with these rules can have serious legal consequences, as the growing number of lawsuits brought by rights holders against AI tool providers, particularly in the US, demonstrates.

Implementation of a new system or solution 

AI is increasingly being used in highly specialised systems, such as analysing data from camera systems or even replacing human labour. While these solutions can be attractive for cost savings and increased efficiency, companies should not underestimate the legal and ethical aspects of their deployment.

It is crucial for organisations to carefully consider the legal framework, particularly contractual obligations and technical measures that minimise the risk of misuse or incorrect application of these technologies. Equally important is providing customers with transparent documentation that dispels doubts about the safety and functionality of the deployed AI solution. Neglecting this area can lead to legal complications with negative impacts on the organisation’s operations and reputation.

It’s essential not to underestimate the importance of public regulation, whether it concerns data protection (facial recognition CCTV systems, for example, process biometric data, a special category of personal data), AI regulation (under the AI Act, the vast majority of remote biometric identification systems and AI systems designed for emotion recognition are classified as high-risk), or cybersecurity rules.

Legal liability

As AI is increasingly used to program and operate machines (such as drones), a host of legal questions arises that requires careful consideration. One of the most pressing challenges for AI tool developers is liability for damage caused by system errors. Beyond the question of compensation, it is crucial to set contractual terms with customers carefully, ensuring that systems meet the necessary safety standards and are resilient against cyberattacks.

These areas demand clear rules and rigorous processes to minimise legal risks and ensure that the use of AI in these technologies complies with legal and technical requirements. Tracing an error to its source (the developer, manufacturer, user, integrator, or another party) is often complicated.

Conclusion

Artificial intelligence, like any innovative technology in recent times, offers organisations numerous opportunities to gain a competitive edge – whether through product innovation, process optimisation, or cost reduction. However, it also presents a number of legal challenges.

If you are using AI tools in your business and are unsure whether you have all the necessary aspects covered, we would be happy to discuss everything with you. With our experience, we can help you find effective solutions to all AI-related issues.

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139, and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797, and (EU) 2020/1828 (Artificial Intelligence Act).