Navigating the legal environment of AI (part 2)

Date: 01/09/2025

Reading time: 8 min

As artificial intelligence (AI) continues to impact different business sectors and industries, companies face a complex landscape of legal challenges. In the second part of this interview with IÉSEG experts Maximiliano MARZETTI and Clare SHIN, we delve into challenges related to intellectual property, how companies can protect their business, and their legal responsibilities regarding transparency…

(The first part of the interview is also available here)

One of the issues raised by the development of AI relates to intellectual property. Can you provide some examples of IP-related challenges and what companies can do to protect their IP or avoid potential disputes?

Maximiliano MARZETTI – MM: Some AI companies’ practices, such as scraping web content and using it to train their algorithms, have led to numerous copyright infringement lawsuits, mainly in the US (where those companies are legally based). AI systems require inputs to learn and improve their capabilities. If those inputs are copyrighted works and their use has not been authorised by the copyright owner, claims of copyright infringement may follow.

To infringe the author’s exclusive right of reproduction, for instance, it is enough that the copyrighted work is reproduced for training purposes, even if the work does not appear in the output. Currently, more than 30 copyright infringement lawsuits against AI companies have been filed before US federal courts, such as Getty Images v. Stability AI, The New York Times v. OpenAI, and Authors Guild v. OpenAI.

On the other hand, AI companies argue that their use of copyrighted works for training algorithms falls under fair use, a defence that relies on the interpretation of a four-factor test. For the moment there have been no final decisions.

In the EU, there is no “fair use” system but a different, more restrictive one of so-called “exceptions and limitations” (to the copyright owners’ exclusive rights), which form a closed list (numerus clausus) and are usually construed restrictively by courts. Articles 3 and 4 of the EU’s 2019 Digital Single Market (DSM) Directive required member states to introduce two text and data mining exceptions (TDMEs) allowing the extraction and reproduction of lawfully accessible copyrighted content without the copyright owner’s prior consent.

The TDMEs were included to facilitate and support AI development efforts. However, there is a catch: the exception applies fully to scientific research and cultural institutions, but commercial firms can rely on it only if the copyright owner has not opted out. So, if copyright owners have opted out of the TDM exception, any reproduction of their works, for instance for the purpose of AI training, would be unlawful.

The mechanism for opting out of the TDM exception, laid out in DSM Article 4(3), is unclear, aggravating legal uncertainty for both copyright owners and AI developers. To ease these concerns, some institutions, such as the W3C, have suggested machine-readable protocols. A German court recently ruled that it is sufficient for the opt-out to be expressed in natural language in a website’s terms of use. The opt-out mechanism is reinforced by the AIA (the EU’s Artificial Intelligence Act), whose Article 53 states that providers of general-purpose AI models must put in place a policy to comply with the opt-out mechanism mandated by DSM Article 4(3). Recently, GEMA, a German collective licensing society, filed a lawsuit before the Munich Regional Court to clarify AI providers’ remuneration obligations in Europe for the unlicensed use of protected musical works. This action, likely the first of its kind in Europe but probably not the last, is specifically directed against the US company OpenAI.
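By way of illustration, one machine-readable approach discussed in the W3C community, the TDM Reservation Protocol (TDMRep), lets a site signal its opt-out through an HTTP header or a well-known JSON file. The sketch below is a hypothetical example of how an AI developer might check for such a signal before collecting content for training; the field names follow the TDMRep draft, and a site’s natural-language terms of use would still need to be reviewed separately.

```python
# Illustrative sketch only (not legal advice): check whether a site signals a TDM
# opt-out before fetching its content for training. Assumes the W3C TDM Reservation
# Protocol (TDMRep) conventions: a "tdm-reservation" HTTP header and/or a JSON file
# at /.well-known/tdmrep.json. A natural-language opt-out in the site's terms of use
# must still be checked by a human or a separate process.
from urllib.parse import urljoin

import requests  # third-party HTTP client


def tdm_opt_out(site_url: str) -> bool:
    """Return True if the site appears to reserve its text and data mining rights."""
    # 1. Check the response headers of the page itself.
    resp = requests.head(site_url, allow_redirects=True, timeout=10)
    if resp.headers.get("tdm-reservation") == "1":
        return True

    # 2. Check the site-wide TDMRep file.
    well_known_url = urljoin(site_url, "/.well-known/tdmrep.json")
    try:
        rules = requests.get(well_known_url, timeout=10).json()
    except (requests.RequestException, ValueError):
        return False  # no machine-readable signal found
    return any(
        isinstance(rule, dict) and rule.get("tdm-reservation") == 1
        for rule in rules
    )


if __name__ == "__main__":
    print(tdm_opt_out("https://example.com/"))
```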

Concerning AI-generated outputs, the US Copyright Office has been categorical in denying copyright registration to a work solely generated by an AI system, a decision later upheld in court (Thaler v. Perlmutter). However, the situation is not the same for works containing a mix of AI-generated and human-authored elements, which may be copyrightable, under certain circumstances, according to recent guidelines published by the US Copyright Office.

Taking a different stance, the Beijing Internet Court granted copyright protection to a wholly AI-generated image this year. In the EU, a case with similar characteristics was decided in the Czech Republic, but it did not provide clear guidelines as it was dismissed on procedural grounds. Despite international copyright treaties establishing some common obligations, copyright law remains a matter of national legislation as interpreted by national courts.

Patents also play a relevant role in the development of AI systems. As with copyright, patent applications naming an AI system (AIS) as the inventor have been refused in the US, the UK, and the EU, among other jurisdictions. A WIPO report highlights that after the introduction of transformer models in 2017, both patent applications and scientific publications increased, with scientific publications exploding after the 2022 release of ChatGPT.

Additionally, trade secrets are commonly used to protect algorithms or other valuable information that either cannot be patented or that firms prefer not to disclose. While trade secrecy is a legitimate practice, it can sometimes impact AI transparency and explainability, requiring careful calibration by the courts, as expressed by Advocate General de la Tour in a case concerning the GDPR.

Could you outline some practical examples of how companies can take IP issues into account to protect their business?

Clare SHIN (CS): As AI becomes more sophisticated and is applied in more areas, the impact of AI on IP creates a constantly changing landscape. Programs designed to protect IP in this environment must be reviewed regularly to adapt to changes in AI capabilities and uses. Close coordination between a company’s legal and IT teams, with the full support of senior management, is required. The legal team must provide the IT team with specific information on what constitutes a violation of IP rights. The IT team must then devise the means to identify any of those violations, and report the violations to the legal team for their action. Senior management must be kept informed by both teams of what is occurring so that action plans can be implemented and resources can be allocated as needed.

Next, conducting regular IP audits is a critical practice. These audits allow companies to identify potential vulnerabilities in their IP portfolio, ensuring that all innovations are adequately documented and protected and that no violations have occurred or gone unnoticed. Proactive audits help mitigate the risk of inadvertent exposure or unauthorized use of proprietary assets. Companies can also regularly monitor the market for IP infringements by leveraging AI-driven tools to detect unauthorized use of their technologies or content. Such monitoring empowers businesses to act quickly in enforcing their rights, deterring misuse, and preserving value.
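As a purely hypothetical sketch of what such AI-assisted monitoring might involve (the directory names, file format, and similarity threshold below are illustrative assumptions, not a description of any particular tool), a company could fingerprint its protected text assets and flag candidate documents collected from the web that resemble them closely enough to warrant legal review.

```python
# Hypothetical sketch of automated infringement monitoring: compare a company's
# protected text assets against candidate documents gathered from the web and flag
# near-duplicates for the legal team. Production tools would use far more robust
# techniques (perceptual hashing for images, embeddings, manual review of every hit).
from difflib import SequenceMatcher
from pathlib import Path

SIMILARITY_THRESHOLD = 0.9  # illustrative cut-off for escalating a match


def flag_possible_infringements(protected_dir: str, candidates_dir: str) -> list[tuple[str, str, float]]:
    """Return (protected_file, candidate_file, similarity) triples above the threshold."""
    protected = {p.name: p.read_text(errors="ignore") for p in Path(protected_dir).glob("*.txt")}
    candidates = {c.name: c.read_text(errors="ignore") for c in Path(candidates_dir).glob("*.txt")}
    hits = []
    for p_name, p_text in protected.items():
        for c_name, c_text in candidates.items():
            score = SequenceMatcher(None, p_text, c_text).ratio()
            if score >= SIMILARITY_THRESHOLD:
                hits.append((p_name, c_name, score))  # escalate to the legal team for review
    return hits
```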

Employee education is an often overlooked yet vital component of IP protection. Training employees on IP policies fosters a culture of respect for intellectual property and reduces the risk of internal breaches. Such education ensures that the staff understand their role in maintaining the confidentiality of sensitive information and act ethically and responsibly.

Finally, developing a robust IP protection strategy is essential. The legal team can focus resources on securing patents for AI-related technologies, relying on trade secrets to safeguard proprietary algorithms and datasets, and using copyright to protect AI-generated content, thereby establishing ownership and enabling enforcement of creative outputs.

Companies can also ensure confidentiality through strict access controls and contracts such as non-disclosure agreements, or through cross-licensing agreements with other firms. The benefits include reduced litigation risk and greater innovation and creativity through the mutual use of AI technologies.

What legal obligations do companies have regarding transparency in using AI models?

MM: First, let’s clarify who must comply with the AIA. The bulk of AIA obligations falls on providers, defined in Article 3(3) as natural or legal persons, public or private, that develop AI systems or general-purpose AI models and place them on the EU market, even free of charge. Deployers also have certain obligations. According to Article 3(4), deployers are natural or legal persons, public or private, that use an AI system under their authority, except when the system is used for personal, non-professional activities. Consequently, a university professor using an AI system to prepare lectures would be subject to the AIA. Finally, the AIA imposes some obligations on product manufacturers, importers, and distributors.

AI transparency has become a widespread demand, and in certain contexts, such as under the AIA, it may even be a legal requirement. Generally speaking, AI transparency is considered an ethical or good practice requirement. AI transparency and responsible disclosure are among the OECD’s AI recommendations, a non-binding source of soft law. Accordingly, AI firms should disclose “meaningful information, appropriate to the context, and consistent with the state of the art.”

Leaving aside ethical and voluntary commitments, various legal rules in the EU mandate different degrees of transparency concerning AIS and personal data when they serve as inputs. For instance, the duty to inform under Articles 13 and 14 of the GDPR may apply to AIS developers. Recital 27 of the AIA states that “transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.” The AIA emphasizes transparency through various provisions, such as Article 13 (instructions for use concerning high-risk systems) and Article 50 (obligations to inform users they are interacting with an AI system).

However, full transparency may not always be required or feasible. Instead, firms should strive to adopt a realistic transparency standard while complying with applicable legal rules. Best practices for companies to consider regarding transparency obligations include comprehensive logging and tracking of all data and AI processing activities, as well as maintaining clear documentation.
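To make this concrete, the sketch below shows one way a deployer might combine two practices mentioned in this discussion: informing users that they are interacting with an AI system and keeping a traceable log of processing activities. The field names, log format, and the `log_ai_interaction` helper are illustrative assumptions, not requirements drawn from the AIA or the GDPR.

```python
# Illustrative sketch of an AI audit trail: show users a disclosure that they are
# interacting with an AI system and append one structured record per interaction
# for later traceability. Field names and storage format are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

AI_DISCLOSURE = "You are interacting with an AI system; its answers may contain errors."


def log_ai_interaction(model_name: str, purpose: str, user_input: str, output: str) -> None:
    """Record metadata about one AI interaction without storing raw personal data."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "purpose": purpose,               # why the system was used (documentation aid)
        "disclosure_shown": AI_DISCLOSURE,
        "input_chars": len(user_input),   # log sizes rather than raw content
        "output_chars": len(output),
    }
    logging.info(json.dumps(record))


# Example usage with a hypothetical model name:
log_ai_interaction("assistant-model-v1", "customer support draft", "Where is my order?", "…")
```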

Some sectors are heavily regulated (for example, healthcare and finance). How can organizations and companies in these sectors ensure their AI adoption aligns with sectoral regulations and that they are prepared for potential backlash if AI makes errors (liability)?

MM: For firms in already highly regulated sectors, laws like the AIA add another layer of complexity to compliance strategies. For instance, most AI systems in healthcare and finance may be classified as high-risk under the EU’s AI Act, subjecting them to stringent compliance requirements. To ensure AI adoption aligns with other sectoral regulations, firms in these sectors can draw valuable lessons from their compliance with the GDPR, to which they are also subject due to the processing of sensitive personal data.

In cases where AI errors cause harm to clients or other individuals, companies are subject to the common rules of tort law (in common law countries) or extracontractual liability (in civil law countries). Therefore, the best policy is to invest in prevention, as it is more cost-effective than remediation and helps preserve a company’s reputation.

Over and above legal concerns, AI also poses many ethical questions for companies. What can companies do to ensure they address both the ethical and legal challenges posed by this technology?

MM: It is important to distinguish between legal and ethical obligations. Legal obligations are imposed by governments, and non-compliance results in sanctions. In contrast, ethical rules arise voluntarily from the consensus of the parties involved, and breaching them does not usually lead to legal sanctions.

Sometimes, laws incorporate ethical standards, transforming them into legal obligations. For instance, the AI Act imposes obligations of transparency and explainability, which were previously ethical ‘desiderata’, to minimize the risk of algorithmic bias and discrimination, among other harms.

While adopting higher ethical standards beyond legal and regulatory compliance is optional, it can be a winning strategy, especially in the globally fragmented AI legal environment. Raising the bar of corporate responsibility to include the highest ethical standards can help minimize the risk of non-compliance with legal rules across different jurisdictions and positively impact reputation. However, it is important to note that this approach comes with additional costs.

Finally, some researchers suggest that human rights may provide a better framework for addressing AI challenges, especially since AI is a phenomenon that transcends national borders, thus calling for international solutions. In September 2024, the Council of Europe opened for signature the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, the first international legally binding treaty in this field. This treaty aims to ensure that AI system activities are fully consistent with human rights, democracy, and the rule of law, while also promoting technological progress and innovation. It has already been signed by the US, the UK, and the European Union, among others.

