WIPO: Generative AI and IP 

Applications of generative AI (genAI) such as ChatGPT and Midjourney have evolved from objects of AI research into mainstream tools. However, only a limited number of companies have the capabilities and resources to develop these kinds of applications. Three elements are essential: capable AI engineers, large amounts of computing power and large amounts of data. WIPO focuses on this last element in a recent publication on genAI.

Many organisations that use genAI do not develop it themselves but obtain it from a third party such as Microsoft or OpenAI. This means that the majority of organisations have little to no control over how their genAI applications are trained and over the data used for that training. WIPO sees this as a source of potential risks, both general and business-related, and notes that the application's terms of use can have a significant impact on those risks.

The report identifies the following general points to consider when determining which genAI application to use: 

  • What is the application capable of, and what is its specialisation?
  • Under what terms is the application offered? What guarantees does the provider give, and what is excluded?
  • How was the training data collected, and what guarantees does the provider give about it?
  • How is the performance of the application controlled, and how does it handle inappropriate or illegal outputs?
  • Was the application developed under specific laws, and does it comply with them?

Confidential information 

If the provider of the AI system uses user inputs for further development, confidential information entered into the system may lose its confidentiality and become accessible to other customers of the provider. Prompts may also be stored for quality assurance or to check for inappropriate use, which can likewise result in a loss of confidentiality. To mitigate these risks, ensure that the provider does not use your prompts for further training, and consider deploying a local version. A staff policy can also help to keep confidential information out of prompts.
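
As a minimal sketch of what such a policy could look like in practice, the example below filters prompts for patterns that often signal confidential information before anything is sent to an external provider. The patterns and the `send_to_provider` function are assumptions made for illustration; a real policy would be tailored to your organisation's own data and provider.

```python
import re

# Hypothetical patterns that a staff policy might treat as confidential;
# a real deployment would tailor these to the organisation's own data.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                  # e-mail addresses
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like numbers
    re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"),  # marked text
]


def redact(prompt: str) -> str:
    """Replace fragments that look confidential with a placeholder."""
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt


def safe_prompt(prompt: str, send_to_provider) -> str:
    """Redact the prompt before handing it to the external genAI provider."""
    return send_to_provider(redact(prompt))


if __name__ == "__main__":
    example = "Summarise this internal only memo for client jane.doe@example.com"
    print(redact(example))
    # -> Summarise this [REDACTED] memo for client [REDACTED]
```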

Infringement of intellectual property

For many genAI applications, it is assumed that a large part of their training data was covered by intellectual property rights. This is the subject of ongoing legal disputes, and there is no certainty yet about how genAI and IP rights will interact. Using applications that were developed with respect for IP is therefore the safer approach. Some jurisdictions provide exceptions for data mining that might apply to the training of AI, but these have not yet been tested in court.

To mitigate these risks, only use IP-compliant tools trained on licensed or open data, seek indemnities from providers, vet training datasets and implement technical measures. Regulatory obligations to disclose IP-protected material used in training may emerge in the near future. Staff policies and training can help to minimise the risk of producing infringing outputs, and measures such as plagiarism checks can be implemented.
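
As an illustration of the kind of technical measure meant here, the sketch below compares a generated output against a set of reference texts using a naive word n-gram overlap and flags outputs that copy heavily from a reference. This is an assumed, simplified stand-in for a plagiarism check; dedicated plagiarism-detection tools work at a much larger scale.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Split text into lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_ratio(output: str, reference: str, n: int = 5) -> float:
    """Fraction of the output's n-grams that also appear in the reference text."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(reference, n)) / len(out_grams)


def flag_possible_copy(output: str, references: list, threshold: float = 0.2) -> bool:
    """Flag the output for human review if it overlaps heavily with any reference."""
    return any(overlap_ratio(output, ref) >= threshold for ref in references)
```

An output flagged by such a check would then go to a human reviewer rather than being published directly.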

Deepfake content 

Some genAI applications can mimic the faces or voices of people and generate video or audio. Unauthorised use of someone's likeness or voice is often prohibited on multiple grounds, both privacy-related and IP-related. Mimicking someone without their knowledge can result in both reputational harm and legal issues.

To prevent problems with deepfakes, a staff policy restricting the use of these genAI applications can help, as can limiting the use of personal data in prompts. Where there is a legitimate application for mimicry, make sure the person in question approves, or be prepared to defend your actions.

IP rights in AI and protection of AI outputs 

The protection of content generated by AI tools is not clear, nor is it clear who should hold the IP rights to it. Contracts can give some certainty for the time being, especially between you and the provider of a genAI application. Most jurisdictions are very clear that IP rights must be attributed to a human being, although they can subsequently be owned by legal entities. Review the terms of a prospective genAI application to see how the provider treats the IP rights in the system's outputs. Documenting the role of humans during generation and in changing the output before publication can also help to establish IP rights in case of a legal dispute. Lastly, consider not using genAI in cases where IP protection is crucial for your business.

Concrete steps 

With these risks in mind, you can focus on the following areas to manage them:

  • Implement a staff policy and training to guide the use of genAI. 
  • Determine the legal risks and weigh them against the business case. 
  • Monitor the AI tools used and, if needed, provide specific instructions on their use.
  • Review the terms and conditions of external AI tools and their usage instructions. 
  • Keep records of the usage of AI tools and of the further adaptation by humans (a minimal logging sketch follows this list).
  • If possible, assess the training data and training methods. 
  • Determine the relationship between you and the provider with respect to IP rights and ownership.
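
To make the record-keeping step concrete, the sketch below logs each use of a genAI tool together with the prompt, the raw output and a note on how a human adapted it. The record fields and the JSON Lines format are assumptions chosen for this example; the report does not prescribe any particular format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class GenAIUsageRecord:
    """One logged use of a genAI tool, including the human adaptation of its output."""
    tool: str
    user: str
    prompt: str
    raw_output: str
    human_adaptation: str  # how, and by whom, the output was changed before use
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


def append_record(record: GenAIUsageRecord,
                  logfile: Path = Path("genai_usage_log.jsonl")) -> None:
    """Append the record as one JSON line, so the log stays easy to audit."""
    with logfile.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    # Example: a marketing text drafted by a genAI tool and rewritten by staff.
    append_record(GenAIUsageRecord(
        tool="example-genai-tool",
        user="j.smith",
        prompt="Draft a product announcement for our new service.",
        raw_output="(model output here)",
        human_adaptation="Intro rewritten and claims checked by j.smith before publication.",
    ))
```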