Artificial intelligence – managing the risks

AI is big news - but it raises big legal questions. Here we consider: how can businesses protect their rights in content created by AI? And how can we ensure that AI systems are used responsibly?

 

CO-AUTHORS

Giles Pratt, Partner, Corporate and M&A, Data Regulation and Cyber, Intellectual Property, Freshfields Bruckhaus Deringer

Emily Rich and Prudence Buckland, Associates, Corporate and M&A, Freshfields Bruckhaus Deringer

 

Who owns the output of AI?

Businesses using AI will often want to ensure that they own anything the AI creates. IP law is of course capable of offering a solution, typically through the law of copyright, but does copyright exist in AI-generated content?

Under UK law, in order for a work to get copyright protection it must be ‘original’. The courts have interpreted this to mean that the author must have created the work through their own skill, judgement and effort. Similar concepts apply at the EU level. This suggests that work created by a non-human author may not be protected by copyright.

Even if copyright does arise in an AI-generated work, there is then the question of who owns it. Under UK law, ownership of a computer-generated work lies with the person who makes the arrangements necessary to create it. Traditionally, this has meant the human author who used the software. But when the role of the computer is upgraded from assistant to producer, the position is less clear. In cases of ‘simple’ AI it seems likely that the human using or directing the software would own the copyright. But in a world where AI can make unsupervised decisions based on ‘deep learning’ from previous data sets, things become more complex.

The EU Commission has identified the need to tackle these new legal challenges, so we might get some clarity in the future.

In the meantime, businesses that own or license AI should bolster their position using contractual protections. On a licensing deal, the parties should specify who owns the rights (if any) in the AI output – and also in any ‘learning’ enhancements to the AI itself that are generated by its analysis of the licensee’s own data sets.

How can we ensure AI is used responsibly?

Licences also provide a route to managing other risks of using AI. One risk that has provoked intense debate is the potential for AI systems to amplify bias – machine-learning algorithms are only as good as the data they are ‘trained’ on and, if they are applied to scenarios for which they weren’t designed, that risk increases. There’s also the risk of AI being used for nefarious purposes.

At present there are no laws to mitigate these risks - but a partial solution might be the Responsible AI Licenses initiative (RAIL), which was set up by researchers from Google, Microsoft and IBM. RAIL’s objective is to ensure that AI developers retain enough control to ensure their AI is not used in an ‘irresponsible and harmful’ manner.

So far, RAIL has developed two licences: an end-user licence and a source code licence. The terms of the end-user licence prohibit the AI software from being used for hacking or criminal profiling, among other things. The source code licence prevents AI code from being used for surveillance, computer-generated ‘deepfakes’, healthcare services (including insurance premiums) and criminal profiling. The licence terms are incorporated into any source code modules before they are released, and anyone using the software or the underlying code will be contractually obliged to comply.

The licences are not mandated by law and therefore their take-up rate is likely to hinge on their perceived effectiveness. It’s also unclear whether developers will have the resources and technical ability to monitor compliance with the licence terms, and whether the licences are appropriate for all jurisdictions. Perhaps RAIL, or another independent organisation, will support monitoring and compliance in a similar way to the Open Source Initiative. We might also see new laws to encourage or mandate RAIL licences - although governments so far have focused on promoting AI investment and innovation, rather than prescribing how it should be commercialised.

Giles Pratt / Emily Rich / Prudence Buckland Freshfields Bruckhaus Deringer

For more information on the legal issues around AI, please see our AI hub.


Published 2019
