Dr Michaela MacDonald
Lecturer, School of Electronic Engineering and Computer Science
Queen Mary University of London

AI, Law and Governance
Lecture 5

Discussion
The fair use doctrine says that brief excerpts of copyright material may, under certain circumstances, be quoted for purposes such as criticism, news reporting, teaching, and research, without the need for permission from, or payment to, the copyright holder.
Should AI companies be allowed to use works under copyright protection without consent?

Assessment
• Written assignment on one of the following topics
• 1,500 words, including footnotes
• Use a referencing system of your choice, accurately and consistently
• Deadline – 26th May 2024, midnight
• If you use any generative AI tools in the process of writing the coursework, please acknowledge and identify the outputs

Topic 1
Synthetic voice technology can be used to clone the voices of actual humans as well as to generate synthetic voices, creating ‘deepfakes’ that can be used for fraud, identity theft, and financial scams. Spotify's new AI Voice Translation feature enables selected podcasts to be translated into other languages, not by speakers of that language, but in synthetic AI voices that match the original speaker's style, creating a “more authentic listening experience that sounds more personal and natural than traditional dubbing”.
Discuss the current state of the law in ONE jurisdiction of your choice and how it may protect the use of voice for commercial purposes.

Topic 2
Countries worldwide are designing and implementing AI governance laws and regulations, including comprehensive legislation, focused legislation for specific use cases, national AI strategies or policies, and voluntary guidelines and standards.
Compare and contrast TWO different jurisdictions of your choice with regard to their regulatory approach to AI.

Liability
• Two primary reasons the law imposes liability:
  • to deter deleterious action or inaction; and
  • to compensate victims when harm occurs
• Subjective liability is based on a failure of care on the part of the defendant, usually described as negligence
  • Tort of negligence
• Objective liability
  • Contractual liability
  • Statutory liability

Tort Law – Negligence
• To establish a claim, a plaintiff must prove:
  1. the existence of a legal duty on the part of the defendant not to expose the plaintiff to unreasonable risks;
  2. a breach of that duty – a failure on the part of the defendant to act reasonably;
  3. a causal connection between the defendant’s conduct and the plaintiff’s harm; and
  4. actual harm to the plaintiff resulting from the defendant’s negligence
• A person can be held liable only when they should reasonably have foreseen that their negligent act would endanger others (foreseeability)

Statutory and contractual liability
• For example, the EU Product Liability Directive establishes a regime of strict liability for defective products
• Consumer protection law also creates strict, no-fault liability: if a product is defective, the manufacturer is responsible

AI – Product or service
• Liability standards differ for each: liability is strict (no-fault) for products, and fault-based for services
• AI – a product or service?
• If AI is a product and the liability arises in use, the question is whether the user was negligent
• If the user was negligent, the further question is whether there was contributory negligence
• If the liability arises in creation, then in theory liability is objective, but it still comes down to a question of negligence at the design, training and testing stage
• If AI is a service, then the question is simply whether the service provider was negligent

Explainable AI (XAI)
• XAI is a set of tools and frameworks that help developers, users and regulators understand and interpret the outputs of machine learning models
• Explanations can take many different forms:
  • text explanations, visualizations, local explanations, explanations by example, explanations by simplification, and feature relevance (a minimal feature-relevance sketch follows at the end of this section)

AI and Liability
• United States
  • Consumer protection law
  • Traditional product liability
• EU
  • Proposed AI Liability Directive (AILD) and
  • Proposed revisions to the Product Liability Directive (PLD)

EU Product Liability Directive
• Includes AI systems within the scope of the Directive, by virtue of the inclusion of “software” within the definition of “product” – AI system providers will therefore potentially be liable for any defective AI systems placed on the market
• The PLD provides for recovery of any material losses that result from a product defect, whilst compensation for non-material losses falls to the laws of each Member State
• Updated concept of defectiveness

EU AI Liability Directive
• Liability is seen as one of the top three barriers to the use of AI by European companies
• It lays down uniform rules on access to information and alleviation of the burden of proof in relation to damage caused by AI systems, establishing broader protection for victims and fostering the AI sector by increasing guarantees
• Key points:
  1. Rebuttable presumption of causality
  2. Access to relevant evidence

Legal status of ‘electronic persons’
• European Parliament Resolution on Civil Law Rules on Robotics (2017), and its recommendation to the European Commission in paragraph 59 f):
“Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently;”

Response
• The objective of the proposal is to resolve the issues of liability and legal agency (the capacity to enter into contracts and to acquire IPRs)
• What are the technical, legal and ethical justifications for creating legal personality?
  • Technical
    • Super-intelligence and autonomy – a realistic prediction, or an overvaluation of the actual capabilities of even the most advanced AI systems and robots?
  • Legal and ethical
    • Is there a legal theory that can support creating a separate legal entity for intelligent autonomous agents?
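The feature-relevance sketch referred to in the XAI slide above: a minimal, hypothetical illustration of one XAI technique, not part of the lecture materials. It assumes Python with scikit-learn available, and uses permutation importance (one common way of computing feature relevance) on a public demonstration dataset; the model and dataset are chosen purely for illustration.

```python
# Hypothetical sketch: feature relevance via permutation importance.
# Model and dataset are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public demonstration dataset; any tabular classification task would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# the larger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most relevant features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

In the liability context discussed in this lecture, explanations of this sort are one way a claimant, provider or court-appointed expert might probe how a model reached a contested output.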
Digital peculium
• Under Roman law, slaves lacked legal personality and yet carried out the bulk of commercial activities on behalf of their masters
• The peculium was a bundle of assets allocated to a slave to carry out specific activities on behalf of their master
• A digital peculium is a special set of rules that would define the parameters of liability for autonomous agents in the context of commercial transactions

Case study
Consider a common-law jurisdiction scenario in which there are complications after surgery performed by an AI. The surgical AI removes a section of the bowel from a patient with bowel cancer. After the surgery, the patient has an unusual amount of nerve damage in the pelvic area, decreasing the patient’s quality of life. In attempting to claim for damage allegedly caused by the AI, the fault of the provider (or user) must be demonstrated or presumed by the court. The court is unlikely to assume that surgical complications carry an inherent presumption of fault, particularly where the type of harm (nerve damage) is a known risk, even if its extent (unexpectedly high) is not. The patient or claimant must then prove fault (a concept the Directive leaves to Member States) on the part of either the producer or the user.

Thank you!