Dr Michaela MacDonald
Lecturer, The School of Electronic Engineering and Computer Science
Queen Mary University of London

AI, Law and Governance
Lecture 1

What are you going to learn about?
AI, Law, and Governance
• Conceptualise the meaning and implications of artificial intelligence
• Identify possible risks associated with the design, development, and deployment of AI
• Examine the wide range of legal, regulatory and private governance approaches from around the world (EU, UK, US, China, Singapore) that address these risks while fostering innovation
• Consider the role of ethics and social responsibility

The module aims to…
• provide you with an understanding of the legal, regulatory and practical challenges arising from the design, development and deployment of AI, including accuracy, accountability, fairness and discrimination
• introduce legal, regulatory and non-regulatory mechanisms (standards and best practices) that govern the design, development and deployment of AI technologies across different sectors
• enable you to reflect on the benefits and limitations of different governance frameworks in relation to AI

Module outline
Lecture 1 – Introduction – Definitions, challenges, governance
Lecture 2 – AI Ethics
Lecture 3 – AI and Data
Lecture 4 – Generative AI and IP
Lecture 5 – AI and Liability

Module structure
• In-person lectures; attendance is compulsory
• Lecture slides, recommended reading and further learning activities will be available on IS after each lecture
• In-class activity at the end of each lecture
• A written assessment; the instructions and deadline will be discussed in Lecture 5

About me
• Graduated from MUNI in 2009
• Lecturer at Queen Mary University of London, the Centre for Commercial Law Studies (CCLS) and the School of Electronic Engineering and Computer Science (EECS)
• Teaching Cybersecurity Law and Product Development; Interactive Entertainment Law; AI, Governance and Law
• London LLM, DL and Joint Programme with BUPT in China
• MTJG conference series, IELR
• Current research focuses on the impact of laws, norms and environmental constraints on users’ behaviour and interactions in Cyberspace
• Contact: michaela.macdonald@qmul.ac.uk

Lecture 1 – Introduction

Question
What does Artificial Intelligence mean to you?
Go to www.menti.com and use the code 8361 0195

AI – Definitions (Business)
• “… technologies emerging today that can understand, learn, and then act based on that information” (PwC’s definition)
• “… anything that makes machines act more intelligently” (IBM’s definition)
• “… a constellation of technologies that extend human capabilities by sensing, comprehending, acting and learning — allowing people to do much more” (Accenture’s definition)
• “… getting computers to do tasks that would normally require human intelligence” (Deloitte’s definition)
• “… the ability of machines to exhibit human-like intelligence” (McKinsey’s definition)

AI – Definitions (Media)
• “… the replication of human analytical and decision-making capabilities” (Steven Finlay, author of Artificial Intelligence and Machine Learning for Business, 2017)
• “… intelligence demonstrated by a machine or by software… [where] intelligence measures an agent’s general ability to achieve goals in a wide range of environments” (Calum Chace, author of Surviving AI)
• “Defining artificial intelligence isn't just difficult; it's impossible, not the least because we don't understand human intelligence. Paradoxically, advances in AI will help more to define what human intelligence isn't than what artificial intelligence is.” (O’Reilly)
• “… the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (definition in Encyclopedia Britannica by Prof. B.J. Copeland)
• “… a set of computer science techniques that enable systems to perform tasks normally requiring human intelligence” (Economist Intelligence Unit’s definition)
• “The scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.” (AITopics.org)

AI – Definitions (Academia)
• “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but A.I. does not have to confine itself to methods that are biologically observable.” (Stanford)
• “Artificial Intelligence is the study of man-made computational devices and systems which can be made to act in a manner which we would be inclined to call intelligent.” (The University of Louisiana at Lafayette)

AI – Definitions (Pioneers and experts)
• “The human brain is just a really fancy computer,” said Jeff Dean, a Google engineer and AI expert, at a company event in 2016. “It’s a computer made of meat.”
• “… the science and engineering of making intelligent machines” … “[where] intelligence is the computational part of the ability to achieve goals in the world” (original definition by John McCarthy, who coined the term ‘Artificial Intelligence’ in 1955)
• AI refers to “thinking machines” that are capable of simulating every aspect of learning or any other feature of intelligence → being human-like rather than becoming human
• To paraphrase A. C. Clarke, any sufficiently advanced algorithm will be indistinguishable from magic to those who do not understand it

AI – Terms and concepts
• Machine learning (ML): A subset of AI that often uses statistical techniques to give machines the ability to “learn” from data without being explicitly given the instructions for how to do so. This process is known as “training” a “model” using a learning “algorithm” that progressively improves model performance on a specific task (see the short sketch after these definitions).
• Supervised learning: A model attempts to learn to transform one kind of data into another kind of data using labelled examples. This is the most common kind of ML algorithm today.
• Unsupervised learning: A model attempts to learn a dataset’s structure, often seeking to identify latent groupings in the data without any explicit labels. The output of unsupervised learning often makes a good input to a supervised learning algorithm at a later stage.

AI – Terms and concepts
• Reinforcement learning (RL): An area of ML concerned with developing software agents that learn goal-oriented behaviour by trial and error in an environment that provides rewards or penalties in response to the agent’s actions; the agent’s learned strategy for choosing actions is called a “policy”.
• Deep learning (DL): An area of ML that attempts to mimic the activity in layers of neurons in the brain to learn how to recognise complex patterns in data. The “deep” in deep learning refers to the large number of layers of neurons in contemporary ML models, which helps them learn rich representations of the data and achieve better performance.
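To make these terms concrete, the sketch below shows, in Python, how a learning algorithm is “trained” on data to produce a model, first with labelled examples (supervised learning) and then without labels (unsupervised learning). It is a minimal illustration only, assuming NumPy and scikit-learn are installed; the particular estimators used (LogisticRegression, KMeans) and the toy data are arbitrary choices, not tools prescribed by this module.

# Illustrative sketch only: an algorithm is trained on data to produce a model,
# which is then used to make predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: labelled examples (features X, labels y).
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# "Training": the logistic regression algorithm fits a model to the labelled data.
model = LogisticRegression().fit(X, y)

# The trained model can then make predictions on new, unseen inputs.
print(model.predict(np.array([[2.5], [11.5]])))  # -> [0 1] on this toy data

# Unsupervised learning: the same features, but no labels; the algorithm
# looks for latent groupings in the data (here, two clusters).
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # two groups, e.g. [0 0 0 1 1 1] (cluster numbering may differ)

Reinforcement learning and deep learning follow the same algorithm-trains-model pattern, but with an interactive, reward-giving environment and with many-layered neural networks respectively.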
AI – Terms and concepts
• Algorithm: An unambiguous specification of how to solve a particular problem.
• Model: Once an ML algorithm has been trained on data, the output of the process is known as the model. This can then be used to make predictions.
• Natural language processing (NLP): Enabling machines to analyse, understand and manipulate language.

Question
Which AI-driven applications / tools have you used today?
Go to www.menti.com and use the code 8361 0195

Examples of real-life applications
• ID based on facial recognition software
• Personalised content on your social media accounts
• Choose-your-own-adventure-style text games with AI dungeon masters
• Gmail’s Smart Reply, Grammarly and similar features
• Google search
• Digital voice assistants like Siri and Alexa
• Smart home devices
• Travel apps
• Amazon recommendations

State of AI 2024
• AI beats humans on some tasks, but not on all.
• Industry continues to dominate frontier AI research.
• Frontier models get way more expensive.
• The United States leads China, the EU, and the UK as the leading source of top AI models.
• Robust and standardized evaluations for LLM responsibility are seriously lacking.
• The data is in: AI makes workers more productive and leads to higher quality work.
• People across the globe are more cognizant of AI’s potential impact—and more nervous.
(The AI Index Report, Stanford, 2024)

From technological optimism… to caution
(Stanford University, 2023 AI Index)

Question
What do you think the main challenges and risks are that AI governance should address?
Go to www.menti.com and use the code 8361 0195

Challenges and risks
• Lack of diversity and inclusivity
• Lack of transparency
• AI-encoded bias (gender, race, sexuality, wealth or income) and discrimination
• Lack of privacy
• Dark patterns
• Technology companies are the gatekeepers, but they often pursue corporate interests
• Technology for profit vs technology for the benefit of society

AI Governance
• Permissive versus prohibitive laws / no regulation / self-regulation / regulatory guidance / standards / best practice
• Application of hard law and soft law
• Plurality of governance models
• Evaluating their efficiency and suitability with regard to a specific industry and application
• Different approaches across the US, China, the UK and the EU
• How much regulation is enough?

Discussion
• Does AI bring fundamentally new and unique challenges?
• Can we address the gap between law and technology?
• How can we effectively govern a fast-evolving, dynamic concept such as AI?
• Who are the relevant stakeholders that should be involved in the process?

Thank you!