| Jade Leung | |
| --- | --- |
| Occupation | Researcher |
| Awards | Rhodes Scholarship, 2016 |
| Academic background | |
| Education | University of Auckland; University of Cambridge |
| Alma mater | University of Oxford |
Jade Leung is the Chief Technology Officer of the United Kingdom's AI Safety Institute, where she designs and oversees safety evaluations for frontier AI models. [1] [2]
Until October 2023, she was Governance Lead at OpenAI, [3] where she focused on the safe development of artificial intelligence, particularly regulation and safety protocols relating to advances in artificial general intelligence. [4] She also co-founded the Centre for the Governance of AI at the University of Oxford and served as its inaugural Head of Research and Partnerships. [5]
Leung graduated from the University of Auckland in New Zealand in 2015 with a Bachelor of Engineering (Honours) in Civil Engineering, awarded with First Class Honours. [6] She won a Rhodes Scholarship in 2016. [7] She completed her DPhil in International Relations in the Department of Politics and International Relations at the University of Oxford in 2019. [8] Her thesis was titled "Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies". [9]
In October 2023, she left OpenAI to join the United Kingdom's AI Safety Institute as its Chief Technology Officer. She leads the institute's work on evaluations, developing empirical tests for dangerous capabilities and safety features of frontier AI systems. [1]
In 2024, she was included in Time's list of the "100 Most Influential People in AI". [1]