Jade Leung (engineer)

Jade Leung
Leung in 2015
Occupation: Researcher
Awards: Rhodes Scholarship, 2016
Education: University of Auckland; University of Cambridge
Alma mater: University of Oxford

Jade Leung is the Chief Technology Officer of the United Kingdom's AI Safety Institute, where she designs and oversees safety evaluations for frontier AI models. [1] [2]

Until October 2023, she was Governance Lead at OpenAI, [3] where she focused on the safe development of artificial intelligence, particularly regulation and safety protocols for advances in artificial general intelligence. [4] She also co-founded and was the inaugural Head of Research and Partnerships at the Centre for the Governance of AI at the University of Oxford. [5]

Career

Leung graduated from the University of Auckland in New Zealand with a Bachelor of Engineering (Honours), First Class, in Civil Engineering in 2015. [6] She won a Rhodes Scholarship in 2016. [7] She completed her DPhil in International Relations in the Department of Politics and International Relations at the University of Oxford in 2019. [8] Her doctoral thesis was titled "Who Will Govern Artificial Intelligence? Learning From the History of Strategic Politics in Emerging Technologies". [9]

In October 2023, she left OpenAI to join the United Kingdom's AI Safety Institute as Chief Technology Officer. There she leads the institute's evaluations work, developing empirical tests for dangerous capabilities and safety features of frontier AI systems. [1]

In 2024, she was included in Time's list of the "100 Most Influential People in AI". [1]


References

  1. Perrigo, Billy (2024-09-05). "TIME100 AI 2024: Jade Leung". TIME. Retrieved 2024-09-19.
  2. "Fourth progress report". May 20, 2024.
  3. "OECD AI Policy Observatory Portal". oecd.ai. Retrieved 2024-05-18.
  4. Monk, Felicity (November 25, 2022). "Brave New Zealand World: A chat with AI expert Dr Jade Leung". stuff.co.nz. Retrieved 2024-05-18.
  5. "Jade Leung". University of Oxford. Retrieved 2024-05-18.
  6. "Graduate Search". University of Auckland. Retrieved 2024-05-18.
  7. "Jade Leung". Rhodes Trust. Retrieved 2024-05-18.
  8. "Jade Leung - Future of Humanity Institute". 2023-04-08. Archived from the original on 2023-04-08. Retrieved 2024-05-18.
  9. Leung, Jade (July 2019). "Who Will Govern Artificial Intelligence? Learning From the History of Strategic Politics in Emerging Technologies". Oxford University Research Archive. Retrieved 2024-09-19.