Terah Lyons | |
---|---|
AI for Good Global Summit 2018 | |
Born | Palm Springs, California
Education | Harvard University
Occupation | Executive Director
Employer(s) | Partnership on AI; Office of Science and Technology Policy, 2015–2017
Known for | Artificial intelligence, technology policy
Awards | Thouron Award
Website | terahlyons
Terah Lyons is an American technology policy expert known for her work in the field of artificial intelligence. Lyons was the executive director of the Partnership on AI and served as a policy advisor to United States Chief Technology Officer Megan Smith in President Barack Obama's Office of Science and Technology Policy.
Lyons was raised in Fort Collins, Colorado. [1] She received her bachelor's degree from Harvard University in 2014 in social studies, with a focus on network theory and complex systems. During her time at Harvard, she received the Thouron Award in 2012 to study for a summer at the University of Cambridge. [2] While at Harvard, she also worked as a research analyst for David Gergen at the Harvard Kennedy School of Government. [3] Her senior thesis was entitled "Social Networks and Shibboleths: Gender Diversity and Stratification in Structures of Elite Corporate Leadership." [4] After graduating, she became a Fellow with the Harvard School of Engineering and Applied Sciences based in Cape Town, South Africa. [1]
Lyons joined President Barack Obama's Office of Science and Technology Policy, led by the President's Science Advisor John Holdren, in 2015. In 2016, she began working for United States Chief Technology Officer Megan Smith. During her tenure as a civil servant, her portfolio centered on machine intelligence, including AI, robotics, and intelligent transportation systems.
Lyons co-directed The White House Future of Artificial Intelligence Initiative, which engaged stakeholders—industry, academia, government employees, the international community, and the public at large—to develop a domestic policy strategy on machine intelligence. That work culminated in a report called Preparing for the Future of Artificial Intelligence, which detailed opportunities, considerations, and challenges in the field of AI. [5] [6] Highlights from the report include policy recommendations to ensure that the power of AI is channeled to advance social good and improve government operations, recommendations for regulations on AI technologies such as automated vehicles, and recommendations to develop a diverse workforce equipped to tap into the potential and tackle the challenges that will come with the AI revolution. The report was the culmination of five public workshops and a request for public comment that received 161 responses. [7]
Lyons also helped draft the December 2016 report Artificial Intelligence, Automation, and the Economy, which detailed the ways in which artificial intelligence will transform the American economy in the coming years and decades. [3] [8] The report outlined five key primary economic efforts that should be a priority for policymakers, including preparing for changes in skills demanded by the job market and the shifting of the job market as some jobs disappear while new opportunities are created.
In 2017, Lyons was recruited to lead the Partnership on AI, a research and policy organization founded by Facebook, Google, Microsoft, and a number of other technology corporations. [1] [9] The mission of the nonprofit organization is to research and provide thought leadership on the direction of AI technologies, including machine perception, learning, and automated reasoning. As stated on its website, the goals of the organization are to: (1) develop and share best practices on research and development related to AI; (2) advance public understanding; (3) provide opportunities for engagement across diverse audiences; and (4) identify new efforts for the future of AI for the social good. [10] [11] Lyons has testified before the Subcommittee on Information Technology of the House Oversight and Government Reform Committee to discuss the promise of artificial intelligence and advocate for the importance of the Partnership on AI. [12]
In her position as executive director, Lyons has advocated for the importance of diversity, equity, and inclusion in the AI workforce. In 2018, she spoke at The New York Times New Work Summit about why inclusion is an especially crucial issue to address in the fields of computer science and AI. [13] [14] Lyons is also a member of the Center for a New American Security's Task Force on Artificial Intelligence and Security, which investigates the opportunities and challenges AI poses to American security. [15] [16]
In 2022, Lyons served on the Steering Committee for Stanford University's 2022 AI Index Report. The report, produced in partnership with organizations such as Google, OpenAI, and Open Philanthropy, tracks the pace of AI progress across research and development, technical performance, ethics, education, policy, and governance. It draws on a broad range of data from academic, private, and nonprofit organizations, in addition to self-collected data and analysis. [17]