Terah Lyons

Born: Palm Springs, California
Education: Harvard University
Occupation: Executive Director
Employer(s): Partnership on AI; Office of Science and Technology Policy, 2015–2017
Known for: Artificial intelligence, technology policy
Awards: Thouron Award
Website: terahlyons.com

Terah Lyons is an American expert in artificial intelligence and technology policy. She was the executive director of the Partnership on AI and a policy advisor to United States Chief Technology Officer Megan Smith in President Barack Obama's Office of Science and Technology Policy.

Education and early career

Lyons was raised in Fort Collins, Colorado. [1] She received her bachelor's degree from Harvard University in 2014 in Social Studies, with a focus on network theory and complex systems. During her time at Harvard, she received the Thouron Award in 2012 to study for a summer at the University of Cambridge. [2] While at Harvard, she also worked as a research analyst for David Gergen at the Kennedy School of Government's Center for Public Leadership. [3] Her senior thesis was entitled "Social Networks and Shibboleths: Gender Diversity and Stratification in Structures of Elite Corporate Leadership." [4] Following her time at Harvard, she became a Fellow with the Harvard School of Engineering and Applied Sciences based in Cape Town, South Africa. [1]

Public service career

Lyons joined President Barack Obama's Office of Science and Technology Policy, which was run by the President's Science Advisor John Holdren, in 2015. In 2016, she began working for the United States Chief Technology Officer Megan Smith. During her tenure as a civil servant, her portfolio centered on machine intelligence, including AI, robotics, and intelligent transportation systems.

Lyons co-directed The White House Future of Artificial Intelligence Initiative, which engaged stakeholders—industry, academia, government employees, the international community, and the public at large—to develop a domestic policy strategy on machine intelligence. That work culminated in a report called Preparing for the Future of Artificial Intelligence, which detailed opportunities, considerations, and challenges in the field of AI. [5] [6] Highlights from the report include policy recommendations to ensure that the power of AI is channeled to advance social good and improve government operations, recommendations for regulating AI technologies such as automated vehicles, and recommendations to develop a diverse workforce equipped to tap into the potential and tackle the challenges of the AI revolution. The report drew on five public workshops and a request for public comment that received 161 responses. [7]

Lyons also helped draft the December 2016 report Artificial Intelligence, Automation, and the Economy, which detailed the ways in which artificial intelligence will transform the American economy in the coming years and decades. [3] [8] The report outlined five primary economic efforts that should be priorities for policymakers, including preparing for changes in the skills demanded by the job market and for the shifting of the job market itself as some jobs disappear while new opportunities are created.

The Partnership on AI

In 2017, Lyons was recruited to lead the Partnership on AI, a research and policy organization founded by Facebook, Google, Microsoft, and a number of other technology corporations. [1] [9] The mission of the nonprofit organization is to research and provide thought leadership around the direction of AI technologies, including machine perception, learning, and automated reasoning. As stated on its website, the goals of the organization are to: (1) develop and share best practices on research and development related to AI; (2) advance public understanding; (3) provide opportunities for engagement across diverse audiences; and (4) identify new efforts for the future of AI for the social good. [10] [11] Lyons has testified before the House of Representatives Oversight & Government Reform Committee's Subcommittee on Information Technology to discuss the promise of artificial intelligence and advocate for the importance of the Partnership on AI. [12]

In her position as executive director, Lyons has advocated for the importance of diversity, equity, and inclusion in the AI workforce. In 2018, she spoke at The New York Times New Work Summit about why inclusion is a crucial issue to address in the fields of computer science and AI especially. [13] [14] Lyons is also a member of the Center for a New American Security's Task Force on Artificial Intelligence and Security, which investigates the opportunities and challenges AI poses to American security. [15] [16]

In 2022, Lyons served on the Steering Committee for Stanford University's 2022 AI Index Report. The report, produced in partnership with organizations such as Google, OpenAI, and Open Philanthropy, tracks the pace of AI advancement across research, development, technical performance, ethics, education, policy, and governance. It draws on a broad range of data from academic, private, and non-profit organizations, in addition to self-collected data and analysis. [17]

References

  1. Manyika, Sarah Ládípọ̀. "Is She a Superhero for Artificial Intelligence?". OZY. Retrieved 2018-09-10.
  2. "Alumni US | Harvard University (2014)". alumnius.net. Retrieved 2018-09-10.
  3. "Q&A With Terah Lyons, Former White House Policy Advisor on AI". Retrieved 2018-09-10.
  4. "Fall 2013 Senior Thesis Titles. Spring 2014 Senior Thesis Titles" (PDF). docplayer.net. Retrieved 2018-09-10.
  5. "The Administration's Report on the Future of Artificial Intelligence". whitehouse.gov. 2016-10-12. Retrieved 2018-09-10.
  6. Preparing for the Future of Artificial Intelligence (PDF) (Report). Executive Office of the President of the United States. October 2016.
  7. Smith, Megan (2016-09-06). "Public Input and Next Steps on the Future of Artificial Intelligence". Medium. Retrieved 2018-09-10.
  8. "Artificial Intelligence, Automation, and the Economy". whitehouse.gov. 2016-12-20. Retrieved 2018-09-10.
  9. "Partnership on AI Announces Executive Director Terah Lyons and Welcomes New Partners". The Partnership on AI. 2017-10-17. Retrieved 2018-09-10.
  10. "About". The Partnership on AI. Retrieved 2018-09-10.
  11. "22 companies join Partnership on AI, begin to study AI's impact on work and society". TechRepublic. Retrieved 2018-09-10.
  12. Lyons, Terah (2018-04-18). Written Testimony for Game Changers: Artificial Intelligence Part III – AI and Public Policy (PDF) (Report). House of Representatives Oversight & Government Reform Committee, Subcommittee on Information Technology.
  13. The New York Times (2018-02-20). "Terah Lyons on A.I.'s 'Dismal' Diversity". NYTimes.com - Video. Retrieved 2018-09-10.
  14. Snow, Jackie. "For better AI, diversify the people building it". MIT Technology Review. Retrieved 2018-09-10.
  15. "Washington waking up to threats of AI with new task force". TechCrunch. Retrieved 2018-09-10.
  16. Metz, Cade (2018-03-15). "Pentagon Wants Silicon Valley's Help on A.I." The New York Times. Retrieved 2018-09-10.
  17. "Announcing the 2022 AI Index Report". hai.stanford.edu. Retrieved 2022-09-21.
  18. "Increasing Momentum Around Tech Policy". Mozilla Press Center. 2017-06-08. Retrieved 2018-09-10.
  19. "Hall of Fame". 100 Brilliant Women in AI Ethics. Retrieved 2021-05-13.