Wendell Wallach | |
---|---|
Born | April 21, 1946, Torrington, Connecticut |
Nationality | American |
Education | Wesleyan University, Harvard University |
Employer(s) | Scholar, Yale University's Interdisciplinary Center for Bioethics; Senior Fellow, Carnegie Council for Ethics in International Affairs |
Website | http://wwallach.com |
Wendell Wallach [1] (born April 21, 1946) is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. [2] [3] He is a scholar at Yale University's Interdisciplinary Center for Bioethics, [4] [1] a senior advisor to The Hastings Center, [5] and a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs, [1] where he co-directs the "Artificial Intelligence Equality Initiative" with Anja Kaspersen. [6] Wallach is also a fellow at the Center for Law and Innovation at the Sandra Day O'Connor School of Law at Arizona State University. [2] He has written two books on the ethics of emerging technologies: [2] [1] Moral Machines: Teaching Robots Right from Wrong (2010) [7] and A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (2015). [8] In a podcast published by the Carnegie Council for Ethics in International Affairs (CCEIA), Wallach discusses his professional, personal, and spiritual journey, as well as some of the biggest conundrums facing humanity in the wake of the bio/digital revolution. [9]
Wallach was born in Torrington, Connecticut. [10] He received his Bachelor of Arts from Wesleyan University in 1968. [11] In 1971 he received his master's degree in education from Harvard University, [11] [3] and afterwards spent time in India, where he explored spirituality and processes of cognition. [11] In 1978 he published his first book, Silent Learning: The Undistracted Mind (Journey Publications). [12]
In the 1980s and 1990s, Wallach worked in computer consulting as founder and president of Farpoint Solutions LLC and Omnia Consulting Inc. [13] [2] [11] These groups served clients such as the State of Connecticut, PepsiCo International, and United Aircraft. [2] [11] He sold his interests in both companies in 2001. [11]
In 2004 and 2005, Wallach taught undergraduate seminars at Yale University about robot ethics, and in 2005 he became chair of the Technology and Ethics Study Group at Yale University's Interdisciplinary Center for Bioethics. [14] In 2009, Wallach published Moral Machines: Teaching Robots Right From Wrong (co-authored with Colin Allen of Indiana University), which discusses issues in AI ethics and machine morality. [15] It is considered[by whom?] the "first book to examine the challenge of building artificial moral agents." [15] In 2015 Wallach became a senior advisor on synthetic biology to The Hastings Center, [16] which is an "independent, nonpartisan, interdisciplinary research institute" focused on "social and ethical issues in health care, science, and technology." [17] Wallach received the World Technology Network award for ethics in 2014. [18] He also won the World Technology Network award for media and journalism in 2015, [19] in recognition of his second book, A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, [20] which discusses the ethics and governance of various emerging technologies. [20] In this book, Wallach argues that "technological development is at risk of becoming a juggernaut beyond human control," and proposes "solutions for regaining control of our technological destiny." [20] In 2015, he received a grant from Elon Musk and the Future of Life Institute for a project titled "Control and Responsible Innovation in the Development of Autonomous Machines". [21] [22]
Wallach is the editor of the Library of Essays on Ethics and Emerging Technologies, [23] [24] for which he co-edited a volume on Robot Ethics and Machine Ethics with Peter Asaro [25] and the volume Emerging Technologies: Ethics, Law, and Governance with Gary Marchant. [26] He received a Fulbright Scholarship as a Visiting Research Chair at the University of Ottawa for 2015–2016, [27] and in 2018 he was named the Distinguished Austin J. Fagothey Visiting Professor at Santa Clara University. [28] Wallach was appointed by the World Economic Forum (WEF) to co-chair the Global Future Council on Technology, Values, and Policy for the 2016–2018 term. [29] He also sits on the WEF AI council (2018–present), [30] and is the lead organizer for the International Congress for the Governance of AI. [31]
In 2016, Wallach gave testimony at the United Nations (UN) Third Convention on Certain Conventional Weapons (CCW) Meeting of Experts on the issue of predictability in lethal autonomous weapons systems. [32] The testimony argued that "while increasing autonomy, improving intelligence, and machine learning can boost the system's accuracy in performing certain tasks, they can also increase the unpredictability in how a system performs overall. Risk will rise relative to the power of the munitions the system can discharge." [32] He later served as a member of the UN Global Pulse Expert Group on Governance and Data of AI in 2019, which called for responsible development of artificial intelligence and other emerging technologies to reach the UN's 2030 Sustainable Development Goals. [33] In addition, he served as an advisor to the Secretary-General's High-Level Panel on Digital Cooperation, and was cited in its 2019 report "The Age of Digital Interdependence." [34]
Wallach is married to Nancy Wallach, and they live in Bloomfield, Connecticut. [2] His hobbies include skiing, hiking, and building stained glass windows. [2] [3]