| Ajung Moon | |
| --- | --- |
| Born | South Korea |
| Education | University of British Columbia |
| Occupation(s) | Assistant professor, experimental roboticist |
| Employer | McGill University |
Ajung Moon is a Korean-Canadian experimental roboticist [1] specializing in ethics and responsible design of interactive robots and autonomous intelligent systems. She is an assistant professor of electrical and computer engineering at McGill University and the Director of the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab. Her research interests lie in human-robot interaction, AI ethics, and robot ethics. [2]
Prior to joining McGill University, she served as a senior advisor to the UN Secretary-General's High-level Panel on Digital Cooperation and ran a start-up AI ethics consultancy, Generation R Consulting. [3] She also founded the nonprofit Open Roboethics Institute. She currently serves on the Government of Canada's Advisory Council on Artificial Intelligence, among other advisory bodies. [4]
Originally from Gyeongsangnam-do, South Korea, Moon received her Doctor of Philosophy in Mechanical Engineering from the University of British Columbia in 2014, [5] focusing on human-robot interaction and robot ethics. According to Moon, relationships between humans and machines will need to account for potential conflicts between the two: people naturally negotiate toward solutions, whereas robots cannot understand morals or how moral considerations should weigh into their negotiations and decisions. In 2012, she completed her M.A.Sc. thesis at the University of British Columbia, What Should a Robot Do?: Design and Implementation of Human-like Hesitation Gestures as a Response Mechanism for Human-robot Resource Conflicts. [6] Her Ph.D. thesis focused on the "interactive paradigm of human-robot conflict resolution". [7]
Moon is currently an assistant professor in the Department of Electrical and Computer Engineering at McGill University. [8]