Joanna Joy Bryson | |
---|---|
Born | 1965 (age 57–58) |
Known for | Artificial intelligence |
Academic background | |
Education | University of Chicago; University of Edinburgh; MIT |
Thesis | Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents (2001) |
Doctoral advisor | Lynn Andrea Stein |
Other advisors | Marc Hauser |
Academic work | |
Institutions | Lego; University of Bath; Hertie School |
Website | www |
Joanna Joy Bryson (born 1965) is a professor at the Hertie School in Berlin. She works on artificial intelligence, ethics and collaborative cognition. She has been a British citizen since 2007.
Bryson attended Glenbard North High School, graduating in 1982. [1] She studied behavioural science at the University of Chicago, graduating with an AB in 1986. [2] In 1991 she moved to the University of Edinburgh, where she completed an MSc in Artificial Intelligence followed by an MPhil in Psychology. [3] Bryson then moved to MIT, earning a doctorate under Lynn Andrea Stein in 2001 for her thesis "Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents". [4] In 1995 she worked for LEGO Futura in Boston, and in 1998 for LEGO Digital as an AI consultant, working with Kristinn R. Thórisson on cognitive architectures for autonomous LEGO characters in the Wizard Group. She completed a postdoctoral fellowship in Marc Hauser's Primate Cognitive Neuroscience laboratory at Harvard University in 2002. [5]
Bryson joined the Department of Computer Science at the University of Bath in 2002. [6] At Bath, Bryson founded the Intelligent Systems research group. [7] [8] In 2007 she joined the University of Nottingham as a visiting research fellow in the Methods and Data Institute. [9] During this time, she was a Hans Przibram Fellow at the Konrad Lorenz Institute for Evolution and Cognition. [9] She joined Oxford University as a visiting research fellow in 2010, working with Harvey Whitehouse on the impact of religion on societies. [9] [10]
In 2010 Bryson published "Robots Should Be Slaves", which was included as a chapter in Yorick Wilks' Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues. [11] [12] She helped the EPSRC to define the Principles of Robotics in 2010. [13] In 2015 she was a Visiting Academic at the Princeton University Center for Information Technology Policy, where she remained an affiliate through 2018. [14] Her work focuses on "Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems". [15] In 2020 she became Professor of Ethics and Technology at the Hertie School in Berlin. [16]
Bryson's research has appeared in Science and has been discussed on Reddit. [17] [18] She has consulted for the Red Cross on autonomous weapons and contributed to an All-Party Parliamentary Group on Artificial Intelligence. [19]
In 2022, Bryson published an article in Wired magazine titled "One Day, AI Will Seem as Human as Anyone. What Then?". In the article she discussed the current limits and future of AI, how the general public defines and thinks about AI, and how AI interacts with people through language, touching on the topics of natural language processing, ethics and human–computer interaction. Bryson also discussed the EU AI Act. [20]
In 2017, Bryson won an Outstanding Achievement award from Cognition X. [21] She regularly appears in national media, talking about human–robot relationships and the ethics of AI. [22] [23]
An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computer programs or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure that future superintelligent machines remain under human control.
Laws of robotics are any set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and are a topic of active research and development in the fields of robotics and artificial intelligence.
Noel Sharkey is a computer scientist born in Belfast, Northern Ireland. He is best known to the British public for his television appearances as an expert on robotics, including the BBC Two series Robot Wars and Techno Games, and for co-hosting Bright Sparks for BBC Northern Ireland. He is emeritus professor of artificial intelligence and robotics at the University of Sheffield.
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots as robots become increasingly advanced. Robot ethics is a sub-field of the ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of the machines themselves, known as machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.
Manuela Maria Veloso is the Head of J.P. Morgan AI Research and Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University, where she was previously Head of the Machine Learning Department. She served as president of the Association for the Advancement of Artificial Intelligence (AAAI) until 2014, and is a co-founder and past president of the RoboCup Federation. She is a fellow of AAAI, the Institute of Electrical and Electronics Engineers (IEEE), the American Association for the Advancement of Science (AAAS), and the Association for Computing Machinery (ACM). She is an international expert in artificial intelligence and robotics.
Dr Blay Whitby is a philosopher and technology ethicist specialising in computer science, artificial intelligence and robotics. He is based at the University of Sussex, England.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.
Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. As of 2018, the autonomy of current systems was restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.
Nicholas Robert Jennings is a British computer scientist and the current Vice-Chancellor and President of Loughborough University. He was previously the Vice-Provost for Research and Enterprise at Imperial College London, the UK's first Regius Professor of Computer Science, and the inaugural Chief Scientific Adviser to the UK Government on National Security. His research covers the areas of AI, autonomous systems, agent-based computing and cybersecurity. He is involved in a number of startups including Aerogility, Contact Engine, Crossword Cyber Security, and Reliance Cyber Science. He is also an adviser to Darktrace, a member of the UK Government's AI Council, chair of the National Engineering Policy Centre and a council member for the Engineering and Physical Sciences Research Council.
Shannon Vallor is a philosopher of technology. She is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She was previously the Regis and Dianne McKenna Professor of Philosophy at Santa Clara University in Santa Clara, California.
Kate Devlin (born Adela Katharine Devlin) is a British computer scientist specialising in artificial intelligence and human–computer interaction (HCI). She is best known for her work on human sexuality and robotics: she was co-chair of the annual Love and Sex With Robots convention held in London in 2016, and founded the UK's first ever sex tech hackathon, held in 2016 at Goldsmiths, University of London. She is Senior Lecturer in Social and Cultural Artificial Intelligence in the Department of Digital Humanities, King's College London, and is the author of Turned On: Science, Sex and Robots, in addition to several academic papers.
Aimee van Wynsberghe is an AI ethicist at the University of Bonn in Bonn, Germany. She is also the president and co-founder of the Foundation for Responsible Robotics, a not-for-profit NGO that advocates for the ethical design and production of robots.
Marina Denise Anne Jirotka is professor of human-centered computing at the University of Oxford, director of the Responsible Technology Institute, governing body fellow at St Cross College, board member of the Society for Computers and Law and a research associate at the Oxford Internet Institute. She leads a team that works on responsible innovation in a range of ICT fields, including robotics, AI, machine learning, quantum computing, social media and the digital economy. She is known for her work on the 'Ethical Black Box', a proposal that robots using AI should be fitted with a type of inflight recorder, similar to those used by aircraft, to track the decisions and actions of the AI when operating in an uncontrolled environment and to aid in post-accident investigations.
Maria Virgínia Ferreira de Almeida Júdice Gamito Dignum is a Professor of Computer Science at Umeå University and an Associate Professor at Delft University of Technology. She leads the Social and Ethical Artificial Intelligence research group. Her research and writing consider responsible AI and the development and evaluation of human–agent teamwork, aligning with human-centered artificial intelligence themes.
Sandra Wachter is a professor and senior researcher in data ethics, artificial intelligence, robotics, algorithms and regulation at the Oxford Internet Institute. She is a former Fellow of The Alan Turing Institute.
The regulation of artificial intelligence is the development of public-sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, the OECD and others. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and to take accountability for mitigating the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.
Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. He is a scholar at Yale University's Interdisciplinary Center for Bioethics, a senior advisor to The Hastings Center, a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs, and a fellow at the Center for Law and Innovation at the Sandra Day O'Connor College of Law at Arizona State University. He has written two books on the ethics of emerging technologies: Moral Machines: Teaching Robots Right from Wrong (2010) and A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (2015). He has discussed his professional, personal and spiritual journey, as well as some of the biggest conundrums facing humanity in the wake of the bio/digital revolution, in a podcast published by the Carnegie Council for Ethics in International Affairs (CCEIA).
Kay Firth-Butterfield is a lawyer, professor, and author specializing in the intersection of artificial intelligence, international relations, and AI ethics. She is currently serving as the head of AI and machine learning at the World Economic Forum. She was an adjunct professor of law at the University of Texas at Austin.
Buddhism offers unique perspectives and insights on the recent emergence of artificial intelligence (AI). Numerous groups and organizations have issued ethical guidelines for artificial intelligence development, endeavouring to make AI systems reflect human ethical and moral values. However, current contributions to AI ethical guidelines are unevenly distributed: the majority are made by private companies and governmental agencies from more economically developed countries. This raises concerns regarding diversity and inclusion, as existing AI development principles mostly reflect Western values. It also highlights the potential to include non-Western views, such as Buddhism, in AI ethical development.