| Alan Winfield | |
| --- | --- |
| Winfield in 2016 (photograph) | |
| Born | 1956 (age 67–68) |
| Alma mater | University of Hull |
| Scientific career | |
| Fields | Robotics, robot ethics, swarm robotics |
| Institutions | UWE Bristol, Bristol Robotics Laboratory |
| Thesis | Maximum-Likelihood Sequential Decoding of Convolutional Error-Correcting Codes (1984) |
| Doctoral advisor | Rodney Goodman |
| Website | https://people.uwe.ac.uk/Person/AlanWinfield |
Alan Winfield CEng (born 1956) is a British engineer and educator. [1] He is Professor of Robot Ethics at UWE Bristol, [2] Honorary Professor at the University of York, [3] and Associate Fellow in the Cambridge Centre for the Future of Intelligence. [4] He chairs the advisory board of the Responsible Technology Institute, University of Oxford. [5]
Winfield is known for research in swarm robotics, [6] [7] [8] [9] robots modelling cultural evolution, [10] [11] [12] and self-modelling (including ethical) robots. [13] [14] [15] [16] [12] He is also known for advocacy and standards development in robot and AI ethics, [17] [18] [19] [20] and for proposing that all robots should be equipped with the equivalent of a flight data recorder. [21]
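The flight-data-recorder proposal, developed with Marina Jirotka under the name "ethical black box", is that a robot should continuously log what it sensed, what it decided and what it did, so that accidents can be investigated after the fact. The sketch below is a minimal illustration of such a recorder, assuming a simple timestamped ring buffer; the class name, record fields and buffer design are hypothetical, not taken from Winfield's proposal or from any standard.

```python
# Illustrative sketch of a robot "flight data recorder" (ethical black box).
# All names and fields are hypothetical, chosen only to show the idea.
import json
import time
from collections import deque


class FlightDataRecorder:
    """Keep the most recent timestamped records of what a robot sensed,
    decided and did, for post-accident investigation."""

    def __init__(self, capacity: int = 10_000):
        # Ring buffer: once full, the oldest records are discarded,
        # as in an aircraft flight data recorder.
        self._log = deque(maxlen=capacity)

    def record(self, sensors: dict, decision: str, actuators: dict) -> None:
        self._log.append({
            "t": time.time(),        # when the record was made
            "sensors": sensors,      # raw sensor readings
            "decision": decision,    # the control decision taken
            "actuators": actuators,  # commands sent to the actuators
        })

    def dump(self) -> str:
        """Export the full log, e.g. for investigators after an incident."""
        return json.dumps(list(self._log), indent=2)


fdr = FlightDataRecorder()
fdr.record({"laser_min_m": 0.42}, "slow_down", {"wheel_speed_mps": 0.1})
```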
Winfield was born in Burton upon Trent, where he attended Burton Grammar School. [22] He studied electronic engineering, specialising in telecommunications, at the University of Hull from 1974 to 1984, taking both his BSc and PhD there. Following his first degree he won an SERC scholarship for doctoral study in information theory and error-correcting codes under the supervision of Rodney Goodman. [23]
Winfield's first faculty appointment was as a lecturer in the department of electronic engineering at the University of Hull, from 1981 to 1984. During this period he wrote a guide to the programming language Forth, The Complete Forth (Wiley, 1983). [24] Winfield also invented an architecture for executing Forth natively at machine level. [25]
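Forth is a stack-based language in which a program is a sequence of "words" that operate on an implicit data stack. As a rough illustration of that execution model (a hypothetical Python sketch, not drawn from The Complete Forth or from Winfield's Forth machine), the following evaluator runs a few core words against a stack:

```python
# Minimal sketch of Forth-style stack evaluation, for illustration only.

def forth_eval(source: str) -> list:
    """Evaluate a whitespace-separated string of Forth-like words."""
    stack = []
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "DUP":  lambda s: s.append(s[-1]),       # duplicate the top item
        "SWAP": lambda s: s.append(s.pop(-2)),   # exchange the top two items
    }
    for token in source.split():
        if token in words:
            words[token](stack)       # execute a known word
        else:
            stack.append(int(token))  # anything else is a number literal
    return stack

# "3 4 + DUP *" computes (3 + 4) squared, leaving 49 on the stack.
print(forth_eval("3 4 + DUP *"))  # [49]
```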
In 1984 Winfield resigned his lectureship and founded, with Rod Goodman, Metaforth Computer Systems Ltd, with the aim of commercializing the Forth machine. [26] [27]
In 1992 Winfield was appointed Hewlett-Packard Professor of Electronic Engineering and Associate Dean (Research) at UWE, Bristol, [28] where he co-founded the Bristol Robotics Laboratory. From 2009 to 2016 he was director of UWE's Science Communication Unit. [29]
Winfield is a member of the editorial boards of the Journal of Experimental and Theoretical Artificial Intelligence [30] and the journal AI and Ethics. [31] He is also an associate editor of Frontiers in Robotics and AI. [32]
From 2006 to 2009, with Noel Sharkey, Owen Holland and Frank Burnet, [33] Winfield led the public engagement project Walking with Robots. [34] The project was designed to encourage children into science and technology careers, and to involve the public in discussions about robotics research issues. [35] In 2010 Walking with Robots was awarded the Royal Academy of Engineering Rooke Medal for public promotion of engineering. [36]
In 2009 Winfield won an EPSRC Senior Media Fellowship to support and develop his engagement with the press and media. [37] During the fellowship Winfield wrote the popular science book Robotics: A Very Short Introduction (Oxford University Press, 2012). [38]
Winfield has given public lectures and taken part in panel debates, including: the British Academy debate 'Does AI pose a threat to society?' with Maja Pantic, Samantha Payne and Christian List, chaired by Claire Craig; [39] [19] lectures and Q&A with Raja Chatila at the Royal Institution; [40] [41] talks and Q&A with Ron Arkin at 'Smarter Together': Why AI Needs Human-Choice? in Seoul; [42] the CaSE Annual Lecture with Jim Al-Khalili and Wendy Hall at the Institute of Physics; [43] and the keynote lecture for the 15th Appleton Space Conference at the Rutherford Appleton Laboratory. [44]
In February 2017 Winfield was a guest of Jim Al-Khalili on BBC Radio 4's The Life Scientific, [45] and in October 2017 he was interviewed by Stephen Sackur for BBC TV HARDtalk. [46] [47]
In 2010 Winfield was part of a cross-disciplinary group that drafted the EPSRC/AHRC Principles of Robotics. [48] [49] Inspired by Asimov's Laws of Robotics, the principles take the position that "robots are simply tools, for which humans must take responsibility". [50] In 2012 Winfield joined the British Standards Institution working group on robot ethics, [51] which drafted BS 8611:2016 Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems. [18]
From 2015 to 2018 Winfield was a member of the Ethics Advisory Board of the EU Human Brain Project. [52] Between 2016 and 2018 he served as a member of the World Economic Forum Global Future Council on Technology, Values and Policy. [53] Winfield has given evidence to both Commons and Lords select committee inquiries on artificial intelligence in the UK Parliament. [54] [55] He served as an expert advisor to the NHS Health Education England Topol Review, Preparing the healthcare workforce to deliver the digital future. [56]
In 2016 Winfield joined the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. As chair of the General Principles group [57] he helped to draft Ethically Aligned Design. [58] He is a member of the initiative's executive committee, [59] and chaired the working group that drafted IEEE Standard 7001-2021 on Transparency of Autonomous Systems. [60] Winfield received an IEEE Special Recognition Award in 2021. [61]
His work has been reported by the BBC, [11] [46] New Scientist, [13] The Guardian, [21] The Telegraph, [62] Nature, [15] and Scientific American. [16]