Alan Winfield

Alan Winfield
Winfield in 2016
Born 1956 (age 67–68)
Alma mater University of Hull
Scientific career
Thesis Maximum-Likelihood Sequential Decoding of Convolutional Error-Correcting Codes (1984)
Doctoral advisor Dr Rodney Goodman
Website https://people.uwe.ac.uk/Person/AlanWinfield

Alan Winfield CEng (born 1956) is a British engineer and educator. [1] He is Professor of Robot Ethics at UWE Bristol, [2] Honorary Professor at the University of York, [3] and Associate Fellow in the Cambridge Centre for the Future of Intelligence. [4] He chairs the advisory board of the Responsible Technology Institute, University of Oxford. [5]

Winfield is known for research in swarm robotics, [6] [7] [8] [9] robots that model cultural evolution, [10] [11] [12] and self-modelling robots, including ethical robots. [13] [14] [15] [16] [12] He is also known for advocacy and standards development in robot and AI ethics, [17] [18] [19] [20] and for proposing that all robots should be equipped with the equivalent of a flight data recorder. [21]

Early life and education

Winfield was born in Burton upon Trent, where he attended Burton Grammar School. [22] He studied electronic engineering, majoring in telecommunications, for both his BSc and PhD at the University of Hull, from 1974 to 1984. Following his first degree he won an SERC scholarship for doctoral study in information theory and error-correcting codes under the supervision of Rodney Goodman. [23]

Career

Winfield's first faculty appointment was as a lecturer in the department of electronic engineering at the University of Hull, from 1981 to 1984. During this period he wrote a guide to the programming language Forth, The Complete Forth (Wiley, 1983). [24] Winfield also invented an architecture for executing Forth natively at machine level. [25]

In 1984 Winfield resigned his lectureship and founded, with Rod Goodman, Metaforth Computer Systems Ltd, with the aim of commercializing the Forth machine. [26] [27]

In 1992 Winfield was appointed Hewlett-Packard Professor of Electronic Engineering and Associate Dean (Research) at UWE, Bristol, [28] where he co-founded the Bristol Robotics Laboratory. From 2009 to 2016 he was director of UWE's Science Communication Unit. [29]

Winfield is a member of the editorial boards of the Journal of Experimental and Theoretical Artificial Intelligence [30] and the Journal of AI and Ethics. [31] He is also an associate editor of Frontiers in Robotics and AI. [32]

Public engagement

From 2006 to 2009, with Noel Sharkey, Owen Holland and Frank Burnet, [33] Winfield led the public engagement project Walking with Robots. [34] The project was designed to encourage children into science and technology careers, and to involve the public in discussions about robotics research issues. [35] In 2010 Walking with Robots was awarded the Royal Academy of Engineering Rooke Medal for public promotion of engineering. [36]

In 2009 Winfield won an EPSRC Senior Media Fellowship to support and develop his engagement with the press and media. [37] During the fellowship he wrote the popular science book Robotics: A Very Short Introduction (Oxford University Press, 2012). [38]

Winfield has given public lectures and taken part in panel debates, including: the British Academy debate 'Does AI pose a threat to society?' with Maja Pantic, Samantha Payne and Christian List, chaired by Claire Craig; [39] [19] lectures and a Q&A with Raja Chatila at the Royal Institution; [40] [41] talks and a Q&A with Ron Arkin at 'Smarter Together: Why AI Needs Human-Choice?' in Seoul; [42] the CaSE Annual Lecture with Jim Al-Khalili and Wendy Hall at the Institute of Physics; [43] and the keynote lecture for the 15th Appleton Space Conference at the Rutherford Appleton Laboratory. [44]

In February 2017 Winfield was a guest of Jim Al-Khalili on BBC Radio 4's The Life Scientific, [45] and in October 2017 he was interviewed by Stephen Sackur for BBC TV HARDtalk. [46] [47]

Robot and AI Ethics

In 2010 Winfield was part of the cross-disciplinary group that drafted the EPSRC/AHRC Principles of Robotics. [48] [49] Inspired by Asimov's Laws of Robotics, the principles take the position that "robots are simply tools, for which humans must take responsibility". [50] In 2012 Winfield joined the British Standards Institution working group on robot ethics, [51] which drafted BS 8611:2016 Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems. [18]

From 2015 to 2018 Winfield was a member of the Ethics Advisory Board of the EU Human Brain Project. [52] Between 2016 and 2018 he served as a member of the World Economic Forum Global Future Council on Technology, Values and Policy. [53] Winfield has given evidence to both Commons and Lords select committee inquiries on artificial intelligence in the UK parliament. [54] [55] He served as an expert advisor to the NHS Health Education England Topol Review, Preparing the healthcare workforce to deliver the digital future. [56]

In 2016 Winfield joined the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. As chair of the General Principles group [57] he helped to draft Ethically Aligned Design. [58] He is a member of the initiative's executive committee, [59] and chaired the working group that drafted IEEE Standard 7001-2021 on Transparency of Autonomous Systems. [60] Winfield received an IEEE Special Recognition Award in 2021. [61]

His work has been reported by the BBC, [11] [46] New Scientist, [13] The Guardian, [21] The Telegraph, [62] Nature, [15] and Scientific American. [16]

Selected publications

References

  1. "Alan Winfield interviewed by Peter Asaro for the IEEE Robotics and Automation Society Robotics History project". ieee-ras.org. Retrieved 9 May 2023.
  2. "Professor Alan Winfield". uwe.ac.uk. Retrieved 9 May 2023.
  3. "School of Physics, Engineering and Technology". york.ac.uk. Retrieved 24 May 2023.
  4. "Alan Winfield Associate Fellow". lcfi.ac.uk. Retrieved 9 May 2023.
  5. "Responsible Technology Institute Advisory Board". ox.ac.uk. Retrieved 3 August 2023.
  6. "Robots with a mind of their own". ITV News. 13 March 2008. Retrieved 8 July 2023.
  7. "Material World". BBC Radio 4. 8 May 2008. Retrieved 8 July 2023.
  8. "Hive hopes". The Engineer. 16 June 2008. Retrieved 8 July 2023.
  9. "In Interview: Alan Winfield". sciencemuseum.org.uk. Science Museum. 30 November 2011. Retrieved 8 July 2023.
  10. "Will Big Brother be cultural watershed for robots?". Times Higher Education. 27 April 2007. Retrieved 8 July 2023.
  11. Mark Ward (8 June 2012). "Dancing robots reveal cultural cues". BBC News. Retrieved 1 June 2023.
  12. Brian Gallagher (23 March 2022). "Robots Show Us Who We Are". Nautilus. Retrieved 1 June 2023.
  13. Aviva Rutkin (10 September 2014). "Ethical trap: robot paralysed by choice of who to save". New Scientist. Retrieved 9 June 2023.
  14. Soline Roy (14 November 2014). "Un robot face à un dilemme". Le Figaro (in French). Retrieved 9 July 2023.
  15. Boer Deng (1 July 2015). "Machine ethics: The robot's dilemma". Nature. 523 (7558): 24–26. Bibcode:2015Natur.523...24D. doi:10.1038/523024a. PMID 26135432. S2CID 4459500.
  16. Chris Baraniuk (17 August 2018). "How to Make a Robot Use Theory of Mind". Scientific American. Retrieved 2 June 2023.
  17. Tessel Renzenbrink (22 January 2016). "Ethical Robots and Robot Ethics". Elektor. Retrieved 1 June 2023.
  18. Hannah Devlin (18 September 2016). "Do no harm, don't discriminate: official guidance issued on robot ethics". The Guardian. Retrieved 11 July 2023.
  19. Sameer Rahim (20 March 2017). "Does AI pose a threat to society?". Prospect Magazine. Retrieved 8 July 2023.
  20. Katie Strick (31 May 2023). "Is the AI apocalypse actually coming? What life could look like if robots take over". London Evening Standard. Retrieved 8 July 2023.
  21. Ian Sample (19 July 2017). "Give robots an 'ethical black box' to track and explain decisions, say scientists". The Guardian. Retrieved 2 June 2023.
  22. "Burton Grammar School Old Boys' Association". burtongrammar.co.uk. Retrieved 9 May 2023.
  23. "Rodney M. Goodman Curriculum Vitae" (PDF). rod.goodman.name. Retrieved 9 May 2023.
  24. Alan Winfield (1983). The Complete Forth. Wiley. ISBN 9780471882350. Retrieved 9 May 2023.
  25. "United States Patent no 4,974,157, Data Processing System" (PDF). Retrieved 9 May 2023.
  26. "APD Communications Ltd". company-information.service.gov.uk. Retrieved 11 May 2023.
  27. Dick Pountain (March 1985). "Byte UK: Multitasking Forth". Byte. McGraw-Hill. pp. 363–371. Retrieved 12 May 2023.
  28. "Alan FT Winfield". 19 January 2004. Archived from the original on 12 April 2005. Retrieved 19 July 2023.
  29. "Science Communication Unit members". uwe.ac.uk. Archived from the original on 22 April 2016. Retrieved 12 May 2023.
  30. "JETAI Editorial Board". tandfonline.com. Retrieved 13 July 2023.
  31. "AI and Ethics Editors". springer.com. Retrieved 13 July 2023.
  32. "Frontiers Learning and Evolution Editors". frontiersin.org. Retrieved 13 July 2023.
  33. "Frank Burnet". linkedin.com. Retrieved 31 May 2023.
  34. "EPSRC Grants on the web". epsrc.ukri.org. Retrieved 31 May 2023.
  35. Christine Evans-Pughe (4 April 2007). "Masters of their fate?". Engineering and Technology. Retrieved 31 May 2023.
  36. "RAEng Rooke Medal previous winners". raeng.org.uk. Retrieved 31 May 2023.
  37. "EPSRC Grants on the Web". epsrc.ukri.org. Retrieved 3 June 2023.
  38. Alan Winfield (27 September 2012). Robotics: A Very Short Introduction. Oxford University Press. doi:10.1093/actrade/9780199695980.001.0001. ISBN 978-0-19-969598-0. Retrieved 3 June 2023.
  39. "Does AI pose a threat to society?". thebritishacademy.ac.uk. 1 March 2017. Retrieved 19 July 2023.
  40. "Robot Ethics in the 21st Century". YouTube. 22 June 2017. Retrieved 18 June 2023.
  41. "Q&A Robot Ethics in the 21st Century". YouTube. 22 June 2017. Retrieved 18 June 2023.
  42. "Robot Ethics: from principles to policy". sisain.co.kr. 14 August 2018. Retrieved 18 July 2023.
  43. "CaSE Annual Lecture 2018: 'Making Artificial Intelligence A Reality'". sciencecampaign.org.uk. 20 December 2018. Retrieved 18 June 2023.
  44. "15th Appleton Space Conference". ralspace.stfc.ac.uk. 5 December 2019. Retrieved 18 June 2023.
  45. "The Life Scientific". bbc.co.uk. 21 February 2017. Retrieved 31 May 2023.
  46. "Winfield HARDtalk clip 'We need to worry about artificial stupidity'". BBC News. 31 October 2017. Retrieved 31 May 2023.
  47. "HARDtalk full interview Alan Winfield". youtube.com. 31 October 2017. Retrieved 3 June 2023.
  48. "Principles of robotics". nationalarchives.gov.uk. Retrieved 9 June 2023.
  49. Alan Winfield (4 May 2011). "Five roboethical principles – for humans". New Scientist. Archived from the original on 13 April 2016. Retrieved 9 June 2023.
  50. Benjamin Kuipers (2 June 2016). "Beyond Asimov: how to plan for ethical robots". The Conversation. Retrieved 9 June 2023.
  51. "AMT/10/1 – Ethics for Robots and Autonomous Systems". bsigroup.com. BSI. Retrieved 11 July 2023.
  52. "The Ethics Advisory Board (EAB)". humanbrainproject.eu. Archived from the original on 2 February 2019. Retrieved 19 July 2023.
  53. "Network of Global Future Councils 2016–2018" (PDF). weforum.org. Retrieved 5 June 2023.
  54. "Written evidence submitted by Professor Alan Winfield (ROB0070)" (PDF). parliament.uk. February 2016. Retrieved 5 June 2023.
  55. "Lords Select Committee on Artificial Intelligence: oral evidence". parliament.uk. October 2017. Retrieved 5 June 2023.
  56. "Preparing the healthcare workforce to deliver the digital future" (PDF). hee.nhs.uk. February 2019. Retrieved 5 June 2023.
  57. "IEEE EAD First Edition Committees List" (PDF). standards.ieee.org. Retrieved 31 May 2023.
  58. "Ethically Aligned Design" (PDF). Retrieved 31 May 2023.
  59. "The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems" (PDF). standards.ieee.org. Retrieved 31 May 2023.
  60. "IEEE Standard 7001-2021 Transparency of Autonomous Systems". standards.ieee.org. 4 March 2022. Retrieved 31 May 2023.
  61. "2021 IEEE SA Awards – IEEE SA Managing Director's Special Recognition Award Given to Alan Winfield". YouTube. Retrieved 13 July 2023.
  62. Ellie Zolfagharifard (14 March 2021). "The British engineers creating robots that 'breed'". The Telegraph. Retrieved 1 June 2023.