Marina Jirotka

Marina Denise Anne Jirotka [1] is professor of human-centred computing at the University of Oxford, director of the Responsible Technology Institute, governing body fellow at St Cross College, [2] board member of the Society for Computers and Law [3] and a research associate at the Oxford Internet Institute. [4] She leads a team that works on responsible innovation in a range of ICT fields, including robotics, AI, machine learning, quantum computing, social media and the digital economy. She is known for her work with Alan Winfield on the 'Ethical Black Box', [5] [6] a proposal that robots using AI should be fitted with a type of in-flight recorder, similar to those used by aircraft, to track the decisions and actions of the AI when operating in an uncontrolled environment and to aid in post-accident investigations. [5] [7]

Education

Jirotka obtained her BSc in psychology and social anthropology from Goldsmiths College in 1985 and her master's in computing and artificial intelligence from South Bank University in 1987. Her doctorate in computer science, An Investigation into Contextual Approaches to Requirements Capture, [8] was completed at the University of Oxford in 2000.

Career

In 1987 Jirotka was appointed as a research fellow in the Social and Computer Sciences Research Group at the University of Surrey. In 1991 she joined the University of Oxford as a senior researcher in the Department of Computer Science, becoming a university lecturer and governing body fellow of St Cross College in 2003. In 2008 she became reader in requirements engineering, and she was promoted to professor of human-centred computing in 2014.[citation needed]

Research

Jirotka's most recent work centres on the 'ethical black box' - a way of making algorithmic decisions explainable after an unexpected event or accident. [5] [9] The model for this approach is the aviation industry, which uses in-flight recording systems to provide evidence after an accident. This approach to explainability in robotics developed from Jirotka's earlier work on Responsible Research and Innovation (RRI), an approach to research supported by the EU's Horizon 2020 programme [10] and the UK research councils, particularly the EPSRC. RRI uses reflection, stakeholder involvement and anticipatory governance to try to ameliorate the potential negative effects of research and development. It is considered particularly applicable in ICT disciplines, where failing to consider possible negative outcomes can have serious repercussions. Jirotka contributed significantly to the evolution of RRI through her work on the Framework for Responsible Research and Innovation in ICT (FRRIICT) project, which was adopted for rollout in the UK by the EPSRC. [11]
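The cited papers describe the ethical black box at the level of requirements - what a robot should record and why - rather than as a concrete implementation. The sketch below is therefore only illustrative: it shows one possible append-only decision log in Python, with invented names such as EthicalBlackBox and DecisionRecord that are not drawn from Winfield and Jirotka's work.

```python
# Purely illustrative sketch: the class and field names below are invented
# for this example and do not come from Winfield and Jirotka's proposal.
import json
import time
from dataclasses import dataclass, asdict
from typing import Any, Dict, List


@dataclass
class DecisionRecord:
    """One timestamped entry: what the robot sensed, decided and did."""
    timestamp: float
    sensor_inputs: Dict[str, Any]
    internal_state: Dict[str, Any]
    decision: str
    actuator_commands: Dict[str, Any]


class EthicalBlackBox:
    """Append-only recorder, loosely analogous to an aircraft flight recorder."""

    def __init__(self, log_path: str):
        self.log_path = log_path

    def record(self, rec: DecisionRecord) -> None:
        # One JSON line per decision, appended immediately so the log
        # survives a crash of the controlling process mid-run.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

    def replay(self) -> List[DecisionRecord]:
        # Read the log back for post-incident analysis.
        with open(self.log_path) as f:
            return [DecisionRecord(**json.loads(line)) for line in f]


if __name__ == "__main__":
    ebb = EthicalBlackBox("robot_run.jsonl")
    ebb.record(DecisionRecord(
        timestamp=time.time(),
        sensor_inputs={"lidar_min_range_m": 0.4},
        internal_state={"mode": "navigate"},
        decision="emergency_stop",
        actuator_commands={"wheel_velocity": 0.0},
    ))
    for rec in ebb.replay():
        print(rec.timestamp, rec.decision, rec.sensor_inputs)
```

The design choice mirrored from flight recorders is that entries are written as events happen and never modified afterwards, so an investigator can later reconstruct the sequence of sensed inputs and decisions that preceded a failure.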

Jirotka's other projects include:

UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy [12]
ReEnTrust: Rebuilding and Enhancing Trust in Algorithms [13]
RoboTIPS: Developing Responsible Robots for the Digital Economy [14]

Expert opinion

Jirotka has frequently given evidence to Select Committees, Advisory Boards, All-Party Parliamentary Groups and industry bodies. She sits on the Steering Committee of the APPG on Data Analytics [15] and the Advisory Board of the Society for Computers and Law. She regularly appears on expert panels to discuss ethical approaches to innovation [16] and is also an international speaker on the issues arising from a lack of diversity in science. [17]

Novel contributions

Along with members of her team, Jirotka formulated the concept of the Ethical Hackathon. [18] This is a variant of the traditional hackathon that additionally incorporates a focus on ethical issues, such as assessing a project's impact on, for example, minority groups or vulnerable users. The concept was trialled during work on the UnBias project and has since been used in a Zimbabwe LabHack [19] and in training doctoral students at Oxford.

Publications

Related Research Articles

Kevin Warwick - British engineer and robotics researcher

Kevin Warwick is an English engineer and Deputy Vice-Chancellor (Research) at Coventry University. He is known for his studies on direct interfaces between computer systems and the human nervous system, and has also done research concerning robotics.

Laws of robotics are any set of laws, rules or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and are a topic of active research and development in the fields of robotics and artificial intelligence.

The Engineering and Physical Sciences Research Council (EPSRC) is a British Research Council that provides government funding for grants to undertake research and postgraduate degrees in engineering and the physical sciences, mainly to universities in the United Kingdom. EPSRC research areas include mathematics, physics, chemistry, artificial intelligence and computer science, but exclude particle physics, nuclear physics, space science and astronomy. Since 2018 it has been part of UK Research and Innovation, which is funded through the Department for Business, Energy and Industrial Strategy.

The Science and Technology Facilities Council (STFC) is a United Kingdom government agency that carries out research in science and engineering, and funds UK research in areas including particle physics, nuclear physics, space science and astronomy.

Ethics of artificial intelligence - Ethical issues specific to AI

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.

Nigel Shadbolt - Principal of Jesus College, Oxford

Sir Nigel Richard Shadbolt is Principal of Jesus College, Oxford, and Professorial Research Fellow in the Department of Computer Science, University of Oxford. He is chairman of the Open Data Institute which he co-founded with Tim Berners-Lee. He is also a visiting professor in the School of Electronics and Computer Science at the University of Southampton. Shadbolt is an interdisciplinary researcher, policy expert and commentator. His research focuses on understanding how intelligent behaviour is embodied and emerges in humans, machines and, most recently, on the Web, and has made contributions to the fields of Psychology, Cognitive science, Computational neuroscience, Artificial Intelligence (AI), Computer science and the emerging field of Web science.

The Human Brain Project (HBP) was a large ten-year scientific research project, based on exascale supercomputers, that aimed to build a collaborative ICT-based scientific research infrastructure to allow researchers across Europe to advance knowledge in the fields of neuroscience, computing, and brain-related medicine.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

The UK Large-Scale Complex IT Systems (LSCITS) Initiative is a research and graduate education programme focusing on the problems of developing large-scale, complex IT systems. The initiative is funded by the EPSRC, with more than ten million pounds of funding awarded between 2006 and 2013.

Alan Turing Institute - Research institute in Britain

The Alan Turing Institute is the United Kingdom's national institute for data science and artificial intelligence, founded in 2015 and largely funded by the UK government. It is named after Alan Turing, the British mathematician and computing pioneer.

Responsible Research and Innovation (RRI) is a term used by the European Union's Framework Programmes to describe scientific research and technological development processes that take into account effects and potential impacts on the environment and society. It gained visibility around the year 2010, arising from predecessors including "ELSA" studies prompted by the Human Genome Project. Various slightly different definitions of RRI emerged, but all of them agree that societal challenges should be a primary focus of scientific research, and moreover they agree upon the methods by which that goal should be achieved. RRI involves holding research to high ethical standards, ensuring gender equality in the scientific community, investing policy-makers with the responsibility to avoid harmful effects of innovation, engaging the communities affected by innovation and ensuring that they have the knowledge necessary to understand the implications by furthering science education and Open Access. Organizations that adopted the RRI terminology include the Engineering and Physical Sciences Research Council.

Artificial intelligence in healthcare - Overview of the use of artificial intelligence in healthcare

Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.

Joanna Bryson - Researcher and Professor of Ethics and Technology

Joanna Joy Bryson is a professor at the Hertie School in Berlin. She works on artificial intelligence, ethics and collaborative cognition. She has been a British citizen since 2007.

Tom Rodden

Tom Rodden is Chief Scientific Adviser for the UK Government's Department for Culture, Media and Sport. He was previously Deputy Chief Executive of the Engineering and Physical Sciences Research Council (EPSRC). He is Professor of Computing at the University of Nottingham and co-director of the Mixed Reality Laboratory, an interdisciplinary research facility. In 2008, as a member of the UK Research Assessment Exercise 2008 computing panel, he was responsible for assessing the international quality of computer science research across all UK departments. In 2014 he served on the Research Excellence Framework assessment panel for computing, and he was deputy chair of the Hong Kong RAE 2014 computing panel.

Elham Kashefi is a Professor of Computer Science and Personal Chair in quantum computing at the School of Informatics at the University of Edinburgh, and a Centre national de la recherche scientifique (CNRS) researcher at the Sorbonne University. Her work has included contributions to quantum cryptography, verification of quantum computing, and cloud quantum computing.

The Rosalind Franklin Institute is a medical research centre supported by the Government of the United Kingdom, located at the Harwell Science and Innovation Campus, Oxfordshire, England. It is named after the English chemist Rosalind Franklin, whose discoveries provided the key data for the correct explanation of the helical structure of DNA in 1953. Launched on 6 June 2018, it was officially opened on 29 September 2021.

The UK Infrastructure Transitions Research Consortium (ITRC) was established in January 2011. The ITRC provides data and modelling to help governments, policymakers and other stakeholders in infrastructure make more sustainable and resilient infrastructure decisions. It is a collaboration between seven universities and more than 55 partners from infrastructure policy and practice.

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.

Alan Winfield - British engineer and educator

Alan Winfield is a British engineer and educator. He is Professor of Robot Ethics at UWE Bristol, Honorary Professor at the University of York, and Associate Fellow in the Cambridge Centre for the Future of Intelligence. He chairs the advisory board of the Responsible Technology Institute, University of Oxford.

References

  1. "Marina Jirotka". Department of Computer Science.
  2. "Professor Marina Jirotka". 13 January 2023.
  3. "SCL: Home". www.scl.org.
  4. "Dr Marina Jirotka — Oxford Internet Institute". www.oii.ox.ac.uk.
  5. Sample, Ian (19 July 2017). "Give robots an 'ethical black box' to track and explain decisions, say scientists". The Guardian. Retrieved 27 December 2018.
  6. Lant, Karla. "Experts Want Robots to Have an "Ethical Black Box" That Explains Their Decision-Making". Futurism. Retrieved 27 December 2018.
  7. Winfield, Alan F. T.; Jirotka, Marina (2017). "The Case for an Ethical Black Box". Towards Autonomous Robotic Systems: 18th Annual Conference, TAROS 2017, Guildford, UK, July 19-21, 2017. Lecture Notes in Computer Science, Vol. 10454. Springer. pp. 262–273. doi:10.1007/978-3-319-64107-2_21. ISBN 978-3-319-64106-5.
  8. Jirotka, Marina (2001). An investigation into contextual approaches to requirements capture (Thesis). University of Oxford.
  9. "Experts Want Robots to Have an "Ethical Black Box" That Explains Their Decision-Making". Futurism.
  10. "Responsible research & innovation". Horizon 2020 - European Commission. 1 April 2014.
  11. "Framework for Responsible Research and Innovation in ICT - EPSRC website". epsrc.ukri.org. 18 October 2023.
  12. EPSRC. "Grant: UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy". gow.epsrc.ukri.org.
  13. EPSRC. "Grant: ReEnTrust: Rebuilding and Enhancing Trust in Algorithms". gow.epsrc.ukri.org.
  14. EPSRC. "Grant: RoboTIPS: Developing Responsible Robots for the Digital Economy". gow.epsrc.ukri.org.
  15. "All-Party Parliamentary Group on Data Analytics launches landmark enquiry into data and technology ethics - Press Releases - Orbit RRI". 27 November 2018.
  16. "University of Oxford". www.facebook.com.
  17. "Marina Jirotka". The Institute of Physics blog.
  18. "The Ethical Hackathon encapsulates ethics and design challenge". Department of Computer Science.
  19. "School Newsletter 2018 - Projects - School of Anthropology & Museum Ethnography". www.anthro.ox.ac.uk.