Mariarosaria Taddeo is an Italian philosopher working on the ethics of digital technologies. She is Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford, and Dstl (Defence Science and Technology Laboratory) Ethics Fellow at the Alan Turing Institute, London.
Taddeo holds an MA in philosophy from the University of Bari and a PhD in philosophy from the University of Padua. Prior to joining the Oxford Internet Institute, she was a research fellow in cybersecurity and ethics at the University of Warwick. She also held a Marie Curie Fellowship at the University of Hertfordshire, exploring information warfare and its ethical implications.
Her recent work focuses on the ethics and governance of digital technologies, ranging from designing governance measures that leverage artificial intelligence (AI) to addressing the ethical challenges of using digital technologies in defence, the ethics of cybersecurity, and the governance of cyber conflicts. She has published more than 150 articles in this area, on topics such as trustworthy digital technologies, the governance of digital innovation, the ethical governance of AI for national defence, and the ethics of cybersecurity (the complete list of her publications is available here [1] ). Her work has appeared in major journals including Nature, Nature Machine Intelligence, Science, and Science Robotics.
Professor Taddeo has led and co-led several successful projects in the area of digital ethics. Most notably, she is the Principal Investigator (PI) of a current project on the 'Ethical Principles for the Use of AI for National Security and Defence', funded by Dstl (the UK Defence Science and Technology Laboratory). She was a Co-Investigator on an EPSRC project that funded the PETRAS IoT Research Hub, and PI on a project funded by the NATO Cooperative Cyber Defence Centre of Excellence (CCD COE) to define ethical guidance for the regulation of cyber conflicts.
In September 2023, Taddeo was awarded the Title of Distinction of Professor of Digital Ethics and Defence Technologies by the University of Oxford. [2]
Mariarosaria Taddeo focuses on the philosophical and ethical dimensions of technology, particularly digital technologies deployed for defence purposes. Her publications span digital governance, the responsibilities of technology providers, privacy, transparency, and the socially beneficial uses of AI, including its potential to advance sustainability and the UN Sustainable Development Goals, as well as the use of digital technologies for national security and defence purposes. She has published extensively on cyber deterrence, the use of AI for intelligence analysis, and the moral permissibility of autonomous weapon systems, addressing issues such as the definition of such systems, the attribution of moral responsibility for their actions, and the application of traditional just war principles to their use.
One of her notable works discusses ethical frameworks for AI in national defence, highlighting the need for principles that ensure AI's use in defence aligns with moral and ethical standards. Taddeo and her colleagues have identified five key principles for the ethical deployment of AI in defence: justified and overridable uses of AI, ensuring just and transparent systems and processes, maintaining human moral responsibility, ensuring meaningful human control over AI systems, and the reliability of AI systems. These principles are aimed at fostering the ethically sound use of AI in national defence, addressing the growing global efforts to develop or acquire AI capabilities without equivalent efforts to define ethical guidelines.
Taddeo's work is integral to the ongoing dialogue on the ethical use of technology in defence, offering insights into how nations can navigate the complex moral landscape presented by advances in digital and AI technologies. Her contributions inform the academic debate and help shape the ethical governance of emerging technologies in national security and defence contexts.
Since 2022, she has served on the Ethics Advisory Panel of the UK Ministry of Defence. She was one of the lead experts on the CEPS Task Force on 'Artificial Intelligence and Cybersecurity'; CEPS is a major European think tank informing EU policies on cybersecurity. Between 2018 and 2020, she represented the UK on the NATO Human Factors and Medicine Exploratory Team (NATO HFM ETS) 'Operational Ethics: Preparation and Interventions for the Future Security Environment'. Since 2016, she has served as editor-in-chief of Minds & Machines (SpringerNature). [3] Between 2016 and 2018, she was the Oxford Fellow at the World Economic Forum's Future Council for Cybersecurity, helping to identify the ethical and policy cybersecurity problems that could impair the development of future societies.
She has received multiple awards for her work on digital ethics.
Taddeo has published two books.
For videos, see Taddeo's official playlist: https://www.youtube.com/playlist?list=PLbnuCxVthpDrvxNkYE17VqPcY-EUdlxJO
For podcasts, see Guerre digitali: https://www.spreaker.com/podcast/guerre-digitali--574383
The ethics of technology is a sub-field of ethics addressing ethical questions specific to the technology age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. Technology ethics is the application of ethical thinking to growing concerns as new technologies continue to rise in prominence.
Computer ethics is a part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct.
Information ethics has been defined as "the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information, and the ethical standards and moral codes governing human conduct in society". It examines the morality that arises from information as a resource, a product, or a target. It provides a critical framework for considering moral issues concerning informational privacy, moral agency, new environmental issues, and problems arising from the life-cycle of information. It is vital that librarians, archivists, and other information professionals understand the importance of disseminating accurate information and of acting responsibly when handling it.
Luciano Floridi is an Italian and British philosopher. He is the director of the Digital Ethics Center at Yale University. He is also a Professor of Sociology of Culture and Communication at the University of Bologna, Department of Legal Studies, where he is the director of the Centre for Digital Ethics. Furthermore, he is adjunct professor at the Department of Economics, American University, Washington D.C. He is married to the neuroscientist Anna Christina Nobre.
The International Association for Computing and Philosophy (IACAP) is a professional, philosophical association emerging from a history of conferences that began in 1986. Adopting its mission from these conferences, the IACAP exists in order to promote scholarly dialogue on all aspects of the computational/informational turn and the use of computers in the service of philosophy.
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
Cyberethics is "a branch of ethics concerned with behavior in an online environment". In another definition, it is the "exploration of the entire range of ethical and moral issues that arise in cyberspace" while cyberspace is understood to be "the electronic worlds made visible by the Internet." For years, various governments have enacted regulations while organizations have defined policies about cyberethics.
Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. VSD originated within the fields of information systems design and human-computer interaction to address design issues by emphasizing the ethical values of direct and indirect stakeholders. It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and early 1990s. Later, in 2019, Batya Friedman and David Hendry wrote a book on this topic called "Value Sensitive Design: Shaping Technology with Moral Imagination". Value sensitive design takes human values into account in a well-defined manner throughout the whole design process. Designs are developed using an investigation consisting of three phases: conceptual, empirical and technological. These investigations are intended to be iterative, allowing the designer to modify the design continuously.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.
Lethal autonomous weapons (LAWs) are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may operate in the air, on land, on water, underwater, or in space. The autonomy of systems as of 2018 was restricted in the sense that a human gives the final command to attack—though there are exceptions with certain "defensive" systems.
The cyber-arms industry comprises the markets and associated events surrounding the sale of software exploits, zero-days, cyberweapons, surveillance technologies, and related tools for perpetrating cyberattacks. The term may extend to both grey and black markets, online and offline.
Shannon Vallor is an American philosopher of technology. She is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She previously taught at Santa Clara University in Santa Clara, California where she was the Regis and Dianne McKenna Professor of Philosophy and William J. Rewak, S.J. Professor at SCU.
Big data ethics, also known simply as data ethics, refers to systemizing, defending, and recommending concepts of right and wrong conduct in relation to data, in particular personal data. Since the dawn of the Internet the sheer quantity and quality of data has dramatically increased and is continuing to do so exponentially. Big data describes this large amount of data that is so voluminous and complex that traditional data processing application software is inadequate to deal with them. Recent innovations in medical research and healthcare, such as high-throughput genome sequencing, high-resolution imaging, electronic medical patient records and a plethora of internet-connected health devices have triggered a data deluge that will reach the exabyte range in the near future. Data ethics is of increasing relevance as the quantity of data increases because of the scale of the impact.
Joanna Joy Bryson is a professor at the Hertie School in Berlin. She works on artificial intelligence, ethics, and collaborative cognition. She has been a British citizen since 2007.
Limor Shmerling Magazanik is a thought leader in digital technology policy, ethics and regulation. She is an expert in data governance, privacy, AI ethics and cybersecurity policy. Since November 2018, she has been the managing director of the Israel Tech Policy Institute (ITPI) and a senior fellow at the Future of Privacy Forum. She is a visiting scholar at the Duke University Sanford School of Public Policy. Previously, for 10 years, she was director at the Israeli Privacy Protection Authority and an adjunct lecturer at the Hebrew University of Jerusalem Faculty of Law and the Interdisciplinary Center Herzliya School of Law and a research advisor at the Milken Innovation Center. Her background also includes positions in the private sector, law firms and high-tech industry. She has promoted policy initiatives in various technology sectors and has been an advocate for compliance with data protection and privacy by design.
Maria Virgínia Ferreira de Almeida Júdice Gamito Dignum is a Professor of Computer Science at Umeå University and an Associate Professor at Delft University of Technology. She leads the Social and Ethical Artificial Intelligence research group. Her research and writing consider responsible AI and the development and evaluation of human-agent teamwork, aligning with Human-Centered Artificial Intelligence themes.
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.
Eftychia ("Effy") Vayena is a Greek and Swiss bioethicist. Since 2017, she has held the chair of bioethics at ETH Zurich, the Swiss Federal Institute of Technology in Zurich. She is an elected member of the Swiss Academy of Medical Sciences.
Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.