| Kanta Dihal | |
|---|---|
| Born | |
| Nationality | Dutch |
| Alma mater | Leiden University (B.A., B.A., M.A.); University of Oxford (Ph.D.) |
| Known for | Artificial intelligence in fiction; science communication; AI ethics |
| **Scientific career** | |
| Fields | Science communication |
| Institutions | Science Communication Unit, Imperial College London |
| Thesis | *The stories of quantum physics* (2018) |
| Doctoral advisors | Sally Shuttleworth, Michael Whitworth |
| Website | https://kantadihal.com/ |
Kanta Dihal is a Dutch research scientist who works at the intersection of artificial intelligence, science communication, literature, and ethics. She is currently a lecturer in science communication at Imperial College London. Dihal is co-editor of the books AI Narratives: A History of Imaginative Thinking About Intelligent Machines and Imagining AI: How the World Sees Intelligent Machines.
Dihal received a Bachelor of Arts in English Language and Culture in 2011, a Bachelor of Arts in Film and Literary Studies in 2012, and a Master of Arts in Literary Studies in 2014, all from Leiden University. [1] She completed her Ph.D. in Science Communication at the University of Oxford in 2018. [2] Her thesis, advised by Sally Shuttleworth and Michael Whitworth, explored the communication of conflicting interpretations of quantum physics to adults and children. [3]
Dihal's research intersects the fields of AI ethics, science communication, literature and science, and science fiction.
She is currently a lecturer in science communication at Imperial College London. [4] Prior to this, she worked as a senior research fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she led two research projects: Global AI Narratives [5] and Decolonizing AI.[ citation needed ] The Global AI Narratives project explores the public understanding of AI as constructed by fictional and nonfictional narratives, spanning ancient classics like the Iliad to modern films like Steven Spielberg's A.I. [6] [5] With her colleagues, she documents how AI is understood and developed around the world, and the consequences this has for diversity and equality. [7] In her work for the Decolonizing AI project, Dihal examines how AI is portrayed in media, stock images, and dialect, often with "white" depictions, and warns of the risk of creating a "homogeneous" workforce of technologists in which people of colour are erased. [8]
Dihal is co-editor of the book AI Narratives: A History of Imaginative Thinking About Intelligent Machines, alongside Stephen Cave and Sarah Dillon. [9] The book is a collection of essays examining how narrative representations of AI have shaped technological development, understanding of humans, and the social and political orders that emerge from their relationships. The Times Literary Supplement described it as a "compelling collection" that "shows how AI narratives have prompted critical reflection on human-machine relations". [10]