| Abbreviation | IPIE |
| --- | --- |
| Established | December 2023 |
| Type | Scientific body, non-governmental organization |
| Headquarters | Zurich, Switzerland |
| Website | www |
The International Panel on the Information Environment (IPIE) is an international consortium of over 250 experts [1] from 55 countries dedicated to providing actionable scientific knowledge on threats to the global information environment. The organization has been compared with the Intergovernmental Panel on Climate Change, as well as with CERN and the IAEA, because it uses the model of scientific panels and neutral assessments to identify points of consensus or gaps in knowledge. [2] The IPIE was legally registered as a charitable entity in the Canton of Zurich, Switzerland, in 2023. [3]
The first panel was a Scientific Panel on Global Standards on AI Auditing, chaired by Professor Wendy Chun and Professor Alondra Nelson. At the UN Summit of the Future in September 2024, the IPIE announced the formation of a Scientific Panel on Information Integrity about Climate Science, a Scientific Panel on Child Protection and Social Media, and a Scientific Panel on AI and Peacebuilding. [4] [5]
The concept was proposed in 2021 during the first Nobel Prize Summit, organized by the US National Academy of Sciences and the Nobel Foundation, by Dr. Sheldon Himelfarb, then head of PeaceTech Lab, [6] and Professor Philip N. Howard of Oxford University, then Director of the Oxford Internet Institute. [7] In September 2022, thirty scientists met at Oxford University to develop a mission statement, organizational structure, and process for developing scientific consensus. This chartering group included researchers from the social, behavioral, and computer sciences. Over time, similar calls to create an independent body have come from public science agencies, civil society, philanthropy, and the technology firms themselves. Some proposals focused exclusively on AI, others on a host of technology-related harms, but there has been strong consensus that the body would need financial independence from technology firms and governments, could not be credibly managed by a steering committee of nation states, and would not function effectively within the UN system. [8] A larger group of scientists convened in Costa Rica in February 2023 to continue planning.
In May 2023, the IPIE was publicly launched during the Nobel Prize Summit in Washington, DC. [9] The Panel's inaugural announcement said:
Algorithmic bias, manipulation and misinformation has become a global and existential threat that exacerbates existing social problems, degrades public life, cripples humanitarian initiatives and prevents progress on other serious threats.
A New York Times report on the Panel's launch described its initial plans to "issue regular reports, not fact-checking individual falsehoods but rather looking for deeper forces behind the spread of disinformation as a way to guide government policy." [10]
The CEO of the IPIE is Dr. Philip N. Howard, [11] who is also the director of Oxford University's Programme on Democracy and Technology. [12] Jenny Woods is the Executive Director and COO of the IPIE, whose Secretariat is based in Zurich. [13] The organization is governed by a small Board of Trustees, a system of permanent methodology, ethics, and membership committees, and limited-term scientific panels on particular topics. Dr. Sheldon Himelfarb is co-founder and chair of the IPIE Board of Trustees. [14]
The organization is neutral and nonpartisan, but it seeks better access to data from technology companies in order to appraise the impact of new technologies such as AI on public life. [15]