Center for Security and Emerging Technology

Formation: 2019
Type: Think tank
Purpose: Technology & security
Headquarters: Washington, D.C., U.S.
Founding Director: Jason Gaverick Matheny
Executive Director: Dewey Murdick
Parent organization: School of Foreign Service, Georgetown University
Website: cset.georgetown.edu

The Center for Security and Emerging Technology (CSET) is a think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies, based at Georgetown University's School of Foreign Service. CSET's founding director is the former director of the Intelligence Advanced Research Projects Activity, Jason Gaverick Matheny. [1] Its current executive director is Dewey Murdick, former Chief Analytics Officer and Deputy Chief Scientist within the Department of Homeland Security. [2]

Established in January 2019, CSET has received more than $57 million in funding from the Open Philanthropy Project, [3] the William and Flora Hewlett Foundation, [4] and the Public Interest Technology University Network. CSET has faced criticism over its ties to the effective altruist movement. [5]

Its mission is to study the security impacts of emerging technologies, support academic work in security and technology studies, and deliver nonpartisan analysis to the policy community. [6] For its first two years, CSET planned to focus on the intersection of security and artificial intelligence (AI), particularly on national competitiveness, talent and knowledge flows, and relationships with other technologies. [7] CSET is the largest center in the U.S. focused on AI and policy. [8]

Public events

In September 2019, CSET co-hosted the George T. Kalaris Intelligence Conference, which featured speakers from academia, the U.S. government and the private sector. [9]

Publications

CSET produces a biweekly newsletter, policy.ai. It has published research on various aspects of the intersection between artificial intelligence and security, including changes to the U.S. AI workforce, immigration laws' effect on the AI sector, and technology transfer overseas. Its research output includes policy briefs and longer published reports. [10]

A study published in January 2023 by CSET, OpenAI, and the Stanford Internet Observatory, and covered by Forbes, warned that "There are also possible negative applications of generative language models, or ‘language models’ for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor." [11]

In May 2023, Chinese officials announced that they would be closing off some foreign access to their public information as a result of studies from think tanks like CSET, citing concerns about cooperation between the U.S. military and the private sector. [12]

Related Research Articles

Center for Strategic and International Studies: American think tank in Washington, D.C.

The Center for Strategic and International Studies (CSIS) is an American think tank based in Washington, D.C. From its founding in 1962 until 1987, it was an affiliate of Georgetown University, initially named the Center for Strategic and International Studies of Georgetown University. The center conducts policy studies and strategic analyses of political, economic and security issues throughout the world, with a focus on issues concerning international relations, trade, technology, finance, energy and geostrategy.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

Jason Gaverick Matheny: American national security expert

Jason Gaverick Matheny is a United States national security expert serving as president and CEO of the RAND Corporation since July 2022. He was previously a senior appointee in the Biden administration from March 2021 to June 2022. He served as deputy assistant to the president for technology and national security, deputy director for national security in the White House Office of Science and Technology Policy and coordinator for technology and national security at the White House National Security Council.

GiveWell is an American non-profit charity assessment and effective altruism-focused organization. GiveWell focuses primarily on the cost-effectiveness of the organizations that it evaluates, rather than traditional metrics such as the percentage of the organization's budget that is spent on overhead.

Global catastrophic risk: Potentially harmful worldwide events

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk."

Daniel Byman: American university professor

Daniel L. Byman is one of the world's leading researchers on terrorism, counterterrorism and the Middle East. Byman is a professor in Georgetown University's Walsh School of Foreign Service and director of Georgetown's Security Studies Program. He is a former vice dean of the school.

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions.

The artificial intelligence (AI) industry in China is a rapidly developing multi-billion dollar industry. The roots of China's AI development started in the late 1970s following Deng Xiaoping's economic reforms emphasizing science and technology as the country's primary productive force.

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary to both encourage AI and manage its associated risks, but it remains challenging. Another emerging topic is the regulation of blockchain algorithms, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting due to technological progress into the realm of AI algorithms.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.

Mariarosaria Taddeo is an Italian philosopher working on the ethics of digital technologies. She is Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford, and Dstl Ethics Fellow at the Alan Turing Institute, London.

Anja Kaspersen: Norwegian diplomat and academic

Anja Kaspersen is a director for Global Markets Development, New Frontiers and Emerging Spaces at IEEE, the world's largest technical professional organisation. Kaspersen is also a senior fellow at Carnegie Council for Ethics in International Affairs where she co-directs the Artificial Intelligence Equality Initiative with Wendell Wallach. With scholars and thinkers in the field of technology governance, supported by Carnegie Council for Ethics in International Affairs and IEEE, Kaspersen and Wallach provided a Proposal for International governance of AI.

Abishur Prakash: Canadian futurist and author (born 1991)

Abishur Prakash is a Canadian businessman, author, and geopolitical expert. He is the chief executive officer and founder of The Geopolitical Business, an advisory firm based in Toronto, Canada. Prior to this, he worked as a futurist at Center for Innovating the Future, a foresight agency.

National Security Commission on Artificial Intelligence: US independent commission

The National Security Commission on Artificial Intelligence (NSCAI) was an independent commission of the United States of America established in 2018 to make recommendations to the President and Congress to "advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States".

James Manyika: Zimbabwean-American consultant, researcher and writer

James M. Manyika is a Zimbabwean-American academic, consultant, and business executive. He is known for his research and scholarship on the intersection of technology and the economy, including artificial intelligence, robotics and automation, and the future of work. He is Google's first Senior Vice President of Technology and Society, reporting directly to Google CEO Sundar Pichai. He focuses on "shaping and sharing" the company's view on the way tech affects society, the economy, and the planet. In April 2023, his role was expanded to Senior Vice President for Research, Technology & Society, which includes overseeing Google Research and Google Labs and focusing more broadly on helping advance Google’s most ambitious innovations in AI, computing and science responsibly. He is also Chairman Emeritus of the McKinsey Global Institute.

The Special Competitive Studies Project (SCSP) is a non-partisan U.S. think tank and private foundation focused on technology and security. Founded by former Google CEO Eric Schmidt in October 2021, SCSP's stated mission is to "make recommendations to strengthen America’s long-term competitiveness as artificial intelligence (AI) and other emerging technologies are reshaping our national security, economy, and society." It seeks to ensure that "America is positioned and organized to win the techno-economic competition between now and 2030."

Tarun Chhabra: American lawyer

Tarun Chhabra is an American lawyer and security analyst currently serving as Senior Director for Technology and National Security at the United States National Security Council in the Biden administration.

Helen Toner is an Australian researcher and former board member of OpenAI.

Anna B. Puglisi is an American security analyst currently serving as biotechnology program director and a senior fellow at Georgetown University's Center for Security and Emerging Technology (CSET). She is also a member of the Center for a New American Security's BioTech Task Force. She was the U.S. National Counterintelligence Officer for East Asia between 2019 and 2020 in the National Counterintelligence and Security Center (NCSC).

References

  1. Anderson, Nick (February 28, 2019). "Georgetown launches think tank on security and emerging technology". The Washington Post. Retrieved June 30, 2019.
  2. "Dewey Murdick". Center for Security and Emerging Technology. Retrieved 2023-06-08.
  3. "Georgetown University — Center for Security and Emerging Technology". Open Philanthropy Project. January 2019. Retrieved June 30, 2019.
  4. "Hewlett Foundation". October 8, 2019. Retrieved December 8, 2019.
  5. Bordelon, Brendan (October 13, 2023). "How a billionaire-backed network of AI advisers took over Washington". Politico.
  6. "About Us". Center for Security and Emerging Technology. January 2019. Retrieved June 30, 2019.
  7. "Georgetown launches new $55 million center on security & emerging technology". Institute for Technology, Law and Policy. February 28, 2019. Archived from the original on June 30, 2019. Retrieved June 30, 2019.
  8. "Q&A with Jason Matheny, Founding Director of the Center for Security and Emerging Technology". Georgetown University. January 2019. Retrieved June 30, 2019.
  9. "Georgetown University". September 2019. Retrieved December 8, 2019.
  10. "Georgetown University". September 2019. Retrieved December 8, 2019.
  11. Vigdor, Dan. "Council Post: How Could Artificial Intelligence Impact Cybersecurity?". Forbes. Retrieved 2023-09-06.
  12. "China limits overseas access to data - Taipei Times". www.taipeitimes.com. 2023-05-09. Retrieved 2023-09-06.