Center for Security and Emerging Technology

Formation: 2019
Type: Think tank
Purpose: Technology & security
Headquarters: Washington, D.C., U.S.
Founding Director: Jason Gaverick Matheny
Executive Director: Dewey Murdick
Parent organization: School of Foreign Service, Georgetown University
Website: cset.georgetown.edu

The Center for Security and Emerging Technology (CSET) is a think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies, based at Georgetown University's School of Foreign Service.

Its mission is to study the security impacts of emerging technologies by analyzing data, supporting academic work in security and technology studies, and delivering nonpartisan analysis to the policy community.[1] CSET focuses particularly on the intersection of security and artificial intelligence (AI).[2] It addresses topics such as national competitiveness,[3] opportunities related to AI,[4] talent and knowledge flows,[5] AI safety assessments,[6] and AI applications in biotechnology[7] and computer security.[8]

CSET's founding director, Jason Gaverick Matheny, previously served as the director of the Intelligence Advanced Research Projects Activity.[9] Its current executive director is Dewey Murdick, former Chief Analytics Officer and Deputy Chief Scientist within the Department of Homeland Security.[10]

Established in January 2019, CSET has received more than $57 million in funding from the Open Philanthropy Project,[11] the William and Flora Hewlett Foundation,[12] and the Public Interest Technology University Network. CSET has faced criticism over its ties to the effective altruism movement.[13]

Publications

CSET produces a biweekly newsletter, policy.ai.[14] It has published research on various aspects of the intersection between artificial intelligence and security, including changes to the U.S. AI workforce,[15] the effect of immigration laws on the AI sector,[16] and technology transfer overseas.[17] Its research output includes policy briefs and longer published reports.[18]

A January 2023 study[19] by CSET, OpenAI, and the Stanford Internet Observatory, covered by Forbes, warned: "There are also possible negative applications of generative language models, or 'language models' for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor."[20]

In May 2023, Chinese officials announced that they would close off some foreign access to their public information in response to studies from think tanks like CSET, citing concerns about cooperation between the U.S. military and the private sector.[21]

In September 2024 testimony before the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party, former CSET employee Anna B. Puglisi stated that she had received legal threats alleging libel from BGI Group over a report she wrote while serving at CSET, and that Georgetown University had refused to legally indemnify her for the report.[22] Following the testimony, a Georgetown University representative stated that it "stand[s] fully behind the report" and is "prepared to defend the report and its authors should the letters lead to formal legal action."[22]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

<span class="mw-page-title-main">Jason Gaverick Matheny</span> American national security expert

Jason Gaverick Matheny is a United States national security expert serving as president and CEO of the RAND Corporation since July 2022. He was previously a senior appointee in the Biden administration from March 2021 to June 2022. He served as deputy assistant to the president for technology and national security, deputy director for national security in the White House Office of Science and Technology Policy, and coordinator for technology and national security at the White House National Security Council.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

Artificial intelligence (AI) has been used in applications throughout industry and academia. In a manner analogous to electricity or computers, AI serves as a general-purpose technology that has numerous applications, including language translation, image recognition, decision-making, credit scoring and e-commerce. As a scientific discipline, AI includes the development of machines which can perceive, understand, act and learn.

Music and artificial intelligence (AI) is the development of music software programs which use AI to generate music. As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment. Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing.

iFlytek: Chinese technology company

iFlytek, styled as iFLYTEK, is a partially state-owned Chinese information technology company established in 1999. It creates voice recognition software and more than ten voice-based internet and mobile products covering the education, communication, music, and intelligent toy industries. State-owned enterprise China Mobile is the company's largest shareholder. The company is listed on the Shenzhen Stock Exchange and is backed by several state-owned investment funds.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions.

The artificial intelligence industry in China is a rapidly developing multi-billion-dollar industry. The roots of China's AI development started in the late 1970s following Deng Xiaoping's economic reforms emphasizing science and technology as the country's primary productive force.

<span class="mw-page-title-main">Terah Lyons</span> American technology policy scholar

Terah Lyons is known for her work in the field of artificial intelligence and technology policy. Lyons was the executive director of the Partnership on AI and was a policy advisor to the United States Chief Technology Officer Megan Smith in President Barack Obama's Office of Science and Technology Policy.

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but it is challenging. Another emerging topic is the regulation of blockchain algorithms, often mentioned alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting due to technological progress into the realm of AI algorithms.

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

<span class="mw-page-title-main">G42 (company)</span> Artificial Intelligence company

Group 42 Holding Ltd, doing business as G42, is an Emirati artificial intelligence (AI) development holding company based in Abu Dhabi, founded in 2018. The organization is focused on AI development across various industries including government, healthcare, finance, oil and gas, aviation, and hospitality. Tahnoun bin Zayed Al Nahyan, the UAE's national security advisor, is the controlling shareholder and chairs the company. Because G42 had strong ties to China, U.S. authorities were concerned that G42 served as a channel through which sophisticated U.S. technology was diverted to Chinese companies or the government. As of February 2024, G42 had divested its stakes in China.

<span class="mw-page-title-main">Meta AI</span> Artificial intelligence division of Meta Platforms

Meta AI is an American company owned by Meta that develops artificial intelligence and augmented and artificial reality technologies. Meta AI deems itself an academic research laboratory, focused on generating knowledge for the AI community, and should not be confused with Meta's Applied Machine Learning (AML) team, which focuses on the practical applications of its products.

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.

<span class="mw-page-title-main">AI boom</span> Ongoing period of rapid progress in artificial intelligence

The AI boom, or AI spring, is an ongoing period of rapid progress in the field of artificial intelligence (AI) that started in the late 2010s before gaining international prominence in the early 2020s. Examples include protein folding prediction led by Google DeepMind and generative AI applications developed by OpenAI.

The Special Competitive Studies Project (SCSP) is a non-partisan U.S. think tank and private foundation focused on technology and security. Founded by former Google CEO Eric Schmidt in October 2021, SCSP's stated mission is to "make recommendations to strengthen America’s long-term competitiveness as artificial intelligence (AI) and other emerging technologies are reshaping our national security, economy, and society." It seeks to ensure that "America is positioned and organized to win the techno-economic competition between now and 2030."

<span class="mw-page-title-main">Tarun Chhabra</span> American lawyer and security analyst

Tarun Chhabra is an American lawyer and security analyst currently serving as Senior Director for Technology and National Security at the United States National Security Council in the Biden administration.

Helen Toner is an Australian researcher and the director of strategy at Georgetown’s Center for Security and Emerging Technology. She was a board member of OpenAI when CEO Sam Altman was fired.

Anna B. Puglisi is an American security analyst who is a visiting fellow at the Hoover Institution. She previously served as biotechnology program director and a senior fellow at Georgetown University's Center for Security and Emerging Technology (CSET). She is also a member of the Center for a New American Security's BioTech Task Force. She was the U.S. National Counterintelligence Officer for East Asia between 2019 and 2020 in the National Counterintelligence and Security Center (NCSC).

Artificial intelligence (AI) is a broad umbrella term covering a cluster of related areas of study, including machine learning, natural language processing, the philosophy of artificial intelligence, autonomous robots, and TESCREAL. AI in education (AIED) likewise spans a variety of research areas, including anthropomorphism, generative artificial intelligence, data-driven decision-making, AI ethics, classroom surveillance, data privacy, and AI literacy.

References

  1. "About Us". Center for Security and Emerging Technology. January 2019. Archived from the original on March 25, 2019. Retrieved June 30, 2019.
  2. "Georgetown launches new $55 million center on security & emerging technology". Institute for Technology, Law and Policy. February 28, 2019. Archived from the original on June 30, 2019. Retrieved June 30, 2019.
  3. "Compete". Center for Security and Emerging Technology. Retrieved 2024-09-03.
  4. "Applications". Center for Security and Emerging Technology. Retrieved 2024-09-03.
  5. "Workforce". Center for Security and Emerging Technology. Retrieved 2024-09-03.
  6. "Assessment". Center for Security and Emerging Technology. Archived from the original on 2024-09-23. Retrieved 2024-09-03.
  7. "Bio-Risk". Center for Security and Emerging Technology. Archived from the original on 2024-09-23. Retrieved 2024-09-03.
  8. "CyberAI". Center for Security and Emerging Technology. Archived from the original on 2024-09-23. Retrieved 2024-09-03.
  9. Anderson, Nick (February 28, 2019). "Georgetown launches think tank on security and emerging technology". The Washington Post. Archived from the original on May 14, 2019. Retrieved June 30, 2019.
  10. "Dewey Murdick". Center for Security and Emerging Technology. Archived from the original on 2024-09-23. Retrieved 2023-06-08.
  11. "Georgetown University — Center for Security and Emerging Technology". Open Philanthropy Project. January 2019. Archived from the original on June 29, 2019. Retrieved June 30, 2019.
  12. "Hewlett Foundation". October 8, 2019. Archived from the original on September 23, 2024. Retrieved December 8, 2019.
  13. Bordelon, Brendan (October 13, 2023). "How a billionaire-backed network of AI advisers took over Washington". Politico. Archived from the original on October 13, 2023. Retrieved May 12, 2024.
  14. "Newsletters". Center for Security and Emerging Technology. Archived from the original on 2024-09-23. Retrieved 2024-09-03.
  15. "U.S. AI Workforce". Center for Security and Emerging Technology. April 2021. Archived from the original on 2024-09-23. Retrieved 2024-09-03.
  16. "Immigration Policy and the U.S. AI Sector" (PDF). Center for Security and Emerging Technology. September 2019. Archived (PDF) from the original on 2019-12-08. Retrieved 2020-01-05.
  17. "China's Access to Foreign AI Technology" (PDF). Center for Security and Emerging Technology. September 2019. Archived (PDF) from the original on 2020-07-31. Retrieved 2020-01-05.
  18. "Georgetown University". September 2019. Archived from the original on December 8, 2019. Retrieved December 8, 2019.
  19. Goldstein, Josh A.; Sastry, Girish; Musser, Micah; DiResta, Renee; Gentzel, Matthew; Sedova, Katerina (January 2023). "Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations" (PDF). cdn.openai.com. arXiv:2301.04246. Archived (PDF) from the original on 2024-09-23. Retrieved 2023-09-06.
  20. Vigdor, Dan. "Council Post: How Could Artificial Intelligence Impact Cybersecurity?". Forbes. Archived from the original on 2024-09-23. Retrieved 2023-09-06.
  21. "China limits overseas access to data". Taipei Times . Bloomberg News. 2023-05-09. Archived from the original on 2024-09-23. Retrieved 2023-09-06.
  22. Quinn, Jimmy (2024-09-23). "Ex-Georgetown Researcher Claims School Has Withheld Support amid Chinese Biotech Firm's Threats". National Review. Archived from the original on 2024-09-23. Retrieved 2024-09-23.