Center for Security and Emerging Technology

Formation: 2019
Type: Think tank
Purpose: Technology & security
Headquarters: Washington, D.C., U.S.
Founding Director: Jason Gaverick Matheny
Executive Director: Dewey Murdick
Parent organization: School of Foreign Service, Georgetown University
Website: cset.georgetown.edu

The Center for Security and Emerging Technology (CSET) is a think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies, based at Georgetown University's School of Foreign Service.

Its mission is to study the security impacts of emerging technologies by analyzing data, supporting academic work in security and technology studies, and delivering nonpartisan analysis to the policy community.[1] CSET focuses particularly on the intersection of security and artificial intelligence (AI).[2] It addresses topics such as national competitiveness,[3] opportunities related to AI,[4] talent and knowledge flows,[5] AI safety assessments,[6] and AI applications in biotechnology[7] and computer security.[8]

CSET's founding director, Jason Gaverick Matheny, previously served as the director of the Intelligence Advanced Research Projects Activity.[9] Its current executive director is Dewey Murdick, former Chief Analytics Officer and Deputy Chief Scientist at the Department of Homeland Security.[10]

Established in January 2019, CSET has received more than $57 million in funding from the Open Philanthropy Project,[11] the William and Flora Hewlett Foundation,[12] and the Public Interest Technology University Network. CSET has faced criticism over its ties to the effective altruism movement.[13]

Publications

CSET produces a biweekly newsletter, policy.ai.[14] It has published research on various aspects of the intersection between artificial intelligence and security, including changes to the U.S. AI workforce,[15] immigration laws' effect on the AI sector,[16] and technology transfer overseas.[17] Its research output includes policy briefs and longer published reports.[18]

A study published in January 2023 by CSET, OpenAI, and the Stanford Internet Observatory,[19] covered by Forbes, warned that "There are also possible negative applications of generative language models, or 'language models' for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor's interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor."[20]

In May 2023, Chinese officials announced that they would restrict foreign access to some of the country's public information in response to studies from think tanks such as CSET, citing concerns about cooperation between the U.S. military and the private sector.[21]

References

  1. "About Us". Center for Security and Emerging Technology. January 2019. Retrieved June 30, 2019.
  2. "Georgetown launches new $55 million center on security & emerging technology". Institute for Technology, Law and Policy. February 28, 2019. Archived from the original on June 30, 2019. Retrieved June 30, 2019.
  3. "Compete". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  4. "Applications". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  5. "Workforce". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  6. "Assessment". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  7. "Bio-Risk". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  8. "CyberAI". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  9. Anderson, Nick (February 28, 2019). "Georgetown launches think tank on security and emerging technology". The Washington Post. Retrieved June 30, 2019.
  10. "Dewey Murdick". Center for Security and Emerging Technology. Retrieved June 8, 2023.
  11. "Georgetown University — Center for Security and Emerging Technology". Open Philanthropy Project. January 2019. Retrieved June 30, 2019.
  12. "Hewlett Foundation". October 8, 2019. Retrieved December 8, 2019.
  13. Bordelon, Brendan (October 13, 2023). "How a billionaire-backed network of AI advisers took over Washington". Politico.
  14. "Newsletters". Center for Security and Emerging Technology. Retrieved September 3, 2024.
  15. "U.S. AI Workforce". Center for Security and Emerging Technology. April 2021. Retrieved September 3, 2024.
  16. "Immigration Policy and the U.S. AI Sector" (PDF). Center for Security and Emerging Technology. September 2019.
  17. "China's Access to Foreign AI Technology" (PDF). Center for Security and Emerging Technology. September 2019.
  18. "Georgetown University". September 2019. Retrieved December 8, 2019.
  19. "Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations" (PDF). cdn.openai.com. January 2023. arXiv:2301.04246.
  20. Vigdor, Dan. "Council Post: How Could Artificial Intelligence Impact Cybersecurity?". Forbes. Retrieved September 6, 2023.
  21. "China limits overseas access to data". Taipei Times. Bloomberg. May 9, 2023. Retrieved September 6, 2023.