| Adam H. Russell | |
| --- | --- |
| Alma mater | Duke University; University of Oxford |
| Scientific career | |
| Fields | Applied anthropology, program management |
| Institutions | Intelligence Advanced Research Projects Activity (IARPA); DARPA; University of Maryland; Advanced Research Projects Agency for Health (ARPA-H); USC Information Sciences Institute; NIST/US AI Safety Institute |
Adam H. Russell is an American anthropologist who serves as Chief Vision Officer of the U.S. AI Safety Institute. [1] He previously served as the acting deputy director of the Advanced Research Projects Agency for Health.
Russell completed a Bachelor of Arts in Cultural Anthropology at Duke University, and an M.Phil. and a D.Phil. in Social Anthropology at the University of Oxford, where he was a Rhodes Scholar. [2] He played four varsity matches for Oxford University RFC and represented the United States on the men's national team, becoming Eagle #368. He later worked with the United States national rugby union team and served as High Performance director for the United States women's national rugby union team at the 2014 and 2017 Women's Rugby World Cups. [3]
Russell began his career in industry, where he was a senior scientist and principal investigator on a wide range of human performance and social science research projects and provided strategic assessments for a number of government organizations. [2] [4] In 2009, he joined the Intelligence Advanced Research Projects Activity (IARPA) as a program manager, [2] [4] developing and managing a number of high-risk, high-payoff research projects for the Office of the Director of National Intelligence. [2] Russell joined DARPA as a program manager in July 2015. [2] [4] His work there focused on advancing capabilities for understanding and tackling problems in the Human Domain, [5] including the creation of new experimental platforms and tools to facilitate discovery, quantification, and "big validation" of fundamental measures, research, and tools in social science, behavioral science, and human performance. [2] His term at DARPA ended in 2020, when he left to become Chief Scientist of the Applied Research Laboratory for Intelligence and Security (ARLIS) [6] at the University of Maryland. [2] [3]
In 2022, while Russell was still at ARLIS, HHS Secretary Xavier Becerra selected him to serve as the acting deputy director of the Advanced Research Projects Agency for Health (ARPA-H). In that role, Russell helped lead the effort to stand up ARPA-H before the inaugural director was selected and onboarded. [7]
In 2023, Russell became Director of the Artificial Intelligence Division at USC's Information Sciences Institute, where he currently works. [8] He also hosts USC ISI's "AI/nsiders" podcast, [9] which he launched on the premise that better understanding AI might also mean better understanding the humans working with, on, and around it.
As of April 2024, he is also serving, under an Intergovernmental Personnel Act (IPA) assignment, as the Chief Vision Officer for NIST's AI Safety Institute, [10] where he is supporting the stand-up of the USAISI [11] and its organizational vision, mission, strategy, and design in order to, in his words, help ensure that AI safety leads to "the best of all possible worlds."