| Roman Yampolskiy (Роман Ямпольский) | |
|---|---|
| Born | Roman Vladimirovich Yampolskiy; Riga, Latvian SSR, Soviet Union |
| Alma mater | University at Buffalo |
| Fields | Computer science |
| Institutions | University of Louisville |
Roman V. Yampolskiy (Russian: Роман Владимирович Ямпольский) is a computer scientist at the University of Louisville, best known for his work on AI safety and cybersecurity.
Yampolskiy was born in Riga, Latvia. [1] He received a PhD from the University at Buffalo in 2008. [2]
Yampolskiy is the founder and, as of 2012, director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the University of Louisville's Speed School of Engineering. [3]
Yampolskiy is considered to have coined the term "AI safety" in a 2011 publication, and is an early researcher in the field. [4] [5]
Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence. [6] More broadly, in 2018 Yampolskiy and his collaborator Michaël Trazzi proposed introducing "Achilles' heels" into potentially dangerous AI, for example by barring an AI from accessing and modifying its own source code. [7] [8] Another proposal is to apply a "security mindset" to AI safety, itemizing potential outcomes in order to better evaluate proposed safety mechanisms. [9]
He has suggested that there is no evidence of a solution to the AI control problem and has proposed pausing AI development, arguing that "Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it". [10] [11] He joined AI researchers such as Yoshua Bengio and Stuart Russell in signing "Pause Giant AI Experiments: An Open Letter". [12]
In an appearance on the Lex Fridman podcast in 2024, Yampolskiy put the chance that AI could lead to human extinction at "99.9% within the next hundred years". [13] In 2025, he warned that AI could leave 99% of workers unemployed by 2030. [5] [14]
Yampolskiy has been a research advisor of the Machine Intelligence Research Institute, and an AI safety fellow of the Foresight Institute. [15]
In 2015, Yampolskiy launched "intellectology", a new field of study founded to analyze the forms and limits of intelligence. [16] [17] [18] Yampolskiy considers the study of AI to be a sub-field of intellectology. [16] An example of Yampolskiy's intellectology work is an attempt to determine the relationship between various types of minds and the accessible "fun space", i.e. the space of non-boring activities. [19]
Yampolskiy has worked on developing the theory of AI-completeness, suggesting the Turing Test as a defining example. [20]