| Leopold Aschenbrenner | |
|---|---|
| Born | 2001 or 2002 (age 23–24), Germany |
| Education | John F. Kennedy School; Columbia University |
| Occupation(s) | AI researcher, investor |
| Employer | OpenAI (2023–2024) |
| Notable work | Situational Awareness |
Leopold Aschenbrenner (born 2001 or 2002)[1] is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI's "Superalignment" team before he was fired in April 2024 over an alleged information leak. He has published a popular essay called "Situational Awareness" about the emergence of artificial general intelligence and related security risks.[2]
Aschenbrenner was born in Germany. He was educated at the John F. Kennedy School in Berlin and graduated as valedictorian from Columbia University in 2021 at the age of 19.[1][3] He did research for the Global Priorities Initiative at Oxford University and co-authored a 2024 working paper with Philip Trammell of Oxford. He has also been involved in the effective altruism movement.[4] As part of the movement, Aschenbrenner was a member of the FTX Future Fund team, a philanthropic initiative created by the FTX Foundation, from February 2022 until FTX's bankruptcy in November of that year.[5][6]
Aschenbrenner joined OpenAI in 2023, working on the "Superalignment" project, which researched how potential future superintelligences could be aligned with human values.[7]
In April 2023, a hacker gained access to OpenAI's internal messaging system and stole information, an event that OpenAI kept private.[8] Subsequently, Aschenbrenner wrote a memo to OpenAI's board of directors about the possibility of industrial espionage by Chinese and other foreign entities, arguing that OpenAI's security was insufficient. According to Aschenbrenner, this memo led to tensions between the board and the leadership over security, and he received a warning from human resources. OpenAI fired him in April 2024 over an alleged information leak, which Aschenbrenner said concerned a benign brainstorming document he had shared with three external researchers for feedback. OpenAI stated that the firing was unrelated to the security memo, whereas Aschenbrenner said that it was made explicit to him at the time that the memo was a major reason.[9][10] The "Superalignment" team was dissolved one month later, with the departure from OpenAI of other researchers such as Ilya Sutskever and Jan Leike.[11]
Aschenbrenner said that he started an investment firm with investors Patrick and John Collison, Daniel Gross, and Nat Friedman.[12][13] Named after his essay, the AI-focused hedge fund Situational Awareness manages over $1.5 billion as of 2025.[14]
In 2024, Aschenbrenner wrote a 165-page essay titled "Situational Awareness: The Decade Ahead". Its sections predict the emergence of AGI, imagine a path from AGI to superintelligence, describe four risks to humanity, outline a way for humans to deal with superintelligent machines, and articulate the principles of an "AGI realism". He specifically warns that the United States needs to defend against the use of AI technologies by countries such as Russia and China.[13] His analysis rests on the future capacity of AI systems to conduct AI research themselves, what a Forbes writer referred to as "recursive self-improvement and runaway superintelligence."[15]