Nate Soares

Nate Soares (born Nathaniel Soares) is an American artificial intelligence researcher known for his work on existential risk from AI.[1][2] In 2014, Soares co-authored a paper that introduced the term AI alignment, the challenge of making increasingly capable AIs behave as intended.[3][4] Soares is the president of the Machine Intelligence Research Institute (MIRI),[5] a research nonprofit based in Berkeley, California.


In a 2025 book authored with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, Soares argues that the creation of vastly smarter-than-human AI (or superintelligence) “using anything remotely like current techniques” would very likely result in human extinction.[6][7][8] He further argues that the field of AI alignment is in a nascent state, and that international regulatory intervention is likely necessary to prevent AI developers from racing to build catastrophically dangerous systems.[9]

Life

Soares received his Bachelor of Science degree (in computer science and economics) from George Washington University in 2011.[10] He worked as a research associate at the National Institute of Standards and Technology and as a contractor for the United States Department of Defense, creating software tools for the National Defense University, before spending time at Microsoft and Google.[11][12]

Soares left Google in 2014 to become a research fellow at the Machine Intelligence Research Institute.[13] He served as the lead author of MIRI's research agenda, which in January 2015 was cited heavily in Research Priorities for Robust and Beneficial Artificial Intelligence, an open letter calling for AI scientists to prioritize technical research “not only on making AI more capable, but also on maximizing the societal benefit of AI.”[14][15] This included “research on the possibility of superintelligent machines or rapid, sustained self-improvement (intelligence explosion).”[14]

Shortly after joining MIRI, Soares became the institute's executive director.[13][16] In 2017, he gave a talk at Google outlining open research problems in AI alignment, and arguing that the alignment problem looks especially difficult.[17]

In 2023, MIRI shifted its focus from alignment research to warning policymakers and the public about the risks posed by potential future developments in AI.[18] Coinciding with this change, Soares transitioned from the role of executive director to that of president, with Malo Bourgon serving as MIRI's new CEO.[18][19]

Publications

References

  1. Spirlet, Thibault (2025-09-27). "Superintelligence could wipe us out if we rush into it — but humanity can still pull back, a top AI safety expert says". Business Insider. Archived from the original on January 15, 2026. Retrieved 2026-01-22.
  2. Waters, Richard (2014-10-31). "Artificial intelligence: machine v man". Financial Times. Archived from the original on March 6, 2025. Retrieved 2026-01-22.
  3. Yudkowsky, Eliezer; Soares, Nate. "Shutdown Buttons and Corrigibility". If Anyone Builds It, Everyone Dies Online Resources. Archived from the original on December 6, 2025. Retrieved 2026-01-21. We named this broad research problem 'corrigibility,' in the 2014 paper that also introduced the term 'AI alignment problem' (which had previously been called the 'friendly AI problem' by us and the 'control problem' by others).
  4. Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility" (PDF). AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. pp. 74–82. Archived from the original (PDF) on January 1, 2026. Retrieved 2026-01-22.
  5. Whipple, Tom (2025-09-12). "AI — it's going to kill us all". The Times. Archived from the original on November 22, 2025. Retrieved 2026-01-22.
  6. Canfield, Kevin (2025-09-02). "'Everyone, everywhere on Earth, will die': Why 2 new books on AI foretell doom". San Francisco Chronicle. Archived from the original on December 29, 2025. Retrieved 2026-01-22.
  7. Louallen, Doc (2025-09-02). "New book claims superintelligent AI development is racing toward global catastrophe". ABC News. Archived from the original on January 6, 2026. Retrieved 2026-01-22.
  8. Roose, Kevin (2025-09-19). "A.I.'s Prophet of Doom Wants to Shut It All Down". The New York Times. Archived from the original on September 12, 2025. Retrieved 2026-01-22.
  9. Bordelon, Brendan (2025-10-01). "Why DC should be way more worried about AI". Politico. Retrieved 2026-01-22. In his view, companies like OpenAI and Anthropic are racing toward the profoundly dangerous goal of artificial superintelligence — an AI that far outstrips the capabilities of any one human, or even all humans combined. Once that comes into existence, Soares argues, it will be impossible to control[....] Soares sketched out a policy vision to stave off humanity's demise. His ask is in one sense simple: a global ban on advanced AI research.
  10. "News of Area Students". White River Valley Herald. 2011-05-19. Retrieved 2026-01-26.
  11. Soares, Nate. "Nathaniel Soares" (PDF). Résumé. Archived (PDF) from the original on September 14, 2024. Retrieved 2026-01-26.
  12. "If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species". The Great Simplification (Podcast). 2025-12-03. Retrieved 2026-01-27.
  13. O'Connell, Mark (2017-02-28). "If Silicon Valley Types Are Scared of A.I., Should We Be?". Slate. Archived from the original on March 5, 2019. Retrieved 2026-01-27.
  14. Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence" (PDF). AI Magazine. 36 (4). Association for the Advancement of Artificial Intelligence: 105–114. doi:10.1609/aimag.v36i4.2577.
  15. Soares, Nate; Fallenstein, Benya (2017). "Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda" (PDF). In Callaghan, Victor; Miller, James; Yampolskiy, Roman; Armstrong, Stuart (eds.). The Technological Singularity. Springer. ISBN 9783662540312.
  16. Gallagher, Brian (2018-03-12). "Scary AI Is More "Fantasia" Than "Terminator"". Nautilus. Archived from the original on December 14, 2025. Retrieved 2026-01-28.
  17. Chivers, Tom (2018-07-16). "How Disney shows an AI apocalypse is possible". UnHerd. Archived from the original on January 3, 2026. Retrieved 2026-01-28.
  18. Duleba, Gretta (2023-10-10). "Announcing MIRI's new CEO and leadership team". MIRI Updates. Machine Intelligence Research Institute. Archived from the original on January 4, 2025. Retrieved 2026-01-28.
  19. Gold, Ashley; Curi, Maria (2023-12-05). "Senate AI forums wrap for 2023 this week". Axios. Retrieved 2026-01-28.