| Industry | Artificial intelligence |
| --- | --- |
| Founded | June 19, 2024 |
| Founders | Ilya Sutskever, Daniel Gross, Daniel Levy |
| Headquarters | Palo Alto, California; Tel Aviv, Israel |
| Website | ssi |
Safe Superintelligence Inc., or SSI Inc., is an American artificial intelligence company founded by Ilya Sutskever (OpenAI's former chief scientist), Daniel Gross (former head of AI at Apple), and Daniel Levy (investor and AI researcher). The company's stated mission is to safely develop a superintelligence: a computer-based agent capable of surpassing human intelligence.[1][2][3][4]
On May 15, 2024, Ilya Sutskever left OpenAI, the company he co-founded, after a board dispute in which he voted to fire Sam Altman amid concerns over communication and trust.[5][6] Sutskever and others also believed that OpenAI was neglecting its original focus on safety in favor of pursuing commercialization opportunities.[7][8]
On June 19, 2024, Sutskever posted on X that he was starting SSI Inc. alongside Daniel Levy and Daniel Gross, with the goal of safely developing superintelligent AI.[9][10] The company, composed of a small team, is split between Palo Alto, California, and Tel Aviv, Israel.[11]
In September 2024, SSI revealed it had raised $1 billion from venture capital firms including SV Angel, DST Global, Sequoia Capital, and Andreessen Horowitz. The company said the funds would be used to acquire additional computing power and hire top researchers in the field. SSI is currently estimated to be valued at $5 billion.[11]
Geoffrey Everest Hinton is a British-Canadian computer scientist, cognitive scientist, and cognitive psychologist known for his work on artificial neural networks, which earned him the title of the "Godfather of AI".
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems, whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.
Anthropic PBC is a U.S.-based artificial intelligence (AI) public-benefit startup founded in 2021. It researches and develops AI to "study their safety properties at the technological frontier" and use this research to deploy safe, reliable models for the public. Anthropic has developed a family of large language models (LLMs) named Claude as a competitor to OpenAI's ChatGPT and Google's Gemini.
Samuel Harris Altman is an American entrepreneur and investor best known as the chief executive officer of OpenAI since 2019. He is also the chairman of clean energy companies Oklo Inc. and Helion Energy. Altman is considered to be one of the leading figures of the AI boom. He dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, Altman joined Y Combinator, a startup accelerator, and was its president from 2014 to 2019.
Bret Steven Taylor is an American computer programmer and entrepreneur. He is most notable for leading the team that co-created Google Maps and for his tenures as the CTO of Facebook, as the chairman of Twitter, Inc.'s board of directors prior to its acquisition by Elon Musk, and as the co-CEO of Salesforce. Taylor was additionally one of the founders of FriendFeed and the creator of Quip. Since 2023, he has been the founder of Sierra, chairman of OpenAI, and a board member of Shopify.
Emmett Shear is an American Internet entrepreneur and investor. He is the co-founder of the live video platform Justin.tv and served as chief executive officer of Twitch from its spin-off from Justin.tv until March 2023. In 2011, Shear was appointed as a part-time partner at the venture capital firm Y Combinator. In November 2023, he briefly served as interim CEO of OpenAI.
Daniel Gross is an Israeli-American businessperson who co-founded Cue, led artificial intelligence efforts at Apple, served as a partner at Y Combinator, and is a notable technology investor in companies such as Uber, Instacart, Figma, GitHub, Airtable, Rippling, CoreWeave, Character.ai, Perplexity AI, and others.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
Ilya Sutskever is a Canadian-Israeli-Russian computer scientist who specializes in machine learning.
Wojciech Zaremba is a Polish computer scientist and a founding team member of OpenAI (2016–present), where he leads both the Codex research and language teams. The teams actively work on AI that writes computer code and on creating successors to GPT-3, respectively.
Oriol Vinyals is a Spanish machine learning researcher at DeepMind. He is currently technical lead on Gemini, along with Noam Shazeer and Jeff Dean.
Gregory Brockman is an American entrepreneur, investor and software developer who is a co-founder and currently the president of OpenAI. He began his career at Stripe in 2010, upon leaving MIT, and became their CTO in 2013. He left Stripe in 2015 to co-found OpenAI, where he also assumed the role of CTO.
On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk:
> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
On November 17, 2023, OpenAI's board of directors ousted co-founder and chief executive Sam Altman after the board stated it no longer had confidence in his leadership. The removal was driven by concerns about his handling of artificial intelligence safety and by allegations of abusive behavior. Altman was reinstated on November 22 after pressure from employees and investors.
Helen Toner is an Australian researcher and the director of strategy at Georgetown's Center for Security and Emerging Technology. She was a board member of OpenAI when CEO Sam Altman was fired.
Jan Leike is an AI alignment researcher who has worked at DeepMind and OpenAI. He joined Anthropic in May 2024.
Leopold Aschenbrenner is an artificial intelligence (AI) researcher. He was part of OpenAI's "Superalignment" team before he was fired in April 2024 over an alleged information leak. He has published a popular essay called "Situational Awareness" about the emergence of artificial general intelligence and related security risks.