Jaan Tallinn | |
---|---|
Born | 14 February 1972 [1], Tallinn, Estonia |
Education | University of Tartu (BSc) |
Occupation(s) | Programmer, investor, philanthropist |
Known for | Kazaa, Skype, existential risk research |
Jaan Tallinn (born 14 February 1972) is an Estonian billionaire computer programmer and investor [2] [3] known for his participation in the development of Skype and the file-sharing application Kazaa, built on the FastTrack protocol. [4]
Recognized as a prominent figure in the field of artificial intelligence, Tallinn is a leading investor and advocate for AI safety.
He was a Series A investor and board member at DeepMind (later acquired by Google) alongside Elon Musk, Peter Thiel and other early supporters. [5] Tallinn also led the Series A funding round for Anthropic, an AI safety-focused company where he is now a board observer. [6]
Tallinn is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, in the United Kingdom [7] [8] and the Future of Life Institute in Cambridge, Massachusetts, in the United States. [9] [10] [11] [12]
Tallinn graduated from the University of Tartu in Estonia in 1996 with a BSc in theoretical physics; his thesis considered travelling interstellar distances using warps in spacetime.
Tallinn founded Bluemoon in Estonia with schoolmates Ahti Heinla and Priit Kasesalu. Bluemoon's Kosmonaut, released in 1989 (and remade in 1993 as SkyRoads), became the first Estonian game to be sold abroad and earned the company US$5,000 (~$12,290 in 2023). By 1999, Bluemoon faced bankruptcy, and its founders took remote jobs with the Swedish telecommunications company Tele2 at a salary of US$330 (~$604 in 2023) each per day. The Tele2 project, "Everyday.com", was a commercial flop. Subsequently, while working as a stay-at-home father, Tallinn developed FastTrack and Kazaa for Niklas Zennström and Janus Friis (formerly of Tele2). Kazaa's peer-to-peer technology was later repurposed to power Skype, which launched around 2003. Tallinn sold his shares in Skype in 2005, when it was purchased by eBay. [13] [8]
In 2014, he invested in Undo, a company developing reversible debugging software for application development. [14] He also made an early investment in DeepMind, which was purchased by Google in 2014 for $600 million (~$761 million in 2023). [15] Other investments include Faculty, a British AI startup focused on tracking terrorists, [16] and Pactum, an "autonomous negotiation" startup based in California and Estonia. [17]
According to sources cited by The Wall Street Journal, Tallinn lent Sam Bankman-Fried about $100 million (~$120 million in 2023) and had recalled the loan by 2018. [18]
As of 2019, Tallinn is married and has six children. [8]
Tallinn is a participant in and donor to the effective altruism movement. [22] [23] Since 2015, he has donated over a million dollars to the Machine Intelligence Research Institute. [24] His initial donation when co-founding the Centre for the Study of Existential Risk in 2012 was around $200,000 (~$262,438 in 2023). [8]
Tallinn strongly promotes the study of existential risk and has given numerous talks on this topic. [25] His main worries are related to artificial intelligence, unknowns coming from technological development, synthetic biology and nanotechnology. [26] [27] He believes humanity is not spending enough resources on long-term planning and mitigating threats that could wipe us out as a species. [28] He has been a supporter of the Rationalist movement. [29] He has also contributed to Chatham House, supporting their work on the nuclear threat.
His views on the AI alignment problem have been influenced by the writings of Eliezer Yudkowsky. Tallinn recalls that "the overall idea that caught my attention that I never had thought about was that we are seeing the end of an era during which the human brain has been the main shaper of the future". [30] He says he has yet to meet anyone working at AI labs who thinks the risk of training the next-generation model "blowing up the planet" is less than 1%. [31]
When employees of OpenAI left to form Anthropic, primarily out of concern that OpenAI was not focused enough on AI safety, Tallinn invested in the new company. However, he was unsure whether he had made the right decision, arguing that "on the one hand, it's great to have this safety-focused thing. On the other hand, this is proliferation". Tallinn praised Anthropic for having a greater safety focus than other AI companies, but said "that doesn't change the fact that they're dealing with dangerous stuff and I'm not sure if they should be. I'm not sure if anyone should be". [32]
In March 2023, Tallinn signed an open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", [33] [34] and in May, he signed a statement from the Center for AI Safety which read "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". [35] [36]
Tallinn learned the importance of feedback loops the hard way, after seeing the demise of one of his startups, the medical consulting firm MetaMed.