Jaan Tallinn

Born: 14 February 1972 (age 52), Tallinn, Estonia [1]
Education: University of Tartu (BSc)
Occupation(s): programmer, investor, philanthropist
Known for: Kazaa, Skype, existential risk

Jaan Tallinn (born 14 February 1972) is an Estonian billionaire computer programmer and investor [2] [3] known for his participation in the development of Skype and the file-sharing application Kazaa, which was built on the FastTrack protocol. [4]

Recognized as a prominent figure in the field of artificial intelligence, Tallinn is a leading investor and advocate for AI safety.

He was a Series A investor and board member at DeepMind (later acquired by Google) alongside Elon Musk, Peter Thiel and other early supporters. [5] Tallinn also led the Series A funding round for Anthropic, an AI safety-focused company where he is now a board observer. [6]

Tallinn is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, in the United Kingdom [7] [8] and the Future of Life Institute in Cambridge, Massachusetts, in the United States. [9] [10] [11] [12]

Life

Tallinn graduated from the University of Tartu in Estonia in 1996 with a BSc in theoretical physics; his thesis considered travelling interstellar distances using warps in spacetime.

Tallinn founded Bluemoon in Estonia alongside schoolmates Ahti Heinla and Priit Kasesalu. Bluemoon's Kosmonaut, released in 1989 and remade in 1993 as SkyRoads, became the first Estonian game to be sold abroad and earned the company US$5,000 (~$12,290 in 2023). By 1999, Bluemoon faced bankruptcy, and its founders took remote jobs with the Swedish telecommunications company Tele2 at a salary of US$330 (~$604 in 2023) each per day. The Tele2 project, "Everyday.com", was a commercial flop. Subsequently, while working as a stay-at-home father, Tallinn developed FastTrack and Kazaa for Niklas Zennström and Janus Friis (formerly of Tele2). Kazaa's P2P technology was later repurposed to power Skype, launched around 2003. Tallinn sold his shares in Skype in 2005, when it was purchased by eBay. [13] [8]

In 2014, he invested in Undo, a company developing reversible debugging software for app development. [14] He also made an early investment in DeepMind, which was purchased by Google in 2014 for $600 million (~$761 million in 2023). [15] Other investments include Faculty, a British AI startup focused on tracking terrorists, [16] and Pactum, an "autonomous negotiation" startup based in California and Estonia. [17]

According to sources cited by the Wall Street Journal, Tallinn loaned Sam Bankman-Fried about $100 million (~$120 million in 2023) and had recalled the loan by 2018. [18]

As of 2019, Tallinn is married and has six children. [8]

Other tenures

Tallinn is a participant in and donor to the effective altruism movement. [22] [23] He has donated over a million dollars to the Machine Intelligence Research Institute since 2015. [24] His initial donation when co-founding the Centre for the Study of Existential Risk in 2012 was around $200,000 (~$262,438 in 2023). [8]

Views

Tallinn strongly promotes the study of existential risk and has given numerous talks on the topic. [25] His main worries relate to artificial intelligence, unknowns arising from technological development, synthetic biology, and nanotechnology. [26] [27] He believes humanity is not spending enough resources on long-term planning and on mitigating threats that could wipe out the human species. [28] He has been a supporter of the Rationalist movement. [29] He has also contributed to Chatham House, supporting its work on the nuclear threat.

His views on the AI alignment problem have been influenced by the writings of Eliezer Yudkowsky. Tallinn recalls that "the overall idea that caught my attention that I never had thought about was that we are seeing the end of an era during which the human brain has been the main shaper of the future". [30] He says he has yet to meet anyone working at an AI lab who thinks the risk of training the next-generation model "blowing up the planet" is less than 1%. [31]

When employees of OpenAI left to form Anthropic, primarily out of concerns that OpenAI was not focused enough on AI safety, Tallinn invested in the new company. However, he was unsure if he had made the right decision, arguing that "on the one hand, it's great to have this safety-focused thing. On the other hand, this is proliferation". Tallinn praised Anthropic for having a greater safety focus than other AI companies, but said "that doesn't change the fact that they're dealing with dangerous stuff and I'm not sure if they should be. I'm not sure if anyone should be". [32]

In March 2023, Tallinn signed an open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", [33] [34] and in May, he signed a statement from the Center for AI Safety which read "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". [35] [36]

References

  1. "Jaan Tallinn, Curriculum Vitae". Tartu Ülikool Sihtasutus. May 2012. Archived from the original on 6 December 2013. Retrieved 6 September 2013.
  2. "Jaan Tallinn at Ambient Sound Investments". University of Cambridge. Retrieved 30 October 2016.
  3. "Billionaires bet on Brussels to save them from AI singularity". Politico. Retrieved 9 August 2022.
  4. "'Building AI is like launching a rocket': Meet the man fighting to stop artificial intelligence destroying humanity". ZDNET. Retrieved 2023-08-20.
  5. "Google's Acquisition Of DeepMind Could Shine A Light On Other British AI Startups". TechCrunch. 28 Jan 2014. Retrieved 16 December 2024.
  6. "Anthropic raises $124 million to build more reliable, general AI systems". Research News. Anthropic. 28 May 2021. Retrieved 16 December 2024.
  7. Lewsey, Fred (25 November 2012). "Humanity's last invention and our uncertain future". Research News. University of Cambridge. Retrieved 28 January 2013.
  8. Hvistendahl, Mara (28 March 2019). "Can we stop AI outsmarting humanity?". The Guardian. Retrieved 29 March 2019.
  9. "Future of Life Institute".
  10. "Elon Musk Donates $10M To Make Sure AI Doesn't Go The Way Of Skynet". Mashable. 2015. Retrieved 21 Jun 2015.
  11. "Elon Musk spends $10 million to stop robot uprising (+video)". Christian Science Monitor. 2015. Retrieved 21 Jun 2015.
  12. "Elon Musk: Future of Life Institute Artificial Intelligence Research Could be Crucial". Bostinno. Retrieved 5 Jun 2015.
  13. ""How can they be so good?": The strange story of Skype". Ars Technica. 3 September 2018. Retrieved 29 March 2019.
  14. "Skype Co-Founder Jaan Tallinn Backs Reversible Debugging Startup Undo Software". TechCrunch. Retrieved 2019-09-10.
  15. Shead, Sam. "The Skype Mafia: Who Are They And Where Are They Now?". Forbes. Retrieved 2019-09-10.
  16. Field, Matthew; Boland, Hannah (29 November 2019). "Guardian venture arm invests millions in terrorist tracking AI start-up". The Telegraph. Retrieved 31 March 2020.
  17. Williams, Joe (2020). "Walmart is about to let machines negotiate contracts with some suppliers, and it's a glimpse into the future of supply chains in a post-coronavirus world". Business Insider. Retrieved 31 March 2020.
  18. Kowsmann, Patricia; Huang, Vicky Ge; Ostroff, Caitlin; Zuckerman, Gregory (31 December 2022). "Troubles at Sam Bankman-Fried's Alameda Began Well Before Crypto Crash". Wall Street Journal. Retrieved 2023-01-02.
  19. "Office of the President press announcement". Archived from the original on 2011-05-14.
  20. Weber, Harrison (1 March 2013). "Peter Thiel-backed MetaMed thinks you should have your own on-demand medical research team". TheNextWeb. Retrieved 4 April 2013.
  21. Clarke, Liat (24 April 2015). "The solution to saving healthcare systems? New feedback loops". Wired.co.uk. Retrieved 24 May 2015. Tallinn learned the importance of feedback loops himself the hard way, after seeing the demise of one of his startups, medical consulting firm Metamed.
  22. "Jaan Tallinn – Effective Altruism". Effective Altruism. Archived from the original on 2021-08-25. Retrieved 2017-07-03.
  23. "Skype inventor Jaan Tallinn wants to use Bitcoin technology to save the world". The Telegraph. Retrieved 2017-07-03.
  24. "Machine Intelligence Research Institute".
  25. "Jaan Tallinn on the Intelligence Stairway". YouTube .
  26. "A Skype founder on biomonitors, existential risk and simulated realities". The Wall Street Journal. 31 May 2013. Retrieved 2014-05-02.
  27. "Existential Risk: A Conversation with Jaan Tallinn". Edge Foundation, Inc. 16 April 2015.
  28. "Skype co-founder Jaan Tallinn on surviving the rise of the machines". Marketplace. 26 December 2012. Retrieved 2014-05-02.
  29. "I'm Jaan Tallinn, co-founder of Skype, Kazaa, CSER and MetaMed. AMA". Reddit. 7 June 2013.
  30. Pinkerton, Byrd (2019-06-19). "He co-founded Skype. Now he's spending his fortune on stopping dangerous AI". Vox. Retrieved 2024-07-24.
  31. Barten, Otto; Meindertsma, Joep (2023-07-20). "An AI Pause Is Humanity's Best Bet For Preventing Extinction". TIME. Retrieved 2024-07-24.
  32. Albergotti, Reed (Apr 28, 2023). "The co-founder of Skype invested in some of AI's hottest startups — but he thinks he failed". Semafor.
  33. "Tech chiefs call on scientists to pause development of AI systems". The Independent. 2023-03-29. Retrieved 2024-07-24.
  34. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-24.
  35. Lomas, Natasha (2023-05-30). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved 2024-07-24.
  36. "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-07-24.