Effective Altruism Global

Panel with Devon Fritz, Joan Gass, Jona Glade, Meg Tong, and Anneke Pogarell during EA Global 2021

Effective Altruism Global, abbreviated EA Global or EAG, is a series of philanthropy conferences that focuses on the effective altruism movement.[1] The conferences are run by the Centre for Effective Altruism.[2] Huffington Post editor Nico Pitney described the events as a gathering of "nerd altruists" that was "heavy on people from technology, science, and analytical disciplines".[3]


History

The first Effective Altruism Summit was hosted in 2013.[4]

In 2015, there were three main EA Global events. The largest was a three-day conference that took place on the Google campus in Mountain View, California, with speakers including entrepreneur Elon Musk, computer scientist Stuart J. Russell, and Oxford philosophy professor William MacAskill. There were also conferences in Oxford and Melbourne. According to MacAskill, the conferences improved coordination and ideological diversity within effective altruism. Talks covered subjects such as global poverty, animal advocacy, cause prioritization research, and policy change. There were also workshops on career choice, Q&A sessions, and panels on running local effective altruism chapters.[5]

Panel with Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell during EA Global 2015

One of the key events of the Google conference was a moderated panel on existential risk from artificial general intelligence.[6] Panel member Stuart Russell stated that AI research should be about "building intelligent systems that benefit the human race".[5] Vox writer Dylan Matthews, while praising some aspects of the conference, criticized its perceived focus on existential risk, potentially at the expense of more mainstream causes like fighting extreme poverty.[7]

Since 2016, conferences have taken place at Harvard University, the University of California, Berkeley, the Palace of Fine Arts, Imperial College London, and other venues in Boston, Berkeley, San Francisco, and London. Because of the COVID-19 pandemic, no in-person events were held in 2020; instead, a number of virtual conferences and programs were offered. In-person activity resumed in late October 2021, with an event held at The Brewery in London.[8] Three EA Global events are planned for 2022.[9]


Related Research Articles

Max Tegmark (Swedish-American cosmologist)

Max Erik Tegmark is a Swedish-American physicist, cosmologist and machine learning researcher. He is a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute. He is also a scientific director at the Foundational Questions Institute and a supporter of the effective altruism movement.

Jaan Tallinn (Estonian programmer and investor)

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and the file-sharing application FastTrack/Kazaa. He is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) and the Future of Life Institute.

Future of Humanity Institute (Oxford interdisciplinary research centre)

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord.

GiveWell is an American non-profit charity assessment and effective altruism-focused organization. GiveWell focuses primarily on the cost-effectiveness of the organizations that it evaluates, rather than traditional metrics such as the percentage of the organization's budget that is spent on overhead.

Liv Boeree (British poker player, television presenter, speaker, and writer)

Olivia "Liv" Boeree is a British science communicator, television presenter and former professional poker player. She is a World Series of Poker and European Poker Tour champion, and is the only female player in history to win both a WSOP bracelet and an EPT event. Boeree is a three-time winner of the Global Poker Index European Female Player of the Year award. As of September 2021, having retired in late 2019, Boeree still ranks among the top ten women in poker history in all-time money winnings.

Global catastrophic risk (potentially harmful worldwide events)

A global catastrophic risk or a doomsday scenario is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an "existential risk."

LessWrong (rationality-focused community blog)

LessWrong is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.

Toby Ord (Australian philosopher, born 1979)

Toby David Godfrey Ord is an Australian philosopher. In 2009 he founded Giving What We Can, an international society whose members pledge to donate at least 10% of their income to effective charities, and he is a key figure in the effective altruism movement, which promotes using reason and evidence to help the lives of others as much as possible. He is a senior research fellow at the University of Oxford's Future of Humanity Institute, where his work focuses on existential risk. His book on the subject, The Precipice: Existential Risk and the Future of Humanity, was published in March 2020.

Effective altruism is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". People who pursue the goals of effective altruism, called effective altruists, often choose careers based on the amount of good that the career achieves while donating to charities based on maximising impact. The movement developed during the 2000s, and the name effective altruism was coined in 2011. Prominent philosophers influential to the movement include Peter Singer, Toby Ord, and William MacAskill. Several books and many articles about the movement have since been published, and the Effective Altruism Global conference has been held since 2013. As of 2022, several billion dollars have been committed to effective altruist causes.

80,000 Hours (non-profit organisation that researches which jobs have the most positive social impact)

80,000 Hours is a London-based nonprofit organisation that conducts research on which careers have the largest positive social impact and provides career advice based on that research. It provides this advice on its website and podcast, and through one-on-one advice sessions. The organisation is part of the Centre for Effective Altruism, affiliated with the Oxford Uehiro Centre for Practical Ethics. The organisation's name refers to the typical amount of time someone spends working over a lifetime.

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.

Earning to give involves deliberately pursuing a high-earning career for the purpose of donating a significant portion of earned income, typically motivated by effective altruism. Advocates of earning to give contend that maximizing the amount one can donate to charity is an important consideration for individuals when deciding what career to pursue.

William MacAskill (Scottish philosopher and ethicist)

William David MacAskill is a Scottish philosopher, author, and one of the originators of the effective altruism movement. He is an Associate Professor in Philosophy and Research Fellow at the Global Priorities Institute at the University of Oxford, and Director of the Forethought Foundation for Global Priorities Research. MacAskill is also a co-founder of Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours. He is the author of the 2015 book Doing Good Better, the 2022 book What We Owe the Future, and co-author of the 2020 book Moral Uncertainty.

Future of Life Institute (international nonprofit research institute)

The Future of Life Institute (FLI) is a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its advisors include entrepreneur Elon Musk.

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

Doing Good Better (2015 book about effective altruism by William MacAskill)

Doing Good Better: Effective Altruism and How You Can Make a Difference is a 2015 book by William MacAskill that serves as a primer on effective altruism, a movement that seeks to do the most good. It was published by Random House and released on 28 July 2015.

Centre for Effective Altruism (non-profit effective altruist organization)

The Centre for Effective Altruism (CEA) is an Oxford-based charity that builds and supports the effective altruism community. It was founded in 2012 by William MacAskill and Toby Ord, both philosophers at the University of Oxford. CEA is part of Effective Ventures, a federation of projects working to have a large positive impact in the world.

Igor Kurganov (Russian poker player)

Igor Kurganov is a Russian professional poker player, angel investor and philanthropist. He is the co-founder of Raising for Effective Giving, a philanthropic organisation that promotes a rational approach to philanthropy often referred to as effective altruism, and provides advice on choosing charities based on certain criteria.

Charity assessment is the process of analyzing the effectiveness, often in financial terms, of a non-profit organization. Historically, charity evaluators have focused on how much of the contributed funds are used for the purpose(s) claimed by the charity, while more recently some evaluators have placed an emphasis on the cost-effectiveness of charities.

Longtermism (philosophical view that prioritises the long-term future)

Longtermism is an ethical stance which gives priority to improving the long-term future. It is an important concept in effective altruism and serves as a primary motivation for efforts to reduce existential risks to humanity.

References

  1. "Centre for Effective Altruism Announces Lineup for Effective Altruism Global 2015". Pitchengine. Centre for Effective Altruism. Retrieved 4 April 2016.
  2. "EA Global". EA Global. Centre for Effective Altruism. Retrieved 4 February 2022.
  3. Pitney, Nico (16 July 2015). "Elon Musk To Address 'Nerd Altruists' At Google HQ". Huffington Post. Retrieved 4 April 2016.
  4. Tallinn, Jaan (13 November 2013). "Jaan Tallinn's Keynote - Effective Altruism Summit 2013". Exponential Times. Archived from the original on 16 August 2019. Retrieved 25 November 2022.
  5. Guan, Melody Y. (3 August 2015). "Elon Musk, Superintelligence, and Maximizing Social Good: A Weekend at History's Largest Gathering of 'Effective Altruists'". Huffington Post. Retrieved 4 April 2016.
  6. Townsend, Tess (3 August 2015). "Elon Musk: AI Is Going to Happen. Let's Prepare For It". Inc. Magazine. Retrieved 4 April 2016.
  7. Matthews, Dylan (10 August 2015). "I spent a weekend at Google talking with nerds about charity. I came away … worried". Vox. Retrieved 4 April 2016.
  8. Effective Altruism Global (2021). "Events". Effective Altruism Global. Retrieved 24 December 2021.
  9. Vaintrob, Lizka; Wiley, Amy (15 December 2021). "EA conferences in 2022: save the dates". Effective Altruism Forum. Retrieved 24 December 2021.