Effective Altruism Global, abbreviated EA Global or EAG, is a series of annual philanthropy conferences focused on the effective altruism movement.[1] They are organised by the Centre for Effective Altruism.[2] Huffington Post editor Nico Pitney described one event as a gathering of "nerd altruists" that was "heavy on people from technology, science, and analytical disciplines".[3]
The first Effective Altruism Summit was held in 2013.[4]
In 2015, there were three main EA Global events. The largest was a three-day conference held on the Google campus in Mountain View, California, with speakers including entrepreneur Elon Musk, computer scientist Stuart J. Russell, and Oxford philosophy professor William MacAskill. Further conferences were held in Oxford and Melbourne. According to MacAskill, the conferences improved coordination and ideological diversity within effective altruism. Talks covered subjects such as global poverty, animal advocacy, cause prioritization research, and policy change; there were also workshops on career choice, Q&A sessions, and panels on running local effective altruism chapters.[5]
One of the key events of the Google conference was a moderated panel on existential risk from artificial general intelligence.[6] Panel member Stuart Russell stated that AI research should be about "building intelligent systems that benefit the human race".[5] Vox writer Dylan Matthews, while praising some aspects of the conference, criticized its perceived focus on existential risk, potentially at the expense of more mainstream causes like fighting extreme poverty.[7]