| Brian Christian | |
| --- | --- |
| Christian in 2019 | |
| Born | 1984 (age 39–40), Wilmington, Delaware, US |
| Language | English |
| Alma mater | Brown University (AB); University of Washington (MFA); University of Oxford (DPhil student) |
| Notable works | The Most Human Human (2011); Algorithms to Live By (2016); The Alignment Problem (2020) |
| Website | brianchristian |
Brian Christian (born 1984 in Wilmington, Delaware) is an American non-fiction author, poet, programmer and researcher, [1] [2] best known for a bestselling series of books about the human implications of computer science, including The Most Human Human (2011), [3] Algorithms to Live By (2016), [4] and The Alignment Problem (2020). [5]
Christian competed as a "confederate" in the 2009 Loebner Prize competition, [1] attempting to seem "more human" than the humans taking the test, and succeeded. [6] [7] The book he wrote about the experience, The Most Human Human, became a Wall Street Journal best-seller, [8] a New York Times editors' choice, [9] and a New Yorker favorite book of the year. [10] He was interviewed by Jon Stewart on The Daily Show on March 8, 2011. [11]
In 2016, Christian collaborated with cognitive scientist Tom Griffiths on the book Algorithms to Live By, which became the #1 bestselling nonfiction book on Audible [12] and was named an Amazon best science book of the year [13] and an MIT Technology Review best book of the year. [14]
His awards and honors include publication in The Best American Science and Nature Writing and fellowships at the Bread Loaf Writers' Conference, Yaddo, and MacDowell. In 2016 Christian was named a Laureate of the San Francisco Public Library. [15]
In 2020, Christian published his third book of nonfiction, The Alignment Problem, which looks at the rise of the ethics and safety movement in machine learning through historical research and the stories of approximately 100 researchers. The Alignment Problem was named a finalist for the Los Angeles Times Book Prize for best science and technology book of the year. [16] The New York Times in 2024 named The Alignment Problem one of the "5 Best Books About Artificial Intelligence," writing: "If you're going to read one book on artificial intelligence, this is the one." [17] For his work on The Alignment Problem, Christian received the Eric and Wendy Schmidt Award for Excellence in Science Communication, given by the National Academies of Sciences, Engineering, and Medicine in partnership with Schmidt Futures. [18]
Christian is a native of Little Silver, New Jersey. [19] He attended High Technology High School in Lincroft, New Jersey. [20]
Christian holds a degree in computer science and philosophy from Brown University and an MFA in poetry from the University of Washington. [3]
Since 2012, Christian has been a visiting scholar at the University of California, Berkeley. At UC Berkeley, he is affiliated with several research groups, including the Institute of Cognitive and Brain Sciences, [21] the Center for Information Technology Research in the Interest of Society, [22] the Center for Human-Compatible Artificial Intelligence, [23] and the Simons Institute for the Theory of Computing. [24] In 2023, he was awarded a Clarendon Scholarship to study experimental psychology at Lincoln College, Oxford. [25]
In 2010, film director Michael Langan adapted Christian's poem "Heliotropes" into a short film of the same name, which was published in the final issue of Wholphin magazine. [26]
In 2014, Vanity Fair magazine reported that The Most Human Human was the "night-table reading" of Elon Musk. [27]
Reading The Most Human Human inspired the playwright Jordan Harrison to write the play Marjorie Prime. [28] The play was a finalist for the Pulitzer Prize [29] and was adapted into a feature film in 2017.
The Most Human Human also inspired filmmaker Tommy Pallotta's 2018 documentary More Human Than Human, in which Christian appears. [30]
In 2018, Algorithms to Live By was featured as an answer on the game show Jeopardy!. [31]
In 2021, Microsoft CEO Satya Nadella wrote in Fast Company that The Alignment Problem was one of the "5 books that inspired" him that year. [32]
Writer Peter Brown has cited The Most Human Human as an inspiration for his book series The Wild Robot, which was adapted into the 2024 film of the same name. [33] [34]
Related articles: Artificial intelligence; AI-complete; Technological singularity; The Age of Spiritual Machines; Swarm intelligence; AI takeover; The Age of Intelligent Machines; Ethics of artificial intelligence; Competitions and prizes in artificial intelligence; Hartmut Neven; Eric Horvitz; Machine ethics; Martin Ford; Stephanie Dinkins; Regulation of algorithms; Regulation of artificial intelligence; VITAL; Thomas L. Griffiths; The Alignment Problem: Machine Learning and Human Values; Atlas of AI.