Dan Hendrycks | |
---|---|
Born | 1994 or 1995 (age 29–30) |
Education | University of Chicago (B.S., 2018); UC Berkeley (Ph.D., 2022) |
Scientific career | |
Fields | Machine learning; machine learning safety; machine ethics |
Institutions | UC Berkeley; Center for AI Safety |
Dan Hendrycks (born 1994 or 1995) [1] is an American machine learning researcher. He serves as the director of the Center for AI Safety, a nonprofit organization based in San Francisco, California.
Hendrycks was raised in an evangelical Christian household in Marshfield, Missouri. [2] [3] He received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley, in 2022. [4]
Hendrycks' research focuses on machine learning safety, machine ethics, and robustness.
He credits the 80,000 Hours program, which is linked to the effective altruism (EA) movement, with steering his career towards AI safety, though he has denied being an advocate for EA. [2]
Hendrycks was the lead author of the 2016 paper that introduced the GELU activation function, [5] and of the 2020 paper that introduced the language model benchmark MMLU (Massive Multitask Language Understanding). [6] [7]
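As defined in that paper, GELU weights its input x by the standard Gaussian cumulative distribution function Φ, which can be written in terms of the error function:

$$\operatorname{GELU}(x) = x\,\Phi(x) = \frac{x}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]$$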
In February 2022, Hendrycks co-authored recommendations for the US National Institute of Standards and Technology (NIST) to inform the management of risks from artificial intelligence. [8] [9]
In September 2022, Hendrycks wrote a paper providing a framework for analyzing the impact of AI research on societal risks. [10] [11] He later published a paper in March 2023 examining how natural selection and competitive pressures could shape the goals of artificial agents. [12] [13] [14] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risks: malicious use, AI race dynamics, organizational risks, and rogue AI agents. [15] [16]
Hendrycks is the safety adviser of xAI, an AI startup founded by Elon Musk in 2023. To avoid potential conflicts of interest, he receives a symbolic one-dollar salary and holds no equity in the company. [1] [17] In November 2024, he also joined Scale AI as an advisor, again for a one-dollar salary. [18] Hendrycks is the creator of Humanity's Last Exam, a benchmark for evaluating the capabilities of large language models, which he developed in collaboration with Scale AI. [19] [20]
In 2024, Hendrycks published a 568-page book titled "Introduction to AI Safety, Ethics, and Society", based on courseware he had previously developed. [21]