Amanda Askell

Spouse: William MacAskill (divorced)
Awards: Time 100 AI (2024)
Education: University of Oxford (BPhil), New York University (PhD)
Thesis: Pareto Principles in Infinite Ethics (2018)
Notable works: Constitutional AI framework
Website: askell.io

Amanda Askell is a Scottish philosopher and AI researcher. She has worked at Anthropic since 2021, where she heads the personality alignment team, and has played a central role in the development of Claude's personality and constitution.[1] In 2024, she was named to the Time 100 AI list.[2] She previously worked at OpenAI, which she left over concerns that the company was not sufficiently prioritizing AI safety.[3][4] She has published over 60 papers, which have received over 170,000 citations.[5]

Early life and education

Askell received a BPhil in philosophy from the University of Oxford[6] and a PhD in philosophy from New York University in 2018.[4] Her doctoral thesis argues that rankings of worlds containing infinitely many agents, when constrained by certain plausible axioms, create puzzles for a wide range of ethical theories.[7]

Career

OpenAI (2018–2021)

After completing her PhD, Askell joined OpenAI in November 2018 as a research scientist on the policy team.[8] At OpenAI, she focused on how organizations can avoid adversarial races in AI development, as well as on the intersection of policy questions and AI safety.[8] She left OpenAI in February 2021, reportedly due to safety concerns.[3]

Anthropic (2021–present)

Askell joined Anthropic in March 2021 as a Member of Technical Staff, focusing on alignment and fine-tuning.[9] She currently leads the personality alignment team, where she is responsible for training Anthropic's Claude model to exhibit positive character traits, such as curiosity, and for developing new techniques for model fine-tuning.[2]

Research

Moral self-correction

In a 2023 paper co-authored with Deep Ganguli, Askell explored "moral self-correction" in large language models: the capacity of these systems to reduce harmful outputs when given natural language instructions to do so. The research tested whether models trained with reinforcement learning from human feedback (RLHF) could avoid stereotyping and discrimination without being provided explicit definitions of these concepts or the metrics used to evaluate them.[10]

The study found that this capability emerged at 22 billion parameters and improved with both model size and RLHF training. Using three experimental benchmarks, the researchers demonstrated that natural-language instructions such as "Please ensure that your answer is unbiased and does not rely on stereotypes" substantially reduced biased outputs in models of sufficient scale. The results revealed that larger models can follow complex instructions and learn normative concepts like stereotyping and discrimination from training data.[10][11]
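
The intervention the study tested is simple enough to sketch in code. The Python fragment below is an illustration only, not the paper's actual evaluation harness: the query_model callable is a hypothetical stand-in for any language model API, and the instruction string is the one quoted above.

    # Illustrative sketch of instruction-based moral self-correction.
    # query_model is a hypothetical callable mapping a prompt string to a
    # model's answer string; it stands in for any real model API.

    INSTRUCTION = (
        "Please ensure that your answer is unbiased and does not "
        "rely on stereotypes."
    )

    def build_prompts(question: str) -> dict:
        """Return the baseline prompt and the instruction-augmented variant."""
        return {
            "baseline": question,
            "with_instruction": f"{question}\n\n{INSTRUCTION}",
        }

    def compare_conditions(question: str, query_model) -> dict:
        """Collect the model's answers under both conditions for bias scoring."""
        return {name: query_model(prompt)
                for name, prompt in build_prompts(question).items()}

In the study's setup, answers gathered with and without the instruction were scored with bias metrics across the benchmarks; the reported effect is the gap between the two conditions as model scale and RLHF training increase.[10]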

Constitutional AI

Askell has been a key contributor to the development of Constitutional AI (CAI), a method for training AI systems to meet standards of harmlessness and helpfulness using AI feedback rather than extensive human oversight.[12] The approach involves providing AI models with a set of principles, or "constitution", to guide their behavior, allowing them to critique and revise their own responses based on these principles.[13]
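
The critique-and-revision step at the heart of this approach can be sketched in a few lines. The Python below is a minimal illustration, assuming a hypothetical query_model callable and a toy two-principle constitution; the published method also trains on the revised answers with supervised learning and a reinforcement learning stage driven by AI feedback.[12]

    # Minimal sketch of Constitutional AI's critique-and-revision loop.
    # query_model is a hypothetical callable wrapping a language model;
    # CONSTITUTION is a toy stand-in for the real list of principles.

    CONSTITUTION = [
        "Choose the response that is least harmful.",
        "Choose the response that is most helpful and honest.",
    ]

    def critique_and_revise(prompt, query_model, principles=CONSTITUTION):
        """Draft a response, then critique and rewrite it per each principle."""
        response = query_model(prompt)
        for principle in principles:
            critique = query_model(
                f"Principle: {principle}\nResponse: {response}\n"
                "Point out any way the response conflicts with the principle."
            )
            response = query_model(
                f"Principle: {principle}\nResponse: {response}\n"
                f"Critique: {critique}\n"
                "Rewrite the response so that it satisfies the principle."
            )
        return response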

Askell is the primary author of the latest version of Claude's constitution, released in January 2026, and wrote the majority of its text.[14][15] The document is designed to address the growing capabilities and emerging risks of advanced AI models.[1][16] She has described her work as focused on helping models "understand and grapple with the constitution" through synthetic data generation and reinforcement learning techniques.[1]

Personal life

Askell was married to philosopher William MacAskill.[17][18] She is a member of Giving What We Can.[19]

References

  1. Sullivan, Mark (2026-01-22). "A Q&A with Amanda Askell, the lead author of Anthropic's new 'constitution' for AIs". Fast Company. Archived from the original on 2026-01-23. Retrieved 2026-01-24.
  2. Perrigo, Billy (2024-09-05). "Amanda Askell". Time.
  3. "Time 100 AI list contains at least 5 people who quit OpenAI due to safety concerns". 2024-09-09. Retrieved 2026-01-24.
  4. "Philosophy Department Graduate Placement Record". New York University. Retrieved 2026-01-24.
  5. "Amanda Askell". Google Scholar. Retrieved 2026-01-24.
  6. "Amanda Askell". Berkman Klein Center for Internet & Society. Harvard University. Retrieved 2026-01-28.
  7. Askell, Amanda (2018). Pareto Principles in Infinite Ethics (PDF) (PhD thesis). New York University.
  8. Wiblin, Robert (2019-03-19). "Askell, Brundage & Clark on whether policy has a hope of keeping up with AI advances" (Podcast). 80,000 Hours Podcast. No. 54. Retrieved 2026-01-28.
  9. "Amanda Askell - Member Of Technical Staff at Anthropic". The Org. Retrieved 2026-01-28.
  10. Ganguli, Deep; Askell, Amanda; Schiefer, Nicholas; Liao, Thomas; Lukošiūtė, Kamilė; Chen, Anna; Goldie, Anna; Mirhoseini, Azalia (2023-02-15). "The Capacity for Moral Self-Correction in Large Language Models". arXiv:2302.07459 [cs.CL].
  11. Knight, Will (2023-03-20). "Language models may be able to self-correct biases—if you ask them to". MIT Technology Review. Retrieved 2026-01-28.
  12. Bai, Yuntao; Kadavath, Saurav; Kundu, Sandipan; Askell, Amanda (2022-12-15). "Constitutional AI: Harmlessness from AI Feedback". arXiv:2212.08073 [cs.CL].
  13. Edwards, Benj (2023-05-09). "AI gains "values" with Anthropic's new Constitutional AI chatbot approach". Ars Technica. Retrieved 2026-01-29.
  14. Samuel, Sigal (2026-01-28). "Claude has an 80-page "soul document." Is that enough to make it good?". Vox. Retrieved 2026-01-28.
  15. "Claude's Constitution". Anthropic. Retrieved 2026-01-28.
  16. Ostrovsky, Nikita; Perrigo, Billy (2026-01-21). "How Do You Teach an AI to Be Good? Anthropic Just Published Its Answer". TIME. Retrieved 2026-01-27.
  17. Bajekal, Naina (2022-08-10). "Want to Do More Good? This Movement Might Have the Answer". Time. Retrieved 2026-01-28.
  18. Levy, Steven (2025-03-28). "If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born". Wired. Retrieved 2026-01-28.
  19. "Members". Giving What We Can. Retrieved 2026-01-28.