Hanna Wallach | |
---|---|
Wallach in 2016 | |
Alma mater | University of Cambridge, University of Edinburgh |
Known for | Computational social science, machine learning, fairness in artificial intelligence |
Scientific career | |
Fields | Computer science |
Institutions | Microsoft Research, University of Massachusetts Amherst |
Thesis | Structured topic models for language (2008) |
Website | Personal website |
Hanna Megan Wallach (born 1979) is a computational social scientist and partner research manager at Microsoft Research. Her work makes use of machine learning models to study the dynamics of social processes. Her current research focuses on issues of fairness, accountability, transparency, and ethics as they relate to AI and machine learning.
Wallach graduated with a BA in Computer Science from Newnham College, Cambridge, in 2001. [1] [2] She moved to the University of Edinburgh for her graduate studies, where she focused on cognitive science and machine learning. Wallach then completed her doctoral research at the University of Cambridge; her thesis, completed in 2008, considered structured topic models for language.
Her early research considered the development of natural language processing methods that analyse the structure and content of social processes. [3] Wallach explained that social interactions have several things in common: structure (who is involved in the interaction), content (the information that is shared during or arises from the interaction) and dynamics (how the structure and content change over time). [4] She worked alongside journalists and computer scientists to better understand how organisations function. In 2007 she joined the University of Massachusetts Amherst, where she was made Assistant Professor in 2010. [2]
At Microsoft Research, Wallach investigates fairness and transparency in machine learning. In 2020 she worked with machine learning practitioners from across the tech sector to create an artificial intelligence ethics checklist. [5] The checklist aims to provide clear guidelines for the ethical development of artificial intelligence systems. [6]
Wallach is a competitive roller derby player. [9] She is an advocate for the improved representation of women working in computer science. She co-founded the now-annual Women in Machine Learning workshop, [13] the Debian Women project [14] and the GNOME Outreach Program for Women (now Outreachy). [15] [16]