| Rachel Thomas | |
|---|---|
| Alma mater | Duke University (PhD), Swarthmore College |
| Known for | Data ethics, Artificial intelligence |
| Scientific career | |
| Institutions | University of San Francisco, Uber |
Rachel Thomas is an American computer scientist and founding Director of the Center for Applied Data Ethics at the University of San Francisco. Together with Jeremy Howard, she is co-founder of fast.ai. Thomas was selected by Forbes magazine as one of the 20 most incredible women in artificial intelligence.
Thomas grew up in Galveston, Texas. In high school she began programming in C++. Thomas earned her bachelor's degree in mathematics at Swarthmore College in 2005. [1] At Swarthmore she was elected to the Phi Beta Delta honor society. She moved to Duke University for her graduate studies and finished her PhD in mathematics in 2010. [2] Her doctoral research involved a mathematical analysis of biochemical networks. During her doctorate she completed an internship at RTI International where she developed Markov models to evaluate HIV treatment protocols. Thomas joined Exelon as a quantitative analyst, where she scraped internet data and built models to provide information to energy traders. [3]
In 2013 Thomas joined Uber where she developed the driver interface and surge algorithms using machine learning. [4] She then became a teacher at Hackbright Academy, a school for women software engineers. [5]
Thomas joined the University of San Francisco in 2016, where she founded the Center for Applied Data Ethics. [6] [7] There she has studied the rise of deepfakes [8] and bias in machine learning and deep learning.
When Thomas started to develop neural networks, only a few academics were doing so, and she was concerned that there was a lack of sharing of practical advice. [9] While there is considerable recruitment demand for artificial intelligence researchers, Thomas has argued that although these careers have traditionally required a PhD, access to supercomputers and large data sets, these are not essential prerequisites. [9] To overcome this apparent skills gap, Thomas established Practical Deep Learning For Coders, the first university-accredited open-access certificate in deep learning, as well as creating the first open-access machine learning programming library. [10] Thomas and Jeremy Howard co-founded fast.ai, a research laboratory that aims to make deep learning more accessible. [11] Her students have included a Canadian dairy farmer, African doctors and a French mathematics teacher. [4]
Thomas has studied unconscious bias in machine learning, [12] [13] and emphasised that even when race and gender are not explicit input variables in a particular data set, algorithms can become racist and sexist when that information is latently encoded in other variables. [13] [14] Alongside her academic career, Thomas has called for more diverse workforces to prevent bias in systems using artificial intelligence. [9] [15] She believes that there should be more people from historically underrepresented groups working in tech to mitigate some of the harms that certain technologies may cause, as well as to ensure that the systems created benefit all of society. [16] In particular, she is concerned about the retention of women and people of colour in tech jobs. [4] Thomas serves on the Board of Directors of Women in Machine Learning (WiML). [17] She served as an advisor for Deep Learning Indaba, a non-profit which aims to train African people in machine learning. In 2017 she was selected by Forbes magazine as one of 20+ "leading women" in artificial intelligence. [18]
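The latent-encoding effect described above can be shown with a minimal sketch. A decision rule below never reads the protected attribute at all, yet its outcomes differ sharply by group, because an innocuous-looking feature correlates with group membership. The data, feature names and correlation numbers are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical population: the protected attribute `group` is never given
# to the decision rule, but a proxy feature (`neighborhood`) encodes it,
# because 90% of group A and only 10% of group B live in zone 1.
samples = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected attribute (hidden)
    if group == "A":
        neighborhood = 1 if random.random() < 0.9 else 0
    else:
        neighborhood = 1 if random.random() < 0.1 else 0
    samples.append((group, neighborhood))

def approve(neighborhood):
    # A "blind" rule: uses only the proxy feature, never the group.
    return neighborhood == 1

def approval_rate(group):
    picked = [nb for g, nb in samples if g == group]
    return sum(approve(nb) for nb in picked) / len(picked)

print(f"group A approval rate: {approval_rate('A'):.2f}")  # near 0.90
print(f"group B approval rate: {approval_rate('B'):.2f}")  # near 0.10
```

Dropping the protected column from the data set does not remove its influence: any feature correlated with it can reproduce the disparity.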
Thomas has also written on the application of data science and machine learning in medicine. In one article, she outlines uses of machine learning in the medical field and highlights some of the ethical issues involved. The article, titled "Medicine's Machine Learning Problem", was published in the Boston Review with the subheading "As Big Data tools reshape health care, biased datasets and unaccountable algorithms threaten to further disempower patients." [19]
Thomas is concerned about the lack of diversity in AI, and believes that there are many qualified people who are not getting hired. [5] She has particularly focused on the problem of poor retention of women in tech, noting that "41% of women working in tech leave within 10 years. That's over twice the attrition rate for men. And those with advanced degrees, who presumably have more options, are 176% more likely to leave." [5] Thomas believes that AI's "cool and exclusive aura" needs to be broken in order to unlock it for outsiders and to make it accessible to those with non-traditional and non-elite backgrounds. [5]
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.
Machine learning (ML) is an umbrella term for solving problems for which developing algorithms by hand would be cost-prohibitive; instead, machines "discover" their "own" algorithms from data, without being explicitly told what to do by any human-developed algorithm. Recently, generative artificial neural networks have been able to surpass the results of many previous approaches. Machine-learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks.
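The idea of a machine "discovering" its own rule from examples, rather than having a programmer hand-code it, can be illustrated with a minimal sketch. The hidden rule, learning rate and step count below are invented for illustration.

```python
# Instead of hand-coding the rule y = 2x + 1, we let gradient descent
# recover it from example (input, output) pairs alone.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # examples of an unknown rule

w, b = 0.0, 0.0  # model parameters: initially the model knows nothing
lr = 0.01        # learning rate, chosen for illustration

for _ in range(2000):
    # Nudge the parameters downhill on the mean squared error.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to the hidden rule's 2 and 1
```

The program was never told the coefficients 2 and 1; it recovered them from the examples, which is the sense in which the algorithm is "discovered" rather than written.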
Artificial intelligence (AI) has been used in applications to alleviate certain problems throughout industry and academia. AI, like electricity or computers, is a general purpose technology that has a multitude of applications. It has been used in fields of language translation, image recognition, credit scoring, e-commerce and other domains.
Jeremy Howard is an Australian data scientist, entrepreneur, and educator.
Fei-Fei Li is an American computer scientist who was born in China and is known for establishing ImageNet, the dataset that enabled rapid advances in computer vision in the 2010s.
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Deepfakes are synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another. Deepfakes are the manipulation of facial appearance through deep generative methods. While the act of creating fake content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content that can more easily deceive. The main machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, such as autoencoders, or generative adversarial networks (GANs). In turn the field of image forensics develops techniques to detect manipulated images.
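As a rough illustration of the autoencoder idea mentioned above, which trains a network to compress inputs into a small latent code and reconstruct them, here is a minimal linear autoencoder on toy 2-D data. Real deepfake models are vastly larger and non-linear; the data, dimensions and learning rate below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points lying near a line, so one latent number suffices.
X = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0]])
X += rng.normal(scale=0.05, size=X.shape)  # small observation noise

W_enc = rng.normal(scale=0.1, size=(2, 1))  # encoder: 2-D -> 1-D latent
W_dec = rng.normal(scale=0.1, size=(1, 2))  # decoder: 1-D -> 2-D output
lr = 0.01

for _ in range(2000):
    Z = X @ W_enc        # encode each point to a single latent number
    X_hat = Z @ W_dec    # decode the latent back to 2-D
    err = X_hat - X
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction error: {mse:.4f}")  # small: the structure is captured
```

Deepfake pipelines exploit exactly this compress-then-reconstruct structure, for example by sharing one encoder between two faces and swapping the decoders.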
Joy Adowaa Buolamwini is a Ghanaian-American-Canadian computer scientist and digital activist based at the MIT Media Lab. Buolamwini introduces herself as a poet of code, daughter of art and science. She founded the Algorithmic Justice League, an organization that works to challenge bias in decision-making software, using art, advocacy, and research to highlight the social implications and harms of artificial intelligence (AI).
Rumman Chowdhury is a Bangladeshi American data scientist, a business founder, and former Responsible Artificial Intelligence Lead at Accenture. She was born in Rockland County, New York.
Vivienne L’Ecuyer Ming is an American theoretical neuroscientist and artificial intelligence expert. She was named as one of the BBC 100 Women in 2017, and as one of the Financial Times' "LGBT leaders and allies today".
Animashree (Anima) Anandkumar is the Bren Professor of Computing at California Institute of Technology. She is a director of Machine Learning research at NVIDIA. Her research considers tensor-algebraic methods, deep learning and non-convex problems.
Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works on algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of Black researchers working in artificial intelligence (AI). She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).
Cynthia Diane Rudin is an American computer scientist and statistician specializing in machine learning and known for her work in interpretable machine learning. She is the director of the Interpretable Machine Learning Lab at Duke University, where she is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics and bioinformatics. In 2022, she won the Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI) for her work on the importance of transparency for AI systems in high-risk domains.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Olga Russakovsky is an assistant professor of computer science at Princeton University. Her research investigates computer vision and machine learning. She was one of the leaders of the ImageNet Large Scale Visual Recognition challenge and has been recognised by MIT Technology Review as one of the world's top young innovators.
fast.ai is a non-profit research group focused on deep learning and artificial intelligence. It was founded in 2016 by Jeremy Howard and Rachel Thomas with the goal of democratizing deep learning. They do this by providing a massive open online course (MOOC) named "Practical Deep Learning for Coders", which has no prerequisites other than knowledge of the programming language Python.
The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness regarding the use of artificial intelligence (AI) in society and the harms and biases that AI can pose to society. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and promote industry and government action to mitigate against the creation and deployment of biased AI systems. In 2021, Fast Company named AJL as one of the 10 most innovative AI companies in the world.
Black in AI, formally called the Black in AI Workshop, is a technology research organization and affinity group, founded by computer scientists Timnit Gebru and Rediet Abebe in 2017. It started as a conference workshop, later pivoting into an organization. Black in AI increases the presence and inclusion of Black people in the field of artificial intelligence (AI) by creating space for sharing ideas, fostering collaborations, mentorship, and advocacy.
Jordan Harrod is an American research scientist and YouTuber who works on neuroengineering, brain-machine interfaces, and machine learning for medicine. A current graduate student at Harvard and MIT, Harrod also runs a YouTube channel to educate the public about artificial intelligence. As of January 2023, her YouTube channel has over 84 thousand subscribers and her videos have over 2 million total views.
Jake Elwes is a British media artist. Their practice is the exploration of artificial intelligence (AI), queer theory and technical biases. They are known for using AI to create art in mediums such as video, performance and installation. Their work on queering technology addresses issues caused by the normative biases of artificial intelligence.