Rachel Thomas (academic)

Rachel Thomas speaks at the Linux Foundation in 2018

Alma mater: Duke University (PhD); Swarthmore College
Known for: Data ethics; artificial intelligence
Institutions: University of San Francisco; Uber
Rachel Thomas is an American computer scientist and founding Director of the Center for Applied Data Ethics at the University of San Francisco. Together with Jeremy Howard, she is co-founder of fast.ai. Thomas was selected by Forbes magazine as one of the 20 most incredible women in artificial intelligence.

Early life and education

Thomas grew up in Galveston, Texas. In high school she began programming in C++. Thomas earned her bachelor's degree in mathematics at Swarthmore College in 2005. [1] At Swarthmore she was elected to the Phi Beta Kappa honor society. She moved to Duke University for her graduate studies and finished her PhD in mathematics in 2010. [2] Her doctoral research involved a mathematical analysis of biochemical networks. During her doctorate she completed an internship at RTI International, where she developed Markov models to evaluate HIV treatment protocols. Thomas then joined Exelon as a quantitative analyst, scraping internet data and building models to provide information to energy traders. [3]

In 2013 Thomas joined Uber, where she used machine learning to work on the driver interface and surge pricing algorithms. [4] She then became a teacher at Hackbright Academy, a school for women software engineers. [5]

Research and career

Thomas joined the University of San Francisco in 2016, where she founded the Center for Applied Data Ethics. [6] [7] There she has studied the rise of deepfakes [8] and bias in machine learning and deep learning.

When Thomas started to develop neural networks, only a few academics were doing so, and she was concerned that there was a lack of sharing of practical advice. [9] Whilst there is considerable recruitment demand for artificial intelligence researchers, Thomas has argued that even though these careers have traditionally required a PhD, access to supercomputers and large data sets, these are not essential prerequisites. [9] To overcome this apparent skills gap, Thomas established Practical Deep Learning For Coders, the first university-accredited open-access certificate in deep learning, as well as creating the first open-access machine learning programming library. [10] Thomas and Jeremy Howard co-founded fast.ai, a research laboratory that aims to make deep learning more accessible. [11] Her students have included a Canadian dairy farmer, African doctors and a French mathematics teacher. [4]

Thomas has studied unconscious bias in machine learning, [12] [13] and has emphasised that even when race and gender are not explicit input variables in a particular data set, algorithms can become racist and sexist when that information is latently encoded in other variables. [13] [14] Alongside her academic career, Thomas has called for more diverse workforces to prevent bias in systems using artificial intelligence. [9] [15] She believes that there should be more people from historically underrepresented groups working in tech, both to mitigate some of the harms that certain technologies may cause and to ensure that the systems created benefit all of society. [16] In particular, she is concerned about the retention of women and people of colour in tech jobs. [4] Thomas serves on the Board of Directors of Women in Machine Learning (WiML). [17] She served as an advisor for Deep Learning Indaba, a non-profit that works to train African people in machine learning. In 2017 she was selected by Forbes magazine as one of 20 "incredible women" in artificial intelligence. [18]
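The latent-encoding problem Thomas describes can be illustrated with a small, hypothetical sketch (not drawn from her own work): even when a protected attribute such as race or gender is excluded from a model's inputs, a correlated "proxy" feature can carry the same information into its decisions. Here a toy zip code feature stands in for the protected attribute; the data, names and the naive rate-based model are all invented for illustration.

```python
# Toy applicant records: (zip_code, approved). The protected attribute
# "group" is never given to the model, but in this synthetic data group A
# lives mostly in zip 94100 and group B mostly in zip 94200, so zip code
# acts as a proxy for group membership.
history = [
    ("94100", 1), ("94100", 1), ("94100", 1), ("94100", 0),
    ("94200", 0), ("94200", 0), ("94200", 0), ("94200", 1),
]

def approval_rate(records, zip_code):
    """Naive model: approve at the historical rate for the zip code."""
    matching = [approved for z, approved in records if z == zip_code]
    return sum(matching) / len(matching)

# The model never saw "group", yet its behaviour splits along group lines
# because the proxy variable encodes the protected attribute.
rate_zip_a = approval_rate(history, "94100")  # mostly group A: 0.75
rate_zip_b = approval_rate(history, "94200")  # mostly group B: 0.25
print(rate_zip_a, rate_zip_b)
```

Dropping the protected column changes nothing here: the disparity survives because it lives in the correlated feature, which is why Thomas argues that simply omitting race or gender from a data set does not make a model fair.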

Thomas has also written on the application of data science and machine learning in medicine. In one article, she outlines uses of machine learning in the medical field and highlights some of the ethical issues involved. The article, "Medicine's Machine Learning Problem", was published in the Boston Review with the subtitle "As Big Data tools reshape health care, biased datasets and unaccountable algorithms threaten to further disempower patients." [19]

Work on data ethics and diversity

Thomas is concerned about the lack of diversity in AI, and believes that many qualified people are not being hired. [5] She has particularly focused on the problem of poor retention of women in tech, noting that "41% of women working in tech leave within 10 years. That's over twice the attrition rate for men. And those with advanced degrees, who presumably have more options, are 176% more likely to leave." [5] Thomas believes that AI's "cool and exclusive aura" needs to be broken in order to unlock it for outsiders and make it accessible to those with non-traditional and non-elite backgrounds. [5]

References

  1. "Rachel Thomas '05 Among Top 20 Women Advancing A.I. Research". www.swarthmore.edu. 2017-05-25. Retrieved 2019-12-18.
  2. "Rachel Thomas". University of San Francisco. 2017-04-20. Retrieved 2019-12-18.
  3. "Rachel Thomas | fast.ai founder & USF assistant professor". QCon.ai San Francisco. Retrieved 2019-12-18.
  4. "Rachel Thomas, Founder of fast.ai & Assistant Professor at the University of San Francisco". OnlineEducation.com. Retrieved 2019-12-18.
  5. Stegman, Casey. "Open Source Stories: Possible Futures". Open Source Stories. Retrieved 2019-12-24.
  6. "EGG San Francisco 2019". sf.egg.dataiku.com. Archived from the original on 2019-12-19. Retrieved 2019-12-18.
  7. "USF Launches Data Ethics Center". Datanami. 2019-08-07. Retrieved 2019-12-18.
  8. Pangburn, D. J. (2019-09-21). "You've been warned: Full body deepfakes are the next step in AI-based human mimicry". Fast Company. Retrieved 2019-12-18.
  9. Snow, Jackie. "The startup diversifying the AI workforce beyond just "techies"". MIT Technology Review. Retrieved 2019-12-18.
  10. Ray, Tiernan. "Fast.ai's software could radically democratize AI". ZDNet. Retrieved 2019-12-18.
  11. "New schemes teach the masses to build AI". The Economist. ISSN 0013-0613. Retrieved 2019-12-18.
  12. "Can AI Have Biases?". Techopedia.com. 2 October 2019. Retrieved 2019-12-18.
  13. "Analyzing & Preventing Unconscious Bias in Machine Learning". InfoQ. Retrieved 2019-12-18.
  14. "BBC World Service - The Real Story, Can algorithms be trusted?". BBC. Retrieved 2019-12-18.
  15. "A tug-of-war over biased AI". Axios. 14 December 2019. Retrieved 2019-12-18.
  16. Artificial Intelligence needs all of us | Rachel Thomas PhD | TEDxSanFrancisco. Retrieved 2019-12-18.
  17. "Board of Directors". Retrieved 2019-12-18.
  18. Yao, Mariya. "Meet These Incredible Women Advancing A.I. Research". Forbes. Retrieved 2019-12-18.
  19. "Medicine's Machine Learning Problem". Boston Review.