Lorien Pratt

Lorien Pratt is an American computer scientist known for pioneering work in two disciplines: transfer learning in machine learning[1] and decision intelligence.[2] She is chief scientist and founder of Quantellia.[3] Since 1988, she has conducted research on the use of machine learning as a professor, industry analyst, and practicing data scientist.[4] Pratt received her AB degree in computer science from Dartmouth College and her master's and doctoral degrees in computer science from Rutgers University.

Learning to Learn

She is best known for her book Learning to Learn,[5] co-edited with Sebastian Thrun, which provided an overview of how machine learning systems can use what they learn on one set of tasks to improve bias and generalization on distinct, related tasks. This approach, still largely theoretical when the book was published in 1998, is also called metalearning and is now a foundational underpinning of machine learning systems such as GPT-3 and DALL-E.

Research

Transfer learning

Pratt's research includes early work in transfer learning: she developed the discriminability-based transfer (DBT) algorithm in 1993, during her tenure as a professor of computer science at the Colorado School of Mines. The resulting paper is considered one of the earliest academic works on transfer in machine learning and has been cited over 400 times as foundational research for deep neural networks.[6]
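The general idea behind transfer learning can be illustrated with a minimal sketch (this is a generic, toy logistic-regression illustration of reusing source-task weights, not Pratt's DBT algorithm, which operated on neural networks):

```python
# Toy illustration of transfer learning: weights trained on a source task
# initialize a related target task, so brief fine-tuning suffices.
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, w=None, lr=0.5, steps=200):
    """Batch gradient descent on logistic loss; w may be pre-trained weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

def accuracy(X, y, w):
    return float(np.mean(((X @ w) > 0) == y))

# Source task: plentiful data, labels given by a hidden linear direction.
direction = rng.normal(size=5)
Xs = rng.normal(size=(200, 5))
ys = (Xs @ direction > 0).astype(float)
w_source = train_logistic(Xs, ys)

# Target task: a slightly perturbed direction, and very little data.
Xt = rng.normal(size=(20, 5))
yt = (Xt @ (direction + 0.1 * rng.normal(size=5)) > 0).astype(float)

# Transfer: start from the source weights and fine-tune briefly,
# versus training from scratch with the same small budget.
w_transfer = train_logistic(Xt, yt, w=w_source.copy(), steps=10)
w_scratch = train_logistic(Xt, yt, steps=10)

# Held-out evaluation on the original direction.
Xe = rng.normal(size=(500, 5))
ye = (Xe @ direction > 0).astype(float)
print("transfer:", accuracy(Xe, ye, w_transfer),
      "scratch:", accuracy(Xe, ye, w_scratch))
```

Because the transferred model starts near a good solution for the related task, it typically needs far fewer updates than the from-scratch baseline to reach comparable accuracy.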

Decision intelligence

Since then, Pratt's research has continued to explore the relationships between machine learning and human cognition through the concept of decision intelligence, an emerging field of machine-learning-guided analytics designed to support human decision-making. Pratt introduced the concept in 2008,[7] and the term has since been adopted by a number of vendors of machine-learning-guided analytics, including Diwo, Peak AI, Sisu, and Tellius, as the technologies that support machine learning at scale have become easier to deploy, manage, and embed into software platforms. Pratt's work is cited as a core starting point for defining modern aspects of decision intelligence.[8]

Pratt's work at Quantellia since 2020 has focused on using decision intelligence to improve COVID-19-related outcomes.

References

  1. Pratt, L. (1996). "Special Issue: Reuse of Neural Networks through Transfer". Connection Science. 8 (2). Retrieved 2017-08-10.
  2. Pratt, Lorien (2019). Link: How Decision Intelligence Connects Data, Actions, and Outcomes for a Better World. Emerald Press.
  3. "Quantellia company website".
  4. "Lorien Pratt". scholar.google.com. Retrieved 2022-08-04.
  5. Thrun, Sebastian; Pratt, Lorien, eds. (1998). Learning to Learn. Boston, MA: Springer US. ISBN 978-1-4615-5529-2. OCLC 840285889.
  6. Pratt, L. (1993). "Discriminability-based transfer between neural networks". Advances in Neural Information Processing Systems 5 (NIPS). Morgan Kaufmann. pp. 204–211.
  7. Pratt, Lorien; Zangari, Mark (December 2008). Overcoming the Decision Complexity Ceiling through Design. Quantellia white paper.
  8. Bornet, Pascal. "The Value Of A Company Is The Sum Of Its Decisions". Forbes. Retrieved 2022-08-06.