| Steve Omohundro | |
|---|---|
| Born | 1959 |
| Education | Stanford University; University of California, Berkeley |
| Scientific career | |
| Fields | Artificial intelligence; physics |
| Institutions | University of Illinois at Urbana-Champaign; Possibility Research; Self-Aware Systems |
| Thesis | Geometric Perturbation Theory and Plasma Physics (1985) |
| Website | steveomohundro.com |
Stephen Malvern Omohundro (born 1959) is an American computer scientist [1] whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.
Omohundro has degrees in physics and mathematics from Stanford University (Phi Beta Kappa) [2] and a Ph.D. in physics from the University of California, Berkeley. [3]
Omohundro started the "Vision and Learning Group" at the University of Illinois, which produced four Master's and two Ph.D. theses. His work on learning algorithms included a number of efficient geometric algorithms, [4] [5] the manifold learning task and various algorithms for accomplishing it, [6] related visual learning and modeling tasks, [7] the best-first model merging approach to machine learning [8] (including the learning of hidden Markov models and stochastic context-free grammars), [9] [10] [11] and the family discovery learning algorithm, which discovers the dimension and structure of a parameterized family of stochastic models. [12]
Omohundro started Self-Aware Systems in Palo Alto, California to research the technology and social implications of self-improving artificial intelligence. He is an advisor to the Machine Intelligence Research Institute on artificial intelligence. He argues that rational systems exhibit problematic natural "drives" that will need to be countered in order to build intelligent systems safely. [2] [13] His papers, talks, and videos on AI safety have generated extensive interest. [1] [14] [15] [16] He has given many talks on self-improving artificial intelligence, cooperative technology, AI safety, and connections with biological intelligence.
At Thinking Machines Corporation, Cliff Lasser and Steve Omohundro developed Star Lisp, the first programming language for the Connection Machine. Omohundro joined the International Computer Science Institute (ICSI) in Berkeley, California, where he led the development of the open source programming language Sather. [17] [18] Sather is featured in O'Reilly's History of Programming Languages poster. [19]
Omohundro's book Geometric Perturbation Theory in Physics [2] [20] describes natural Hamiltonian symplectic structures for a wide range of physical models that arise from perturbation theory analyses.
He showed that there exist smooth partial differential equations which stably perform universal computation by simulating arbitrary cellular automata. [21] The asymptotic behavior of these PDEs is therefore logically undecidable.
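The undecidability argument can be illustrated at the discrete level: Rule 110, a one-dimensional cellular automaton, is known to be Turing-universal, so any smooth dynamics that faithfully embeds such an automaton inherits undecidable long-run behavior. A minimal simulation of the automaton itself (illustrative only; not Omohundro's PDE construction):

```python
def rule110_step(cells):
    """One synchronous update of the Rule 110 cellular automaton
    on a periodic (wrap-around) array of 0/1 cells."""
    n = len(cells)
    # Rule 110 lookup: new state for each (left, center, right) neighborhood.
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Evolve a single seed cell for a few steps.
state = [0] * 10 + [1] + [0] * 10
for _ in range(5):
    state = rule110_step(state)
```

Because no algorithm can decide the eventual behavior of an arbitrary Rule 110 configuration, no algorithm can decide the asymptotic behavior of a PDE that simulates it.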
With John David Crawford he showed that the orbits of three-dimensional period doubling systems can form an infinite number of topologically distinct torus knots and described the structure of their stable and unstable manifolds. [22]
From 1986 to 1988, he was an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign, where he co-founded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard. While at the University of Illinois, he worked with Stephen Wolfram and five others to create the symbolic mathematics program Mathematica. [2] He and Wolfram led a team of students that won an Apple Computer contest to design "The Computer of the Year 2000." Their entry, "Tablet," was a touchscreen tablet with GPS and other features that appeared when Apple introduced the iPad 22 years later. [23] [24]
Subutai Ahmad and Steve Omohundro developed biologically realistic neural models of selective attention. [25] [26] [27] [28] As a research scientist at the NEC Research Institute, Omohundro worked on machine learning and computer vision, and was a co-inventor of U.S. Patent 5,696,964, "Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution that Each Item in the Database is a Target of a Search." [29] [30] [31] [32]
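The patent's core idea, maintaining a posterior probability that each database item is the target of the search and refining it as the user responds, amounts to a Bayesian update over items. A minimal sketch with hypothetical numbers (not the patented system's actual likelihood model):

```python
def bayes_update(posterior, likelihoods):
    """Multiply each item's posterior probability of being the search
    target by the likelihood of the observed user response given that
    item is the target, then renormalize so the result sums to 1."""
    unnorm = [p * l for p, l in zip(posterior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three candidate items with a uniform prior; the observed user
# feedback is twice as likely if item 0 is the true target.
posterior = [1 / 3, 1 / 3, 1 / 3]
posterior = bayes_update(posterior, [0.8, 0.4, 0.4])
```

After the update, item 0's posterior rises to 0.5, and repeated rounds of feedback concentrate the distribution on the most likely target.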
Omohundro developed an extension to the game-theoretic pirate puzzle featured in Scientific American. [33]
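The classic puzzle itself (not Omohundro's extension, which appears in the cited column) is solved by backward induction: the most senior of n pirates proposes a split of the coins, a proposal passes with at least half the votes including the proposer's, and a rejected proposer is thrown overboard. A sketch, assuming the usual tiebreak that pirates must be bribed strictly more than their subgame payoff and that the coins suffice to buy the needed votes:

```python
def pirate_game(n_pirates, coins=100):
    """Backward-induction payoffs for the classic pirate puzzle.
    Returns the allocation the most senior pirate proposes, listed
    from most senior (the proposer) down to most junior."""
    alloc = [coins]  # base case: a lone pirate keeps all the gold
    for n in range(2, n_pirates + 1):
        prev = alloc  # payoffs if the proposer dies and n-1 pirates remain
        # Proposer needs ceil(n/2) votes in total, including their own.
        votes_needed = (n + 1) // 2 - 1
        # Buy the cheapest votes: each bribed pirate must get strictly
        # more than they would receive in the subgame without the proposer.
        order = sorted(range(n - 1), key=lambda i: prev[i])
        bribes = [0] * (n - 1)
        for i in order[:votes_needed]:
            bribes[i] = prev[i] + 1
        alloc = [coins - sum(bribes)] + bribes
    return alloc
```

For five pirates and 100 coins this reproduces the well-known answer 98, 0, 1, 0, 1: the proposer bribes only the two pirates who would otherwise get nothing.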
Omohundro has sat on the Machine Intelligence Research Institute board of advisors. [34] He has written extensively on artificial intelligence, [35] and has warned that "an autonomous weapons arms race is already taking place" because "military and economic pressures are driving the rapid development of autonomous systems". [36] [37]