Steve Omohundro

Born: 1959
Education: Stanford University; University of California, Berkeley
Fields: Artificial intelligence; physics
Institutions: University of Illinois at Urbana-Champaign; Possibility Research; Self-Aware Systems
Thesis: Geometric Perturbation Theory and Plasma Physics (1985)
Website: steveomohundro.com

Stephen Malvern Omohundro (born 1959) is an American computer scientist [1] whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.


Education

Omohundro has degrees in physics and mathematics from Stanford University (Phi Beta Kappa) [2] and a Ph.D. in physics from the University of California, Berkeley. [3]

Learning algorithms

Omohundro started the "Vision and Learning Group" at the University of Illinois, which produced four master's theses and two Ph.D. theses. His work on learning algorithms included a number of efficient geometric algorithms, [4] [5] the manifold learning task and various algorithms for accomplishing it, [6] other related visual learning and modelling tasks, [7] the best-first model merging approach to machine learning [8] (including the learning of Hidden Markov Models and stochastic context-free grammars), [9] [10] [11] and the Family Discovery Learning Algorithm, which discovers the dimension and structure of a parameterized family of stochastic models. [12]

Self-improving artificial intelligence and AI safety

Omohundro started Self-Aware Systems in Palo Alto, California to research the technology and social implications of self-improving artificial intelligence. He is an advisor to the Machine Intelligence Research Institute on artificial intelligence. He argues that rational systems exhibit problematic natural "drives" that will need to be countered in order to build intelligent systems safely. [2] [13] His papers, talks, and videos on AI safety have generated extensive interest. [1] [14] [15] [16] He has given many talks on self-improving artificial intelligence, cooperative technology, AI safety, and connections with biological intelligence.

Programming languages

At Thinking Machines Corporation, Cliff Lasser and Steve Omohundro developed Star Lisp, the first programming language for the Connection Machine. Omohundro joined the International Computer Science Institute (ICSI) in Berkeley, California, where he led the development of the open source programming language Sather. [17] [18] Sather is featured in O'Reilly's History of Programming Languages poster. [19]

Physics and dynamical systems theory

Omohundro's book Geometric Perturbation Theory in Physics [2] [20] describes natural Hamiltonian symplectic structures for a wide range of physical models that arise from perturbation theory analyses.

He showed that there exist smooth partial differential equations which stably perform universal computation by simulating arbitrary cellular automata. [21] The asymptotic behavior of these PDEs is therefore logically undecidable.
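The cellular-automaton side of this argument can be illustrated with Rule 110, an elementary automaton known to be Turing-universal. The sketch below is an illustrative aside, not Omohundro's construction: it simulates the automaton whose long-run behavior a smooth PDE of the kind he describes would have to reproduce, which is why questions about the PDE's asymptotics inherit the undecidability of the halting problem.

```python
def rule110_step(cells):
    """One synchronous update of elementary cellular automaton Rule 110,
    which is Turing-universal. Cells are 0/1 with periodic boundaries."""
    n = len(cells)
    rule = 110  # the output bit for each 3-cell neighborhood pattern 0..7
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Evolve a single seed cell. Predicting whether such a pattern ever reaches
# a given configuration is, in general, undecidable -- the property that the
# PDE construction transfers to smooth continuous dynamics.
row = [0] * 20
row[10] = 1
for _ in range(5):
    row = rule110_step(row)
```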

With John David Crawford he showed that the orbits of three-dimensional period doubling systems can form an infinite number of topologically distinct torus knots and described the structure of their stable and unstable manifolds. [22]

Mathematica and Apple tablet contest

From 1986 to 1988, he was an assistant professor of computer science at the University of Illinois at Urbana-Champaign and co-founded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard. While at the University of Illinois, he worked with Stephen Wolfram and five others to create the symbolic mathematics program Mathematica. [2] He and Wolfram led a team of students that won an Apple Computer contest to design "The Computer of the Year 2000." Their design entry, "Tablet," was a touchscreen tablet with GPS and other features that finally appeared when the Apple iPad was introduced 22 years later. [23] [24]

Other contributions

Subutai Ahmad and Steve Omohundro developed biologically realistic neural models of selective attention. [25] [26] [27] [28] As a research scientist at the NEC Research Institute, Omohundro worked on machine learning and computer vision, and was a co-inventor of U.S. Patent 5,696,964, "Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution that Each Item in the Database is a Target of a Search." [29] [30] [31] [32]

Pirate puzzle

Omohundro developed an extension to the game theoretic pirate puzzle featured in Scientific American. [33]
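In the base puzzle, five rational pirates divide 100 gold coins: the most senior pirate proposes a split, and if at least half the pirates (the proposer included) vote for it, it passes; otherwise the proposer is thrown overboard and the next most senior pirate proposes. Each pirate values survival first, then gold, then seeing rivals thrown overboard. The puzzle is solved by backward induction, and the sketch below computes the standard solution to this base puzzle (it does not reproduce Omohundro's extension):

```python
def pirate_shares(n_pirates, coins=100):
    """Backward-induction solution to the classic pirate puzzle.

    Index 0 is the most senior pirate (the current proposer). A proposal
    needs at least half the votes to pass; a pirate votes yes only if
    offered strictly more than it would get after the proposer's death.
    Returns the senior pirate's winning allocation.
    """
    shares = [coins]  # base case: a lone pirate keeps everything
    for n in range(2, n_pirates + 1):
        # 'shares' is the allocation of the (n-1)-pirate subgame.
        # Beyond the proposer's own vote, ceil(n/2) - 1 extra votes are
        # needed; the cheapest yes-votes come from pirates who would
        # otherwise receive the least, bribed with one coin more.
        needed = (n + 1) // 2 - 1
        order = sorted(range(n - 1), key=lambda i: shares[i])
        new = [0] * (n - 1)
        cost = 0
        for i in order[:needed]:
            new[i] = shares[i] + 1
            cost += new[i]
        shares = [coins - cost] + new
    return shares

print(pirate_shares(5))  # -> [98, 0, 1, 0, 1]
```

With five pirates this yields the well-known allocation 98, 0, 1, 0, 1: the proposer buys the two cheapest votes and keeps the rest.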

Outreach

Omohundro has sat on the Machine Intelligence Research Institute board of advisors. [34] He has written extensively on artificial intelligence, [35] and has warned that "an autonomous weapons arms race is already taking place" because "military and economic pressures are driving the rapid development of autonomous systems". [36] [37]


References

  1. Rathi, Akshat (9 October 2015). "Stephen Hawking: Robots aren't just taking our jobs, they're making society more unequal". Quartz. Retrieved 6 January 2018.
  2. Barrat, James (1 February 2014). "This is What Happens When You Teach Machines the Power of Natural Selection". The Daily Beast. Retrieved 7 January 2018.
  3. "Biography". Steve Omohundro. 10 August 2011. Retrieved 6 January 2018.
  4. Stephen M. Omohundro, “Geometric Learning Algorithms” Physica D, 42 (1990) 307-321
  5. Stephen M. Omohundro. Emergent Computation, edited by Stephanie Forrest, MIT Press (1991) 307-321
  6. Stephen M. Omohundro, "Fundamentals of Geometric Learning". University of Illinois at Urbana-Champaign, Department of Computer Science Technical Report UILU-ENG-88-1713 (February 1988).
  7. Chris Bregler, Stephen M. Omohundro, and Yochai Konig, “A Hybrid Approach to Bimodal Speech Recognition“, Proceedings of the 28th Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, California, November 1994.
  8. Stephen M. Omohundro, “Best-First Model Merging for Dynamic Learning and Recognition” in Moody, J. E., Hanson, S. J., and Lippmann, R. P., (eds.) Advances in Neural Information Processing Systems 4, pp. 958-965, San Mateo, CA: Morgan Kaufmann Publishers, (1992).
  9. Andreas Stolcke and Stephen M. Omohundro, “Hidden Markov Model Induction by Bayesian Model Merging“, in Advances in Neural Information Processing Systems 5, ed. Steve J. Hanson and Jack D. Cowan, J. D. and C. Lee Giles, Morgan Kaufmann Publishers, Inc., San Mateo, California, 1993, pp. 11-18.
  10. Andreas Stolcke and Stephen M. Omohundro, “Best-first Model Merging for Hidden Markov Model Induction“, ICSI Technical Report TR-94-003, January 1994.
  11. Andreas Stolcke and Stephen M. Omohundro, "Inducing Probabilistic Grammars by Bayesian Model Merging Archived 2016-03-10 at the Wayback Machine ", Proceedings of the International Colloquium on Grammatical Inference, Alicante, Spain, Lecture Notes in Artificial Intelligence 862, Springer-Verlag, Berlin, September 1994, pp. 106-118.
  12. Stephen M. Omohundro, “Family Discovery“, in Advances in Neural Information Processing Systems 8, eds. D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, MIT Press, Cambridge, Massachusetts, 1996.
  13. Marcus, Gary (24 October 2013). "Why We Should Think About the Threat of Artificial Intelligence". The New Yorker. Retrieved 6 January 2018.
  14. "Is Skynet Inevitable?". Reason.com. 31 March 2014. Retrieved 7 January 2018.
  15. Stephen M. Omohundro, "The Nature of Self-Improving Artificial Intelligence" Singularity Summit 2007, San Francisco, CA
  16. Stephen M. Omohundro, "The Basic AI Drives", in the Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, February 2008, IOS Press.
  17. Stephen M. Omohundro, "Sather Provides Nonproprietary Access to Object-Oriented Programming", Computers in Physics, Vol. 6, No. 5, September 1992, pp. 444-449.
  18. Heinz Schmidt and Stephen M. Omohundro, "CLOS, Eiffel, and Sather: A Comparison", in Object-Oriented Programming: The CLOS Perspective, ed. Andreas Paepcke, MIT Press, Cambridge, Massachusetts, 1993, pp. 181-213.
  19. O'Reilly's History of Programming Languages poster
  20. Stephen M. Omohundro, Geometric Perturbation Theory in Physics, World Scientific Publishing Co. Pte. Ltd., Singapore (1986) 560 pages. ISBN 9971-5-0136-8
  21. Stephen M. Omohundro, "Modelling Cellular Automata with Partial Differential Equations", Physica D, 10D (1984) 128-134.
  22. John David Crawford and Stephen M. Omohundro, "On the Global Structure of Period Doubling Flows", Physica D, 12D (1984), pp. 161-180.
  23. Bartlett Mel, Stephen Omohundro, Arch Robison, Steven Skiena, Kurt Thearling, Luke Young, and Stephen Wolfram, “Tablet: Personal Computer in the Year 2000?”, Communications of the ACM, 31:6 (1988) 638-646.
  24. Bartlett Mel, Stephen Omohundro, Arch Robison, Steven Skiena, Kurt Thearling, Luke Young, and Stephen Wolfram, “Academic Computing in the Year 2000?”, Academic Computing, 2:7 (1988) 7-62.
  25. Subutai Ahmad and Stephen M. Omohundro, “Equilateral Triangles: A Challenge for Connectionist Vision“, Proceedings of the 12th Annual meeting of the Cognitive Science Society, MIT, (1990).
  26. Subutai Ahmad and Stephen M. Omohundro, “A Network for Extracting the Locations of Point Clusters Using Selective Attention“, ICSI Technical Report No. TR-90-011, (1990).
  27. Subutai Ahmad and Stephen M. Omohundro, “Efficient Visual Search: A Connectionist Solution“, Proceedings of the 13th Annual meeting of the Cognitive Science Society, Chicago, (1991).
  28. Bartlett Mel and Stephen M. Omohundro, “How Receptive Field Parameters Affect Neural Learning” in Advances in Neural Information Processing Systems 3, edited by Lippmann, Moody, and Touretzky, Morgan Kaufmann Publishers, Inc. (1991) 757-766.
  29. U.S. patent 5,696,964
  30. I. J. Cox, M. L. Miller, S. M. Omohundro, and P. N. Yianilos, "Target Testing and the PicHunter Bayesian Multimedia Retrieval System", in the Proceedings of the 3rd Forum on Research and Technology Advances in Digital Libraries, DL’96, 1996, pp. 66-75.
  31. U.S. Patent 5,696,964, “Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution That Each Item in the Database is a Target of a Search“, Ingemar J. Cox, Matthew L. Miller, Stephen M. Omohundro, and P. N. Yianilos, granted December 9, 1997, assigned to NEC Research Institute, Inc.
  32. T. P. Minka, M. L. Miller, I. J. Cox, P. N. Yianilos, S. M. Omohundro, “Toward Optimal Search of Image Databases“, in Proceedings of the International Conference on Computer Vision and Pattern Recognition, 1998.
  33. Ian Stewart, "A Puzzle for Pirates", Mathematical Recreations, Scientific American, May 1999, pp. 98-99
  34. Mark O'Connell (2017). To Be a Machine. Knopf Doubleday Publishing. ISBN 9780385540421.
  35. Neyfakh, Leon (1 March 2013). "Should we put robots on trial?". The Boston Globe . Retrieved 7 January 2018.
  36. Markoff, John (11 November 2014). "Fearing Bombs That Can Pick Whom to Kill". The New York Times . Retrieved 7 January 2018.
  37. "Inside the Pentagon's Effort to Build a Killer Robot". Time Magazine . 27 October 2015. Retrieved 7 January 2018.