Steve Omohundro

Born: 1959
Education: Stanford University; University of California, Berkeley
Fields: Artificial intelligence; physics
Institutions: University of Illinois at Urbana-Champaign; Possibility Research; Self-Aware Systems
Thesis: Geometric Perturbation Theory and Plasma Physics (1985)

Stephen Malvern Omohundro (born 1959) is an American computer scientist [1] whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.


Education

Omohundro has degrees in physics and mathematics from Stanford University (Phi Beta Kappa) [2] and a Ph.D. in physics from the University of California, Berkeley. [3]

Learning algorithms

Omohundro started the "Vision and Learning Group" at the University of Illinois, which produced four master's and two Ph.D. theses. His work on learning algorithms included a number of efficient geometric algorithms, [4] [5] the manifold learning task and various algorithms for accomplishing it, [6] other related visual learning and modeling tasks, [7] the best-first model merging approach to machine learning [8] (including the learning of Hidden Markov Models and Stochastic Context-free Grammars), [9] [10] [11] and the Family Discovery Learning Algorithm, which discovers the dimension and structure of a parameterized family of stochastic models. [12]

Self-improving artificial intelligence and AI safety

Omohundro started Self-Aware Systems in Palo Alto, California to research the technology and social implications of self-improving artificial intelligence. He is an advisor to the Machine Intelligence Research Institute on artificial intelligence. He argues that rational systems exhibit problematic natural "drives" that will need to be countered in order to build intelligent systems safely. [2] [13] His papers, talks, and videos on AI safety have generated extensive interest. [1] [14] [15] [16] He has given many talks on self-improving artificial intelligence, cooperative technology, AI safety, and connections with biological intelligence.

Programming languages

At Thinking Machines Corporation, Cliff Lasser and Steve Omohundro developed Star Lisp, the first programming language for the Connection Machine. Omohundro joined the International Computer Science Institute (ICSI) in Berkeley, California, where he led the development of the open source programming language Sather. [17] [18] Sather is featured in O'Reilly's History of Programming Languages poster. [19]

Physics and dynamical systems theory

Omohundro's book Geometric Perturbation Theory in Physics [2] [20] describes natural Hamiltonian symplectic structures for a wide range of physical models that arise from perturbation theory analyses.

He showed that there exist smooth partial differential equations which stably perform universal computation by simulating arbitrary cellular automata. [21] The asymptotic behavior of these PDEs is therefore logically undecidable.

With John David Crawford he showed that the orbits of three-dimensional period doubling systems can form an infinite number of topologically distinct torus knots and described the structure of their stable and unstable manifolds. [22]

Mathematica and Apple tablet contest

From 1986 to 1988, he was an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign and cofounded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard. While at the University of Illinois, he worked with Stephen Wolfram and five others to create the symbolic mathematics program Mathematica. [2] He and Wolfram led a team of students that won an Apple Computer contest to design "The Computer of the Year 2000." Their entry, "Tablet," was a touchscreen tablet with GPS and other features that appeared 22 years later when Apple introduced the iPad. [23] [24]

Other contributions

Subutai Ahmad and Steve Omohundro developed biologically realistic neural models of selective attention. [25] [26] [27] [28] As a research scientist at the NEC Research Institute, Omohundro worked on machine learning and computer vision, and was a co-inventor of U.S. Patent 5,696,964, "Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution that Each Item in the Database is a Target of a Search." [29] [30] [31] [32]

Pirate puzzle

Omohundro developed an extension to the game theoretic pirate puzzle featured in Scientific American. [33]
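The standard puzzle that Omohundro extended is solved by backward induction: starting from the case of a single pirate, each more senior proposer buys just enough votes from the pirates who would fare worst if the proposal failed. A minimal sketch of the classic version (not Omohundro's extension), assuming the common variant in which a tied vote passes and pirates prefer survival, then gold, then others' deaths:

```python
def pirate_split(n_pirates, coins=100):
    """Backward-induction solution to the classic pirate puzzle.

    Pirates are indexed 0 (most junior) .. n-1 (most senior).
    The senior pirate proposes a split; a proposal passes if at
    least half the pirates (proposer included) vote for it.
    Returns the proposed allocation, junior-first.
    """
    # Base case: a lone pirate keeps everything.
    alloc = [coins]
    for n in range(2, n_pirates + 1):
        prev = alloc  # what each junior gets if this proposal fails
        # The proposer needs ceil(n/2) votes and votes for itself,
        # so it must buy ceil(n/2) - 1 additional votes.
        needed = (n + 1) // 2 - 1
        # Cheapest votes come from juniors who would get the
        # least if the proposal failed.
        order = sorted(range(n - 1), key=lambda i: prev[i])
        new = [0] * (n - 1)
        spent = 0
        for i in order[:needed]:
            # One coin more than the fallback secures the vote.
            new[i] = prev[i] + 1
            spent += new[i]
        alloc = new + [coins - spent]  # proposer keeps the rest
    return alloc

# Five pirates, 100 coins: the senior pirate keeps 98 and gives
# one coin each to the pirates two and four ranks below.
print(pirate_split(5))  # → [1, 0, 1, 0, 98]
```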

Outreach

Omohundro has sat on the Machine Intelligence Research Institute board of advisors. [34] He has written extensively on artificial intelligence, [35] and has warned that "an autonomous weapons arms race is already taking place" because "military and economic pressures are driving the rapid development of autonomous systems". [36] [37]


References

  1. Rathi, Akshat (9 October 2015). "Stephen Hawking: Robots aren't just taking our jobs, they're making society more unequal". Quartz. Retrieved 6 January 2018.
  2. Barrat, James (1 February 2014). "This is What Happens When You Teach Machines the Power of Natural Selection". The Daily Beast. Retrieved 7 January 2018.
  3. "Biography". Steve Omohundro. 10 August 2011. Retrieved 6 January 2018.
  4. Stephen M. Omohundro, “Geometric Learning Algorithms” Physica D, 42 (1990) 307-321
  5. Stephen M. Omohundro. Emergent Computation, edited by Stephanie Forrest, MIT Press (1991) 307-321
  6. Stephen M. Omohundro, "Fundamentals of Geometric Learning". University of Illinois at Urbana-Champaign, Department of Computer Science Technical Report UILU-ENG-88-1713 (February 1988).
  7. Chris Bregler, Stephen M. Omohundro, and Yochai Konig, “A Hybrid Approach to Bimodal Speech Recognition“, Proceedings of the 28th Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, California, November 1994.
  8. Stephen M. Omohundro, “Best-First Model Merging for Dynamic Learning and Recognition” in Moody, J. E., Hanson, S. J., and Lippmann, R. P., (eds.) Advances in Neural Information Processing Systems 4, pp. 958-965, San Mateo, CA: Morgan Kaufmann Publishers, (1992).
  9. Andreas Stolcke and Stephen M. Omohundro, “Hidden Markov Model Induction by Bayesian Model Merging“, in Advances in Neural Information Processing Systems 5, ed. Steve J. Hanson and Jack D. Cowan, J. D. and C. Lee Giles, Morgan Kaufmann Publishers, Inc., San Mateo, California, 1993, pp. 11-18.
  10. Andreas Stolcke and Stephen M. Omohundro, “Best-first Model Merging for Hidden Markov Model Induction“, ICSI Technical Report TR-94-003, January 1994.
  11. Andreas Stolcke and Stephen M. Omohundro, "Inducing Probabilistic Grammars by Bayesian Model Merging Archived 2016-03-10 at the Wayback Machine ", Proceedings of the International Colloquium on Grammatical Inference, Alicante, Spain, Lecture Notes in Artificial Intelligence 862, Springer-Verlag, Berlin, September 1994, pp. 106-118.
  12. Stephen M. Omohundro, “Family Discovery“, in Advances in Neural Information Processing Systems 8, eds. D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, MIT Press, Cambridge, Massachusetts, 1996.
  13. Marcus, Gary (24 October 2013). "Why We Should Think About the Threat of Artificial Intelligence". The New Yorker. Retrieved 6 January 2018.
  14. "Is Skynet Inevitable?". Reason.com. 31 March 2014. Retrieved 7 January 2018.
  15. Stephen M. Omohundro, "The Nature of Self-Improving Artificial Intelligence" Singularity Summit 2007, San Francisco, CA
  16. Stephen M. Omohundro, "The Basic AI Drives", in the Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, February 2008, IOS Press.
  17. Stephen M. Omohundro, "Sather Provides Nonproprietary Access to Object-Oriented Programming", Computers in Physics, Vol. 6, No. 5, September 1992, pp. 444-449.
  18. Heinz Schmidt and Stephen M. Omohundro, "CLOS, Eiffel, and Sather: A Comparison", in Object-Oriented Programming: The CLOS Perspective, ed. Andreas Paepcke, MIT Press, Cambridge, Massachusetts, 1993, pp. 181-213.
  19. O'Reilly's History of Programming Languages poster
  20. Stephen M. Omohundro, Geometric Perturbation Theory in Physics, World Scientific Publishing Co. Pte. Ltd., Singapore (1986) 560 pages. ISBN   9971-5-0136-8
  21. Stephen M. Omohundro, "Modelling Cellular Automata with Partial Differential Equations", Physica D, 10D (1984) 128-134.
  22. John David Crawford and Stephen M. Omohundro, "On the Global Structure of Period Doubling Flows", Physica D, 12D (1984), pp. 161-180.
  23. Bartlett Mel, Stephen Omohundro, Arch Robison, Steven Skiena, Kurt Thearling, Luke Young, and Stephen Wolfram, “Tablet: Personal Computer in the Year 2000?”, Communications of the ACM, 31:6 (1988) 638-646.
  24. Bartlett Mel, Stephen Omohundro, Arch Robison, Steven Skiena, Kurt Thearling, Luke Young, and Stephen Wolfram, “Academic Computing in the Year 2000?”, Academic Computing, 2:7 (1988) 7-62.
  25. Subutai Ahmad and Stephen M. Omohundro, “Equilateral Triangles: A Challenge for Connectionist Vision“, Proceedings of the 12th Annual meeting of the Cognitive Science Society, MIT, (1990).
  26. Subutai Ahmad and Stephen M. Omohundro, “A Network for Extracting the Locations of Point Clusters Using Selective Attention“, ICSI Technical Report No. TR-90-011, (1990).
  27. Subutai Ahmad and Stephen M. Omohundro, “Efficient Visual Search: A Connectionist Solution“, Proceedings of the 13th Annual meeting of the Cognitive Science Society, Chicago, (1991).
  28. Bartlett Mel and Stephen M. Omohundro, “How Receptive Field Parameters Affect Neural Learning” in Advances in Neural Information Processing Systems 3, edited by Lippmann, Moody, and Touretzky, Morgan Kaufmann Publishers, Inc. (1991) 757-766.
  29. U.S. Patent 5,696,964
  30. I. J. Cox, M. L. Miller, S. M. Omohundro, and P. N. Yianilos, "Target Testing and the PicHunter Bayesian Multimedia Retrieval System", in the Proceedings of the 3rd Forum on Research and Technology Advances in Digital Libraries, DL’96, 1996, pp. 66-75.
  31. U.S. Patent 5,696,964, “Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution That Each Item in the Database is a Target of a Search“, Ingemar J. Cox, Matthew L. Miller, Stephen M. Omohundro, and P. N. Yianilos, granted December 9, 1997, assigned to NEC Research Institute, Inc.
  32. T. P. Minka, M. L. Miller, I. J. Cox, P. N. Yianilos, S. M. Omohundro, “Toward Optimal Search of Image Databases“, in Proceedings of the International Conference on Computer Vision and Pattern Recognition, 1998.
  33. Ian Stewart, "A Puzzle for Pirates", Mathematical Recreations, Scientific American, May 1999, pp. 98-99
  34. Mark O'Connell (2017). To Be a Machine. Knopf Doubleday Publishing. ISBN   9780385540421.
  35. Neyfakh, Leon (1 March 2013). "Should we put robots on trial?". The Boston Globe . Retrieved 7 January 2018.
  36. Markoff, John (11 November 2014). "Fearing Bombs That Can Pick Whom to Kill". The New York Times. Retrieved 7 January 2018.
  37. "Inside the Pentagon's Effort to Build a Killer Robot". Time Magazine. 27 October 2015. Retrieved 7 January 2018.