Hava Siegelmann

Born: August 23, 1964 (age 59)
Alma mater: Rutgers University
Known for: Hypercomputation
Awards: Meritorious Public Service Medal
Scientific career
Fields: computer science, neuroscience, systems biology, biomedical engineering
Institutions: University of Massachusetts Amherst
Thesis: Foundations of Recurrent Neural Networks (1993)
Doctoral advisor: Eduardo Daniel Sontag

Hava Siegelmann is an American computer scientist and Provost Professor at the University of Massachusetts Amherst.[1]

Biography

Siegelmann earned a B.A. in computer science at the Technion (1988), an M.Sc. in computer science at the Hebrew University (1992), and a Ph.D. in computer science at Rutgers University (1993) under Eduardo Sontag. Her dissertation, Foundations of Recurrent Neural Networks, was on the topic of hypercomputation.[2]

Siegelmann was a program manager of several DARPA AI programs, including Lifelong Learning Machines,[3] Guaranteeing AI Robustness Against Deception,[4] and Cooperative Secure Learning.[5] DARPA and the DoD awarded her the Meritorious Public Service Medal for her research and leadership.[6]

Related Research Articles

<span class="mw-page-title-main">Artificial intelligence</span> Intelligence of machines or software

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.

Hypercomputation or super-Turing computation is a set of models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.
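
The boundary at issue can be made concrete: deciding whether an arbitrary program halts within a fixed number of steps is ordinary Turing computation, while deciding whether it halts at all is not. The following is a minimal, illustrative Python sketch (the names halts_within, halts_quickly, and runs_forever are hypothetical; programs are modeled as generators, one yield per step) showing the bounded question a conventional machine can answer, which a halting-oracle hypercomputer would extend to the unbounded case.

    # Hedged sketch: the bounded halting question is Turing-computable;
    # a hypercomputer with a halting oracle could answer the unbounded
    # question, which no Turing machine can.

    def halts_quickly():
        yield              # one step, then finish

    def runs_forever():
        while True:
            yield          # never finishes

    def halts_within(program, n):
        """Does `program` halt in at most n simulated steps?"""
        it = program()
        for _ in range(n):
            try:
                next(it)
            except StopIteration:
                return True
        return False       # inconclusive: "not yet" is not "never"

    print(halts_within(halts_quickly, 10))  # True
    print(halts_within(runs_forever, 10))   # False, but only "not yet"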

Vladimir Naumovich Vapnik is a computer scientist, researcher, and academic. He is one of the main developers of the Vapnik–Chervonenkis theory of statistical learning and the co-inventor of the support-vector machine method and support-vector clustering algorithms.

<span class="mw-page-title-main">Real computation</span> Concept in computability theory

In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."
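
The Mandelbrot claim has a concrete reading: a point c lies outside the set exactly when the orbit of z under z -> z**2 + c escapes the radius-2 disk, so an escape test halts precisely on points of the complement. Below is a hedged sketch (the function name in_complement is hypothetical; the theory assumes exact real arithmetic, which Python floats only approximate).

    # Semi-decision procedure sketch for the complement of the Mandelbrot
    # set: the loop halts with True iff the orbit escapes. With no bound
    # on iterations it may run forever on members of the set, mirroring
    # partial decidability.

    def in_complement(c, max_iter=1000):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:      # orbit provably diverges from here
                return True
        return None             # inconclusive within the iteration budget

    print(in_complement(1 + 1j))  # True: the orbit escapes quickly
    print(in_complement(0j))      # None: 0 belongs to the Mandelbrot set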

<span class="mw-page-title-main">Jürgen Schmidhuber</span> German computer scientist

Jürgen Schmidhuber is a German computer scientist noted for his work in the field of artificial intelligence, specifically artificial neural networks. He is a scientific director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland. He is also director of the Artificial Intelligence Initiative and professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) division at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.

<span class="mw-page-title-main">Geoffrey Hinton</span> British-Canadian computer scientist and psychologist (born 1947)

Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time between Google and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.

Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field that aims to study the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists of starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, and then formalizing and implementing them in robots, sometimes exploring extensions or variants of them. Experimenting with those models in robots allows researchers to confront them with reality; as a consequence, developmental robotics also provides feedback and novel hypotheses on theories of human and animal development.

<span class="mw-page-title-main">Neural network</span> Structure in biology and artificial intelligence

A neural network can refer to either a neural circuit of biological neurons, or a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed, an operation referred to as a linear combination. Finally, an activation function controls the amplitude of the output; for example, an acceptable range of output is usually between 0 and 1, or between −1 and 1.
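
The paragraph above maps directly onto a few lines of code. This is a minimal, illustrative sketch (the function name neuron is hypothetical, and a logistic sigmoid is used as one common choice of activation): weights scale the inputs, the weighted inputs are summed into a linear combination, and the activation bounds the output to (0, 1).

    import math

    def neuron(inputs, weights, bias=0.0):
        # Linear combination: each input scaled by its (excitatory or
        # inhibitory) weight, then summed.
        linear = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Activation function controlling the output amplitude.
        return 1.0 / (1.0 + math.exp(-linear))   # sigmoid, range (0, 1)

    # Two excitatory weights and one inhibitory weight:
    print(neuron([1.0, 0.5, 1.0], [0.8, 0.3, -0.6]))  # about 0.59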

<span class="mw-page-title-main">History of artificial intelligence</span>

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

<span class="mw-page-title-main">AI winter</span> Period of reduced funding and interest in AI research

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.

<span class="mw-page-title-main">Eduardo D. Sontag</span> Argentine American mathematician

Eduardo Daniel Sontag is an Argentine-American mathematician and distinguished university professor at Northeastern University who works in the fields of control theory, dynamical systems, systems molecular biology, cancer and immunology, theoretical computer science, neural networks, and computational biology.

<span class="mw-page-title-main">Richard S. Sutton</span> Canadian computer scientist

Richard S. Sutton is a Canadian computer scientist. He is a distinguished research scientist at DeepMind and a professor of computing science at the University of Alberta. Sutton is considered one of the founders of modern computational reinforcement learning, having made several significant contributions to the field, including temporal difference learning and policy gradient methods.

In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines. The term was introduced by Mark Burgin, whose book "Super-recursive algorithms" develops their theory and presents several mathematical models. Turing machines and other mathematical models of conventional algorithms allow researchers to find properties of recursive algorithms and their computations. In a similar way, mathematical models of super-recursive algorithms, such as inductive Turing machines, allow researchers to find properties of super-recursive algorithms and their computations.
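
One intuition behind such models is computation in the limit: the machine never halts, but emits a stream of guesses that eventually stabilizes on the answer. The sketch below is illustrative only (the names guesses and short_program are hypothetical; programs are again modeled as generators) and shows how the halting problem, undecidable for Turing machines, becomes decidable in this limiting sense.

    # Hedged sketch of inductive-style "computation in the limit": the
    # output is the value the guess stream stabilizes on, never announced
    # by a halt. Guess False; flip to True if the program is seen to finish.

    def guesses(program, rounds):
        it = program()
        halted = False
        for _ in range(rounds):
            if not halted:
                try:
                    next(it)            # simulate one more step
                except StopIteration:
                    halted = True       # the guess flips at most once
            yield halted

    def short_program():
        for _ in range(3):
            yield

    print(list(guesses(short_program, 8)))
    # [False, False, False, True, True, True, True, True]: stabilizes on True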

<span class="mw-page-title-main">Yann LeCun</span> French computer scientist (born 1960)

Yann André LeCun is a Turing Award-winning French computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta.

Robert M. French is a research director at the French National Centre for Scientific Research. He is currently at the University of Burgundy in Dijon. He holds a Ph.D. from the University of Michigan, where he worked with Douglas Hofstadter on the Tabletop computational cognitive model. He specializes in cognitive science and has made an extensive study of the process of analogy-making.

A cognitive computer is a computer that hardwires artificial intelligence and machine-learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms are neuromorphic chip and cognitive chip.

<span class="mw-page-title-main">Yoshua Bengio</span> Canadian computer scientist

Yoshua Bengio is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning. He is a professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

<span class="mw-page-title-main">Glossary of artificial intelligence</span> List of definitions of terms and concepts commonly used in the study of artificial intelligence

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

This page is a timeline of machine learning. Major discoveries, achievements, milestones and other major events in machine learning are included.

<span class="mw-page-title-main">Differentiable neural computer</span> Artificial neural network architecture

In artificial intelligence, a differentiable neural computer (DNC) is a memory-augmented neural network (MANN) architecture, typically recurrent in its implementation. The model was published in 2016 by Alex Graves et al. of DeepMind.
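
The core mechanism is differentiable, content-based memory access. The sketch below is a simplified illustration of that idea, not DeepMind's implementation (the function name content_read and the strength parameter are hypothetical): a read key is compared to every memory row by cosine similarity, a softmax converts the similarities into attention weights, and the read-out is the weighted sum of rows, so the whole access is trainable by gradient descent.

    import numpy as np

    def content_read(memory, key, strength=10.0):
        # Cosine similarity between the read key and each memory row.
        norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key)
        similarity = memory @ key / (norms + 1e-8)
        # Softmax attention; `strength` sharpens the addressing.
        weights = np.exp(strength * similarity)
        weights /= weights.sum()
        return weights @ memory          # differentiable weighted read

    memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
    print(content_read(memory, np.array([1.0, 0.1])))  # close to row 0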

References

  1. "Hava T. Siegelmann". Manning College of Information & Computer Sciences. University of Massachusetts Amherst. 20 February 2008. Retrieved 2023-08-05.
  2. Siegelmann, Hava (1993). Foundations of Recurrent Neural Networks (PhD thesis). Rutgers University.
  3. "Lifelong Learning Machines (L2M) (Archived)". www.darpa.mil. Retrieved 2023-09-23.
  4. "Guaranteeing AI Robustness Against Deception (GARD)". www.darpa.mil. Retrieved 2023-09-23.
  5. "Cooperative Secure Learning (CSL) (Archived)". www.darpa.mil. Retrieved 2023-09-23.
  6. "DARPA Recognizes UMass Professor Hava Siegelmann for Major Advances in AI" (Press release).