Albert Uttley

Albert Maurel Uttley (14 August 1906, London – 13 September 1985, Bexhill) [1] was an English scientist who worked in computing, cybernetics, neurophysiology and psychology. He was a member of the Ratio Club and suggested its name. [2]

He designed conditional-probability neural networks for pattern recognition for the British military. [3] He showed that neural networks with Hebbian learning rules could learn to classify binary sequences. [4]
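
A rough illustration of the second result: the sketch below trains a single linear unit with a plain Hebbian rule to separate two classes of binary sequences. It is a minimal hypothetical example in NumPy, not a reconstruction of Uttley's conditional-probability design; the patterns, learning rate and ±1 coding are all illustrative choices.

```python
import numpy as np

def hebbian_train(patterns, labels, lr=0.1, epochs=10):
    """Learn weights w so that sign(w . x) matches each label."""
    w = np.zeros(patterns.shape[1])
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            w += lr * y * x  # Hebb's rule: co-activity strengthens the weight
    return w

# Two classes of binary sequences in +1/-1 coding (made-up data).
X = np.array([[1, 1, -1, -1],
              [1, 1, 1, -1],
              [-1, -1, 1, 1],
              [-1, 1, 1, 1]])
y = np.array([1, 1, -1, -1])

w = hebbian_train(X, y)
print(np.sign(X @ w))  # [ 1.  1. -1. -1.] -- matches y
```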

Uttley was the son of George and Ethel Uttley. He married Gwendoline Lucy Richens. [1]

Related Research Articles

Artificial neural network – Computational model used in machine learning, based on connected, hierarchical functions

Artificial neural networks are machine learning models built on principles of neuronal organization, discovered by connectionism, in the biological neural networks that constitute animal brains.

Warren Sturgis McCulloch was an American neurophysiologist and cybernetician, known for his foundational work on brain theories and his contribution to the cybernetics movement. Along with Walter Pitts, McCulloch created computational models based on mathematical algorithms called threshold logic, which split the inquiry into two distinct approaches: one focused on biological processes in the brain, the other on the application of neural networks to artificial intelligence.
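
Threshold logic is easy to state concretely: a unit fires exactly when the weighted sum of its binary inputs reaches a threshold. The toy unit below realizes AND and OR gates in this style; it is a sketch of the McCulloch–Pitts model with illustrative weights and thresholds, not a reconstruction of any historical circuit.

```python
def tlu(inputs, weights, threshold):
    """McCulloch-Pitts threshold logic unit: fire iff the weighted
    sum of binary inputs meets the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With unit weights, threshold 2 gives AND and threshold 1 gives OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", tlu((a, b), (1, 1), 2), "OR:", tlu((a, b), (1, 1), 1))
```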

Fuzzy logic is a form of many-valued logic in which the truth value of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. By contrast, in Boolean logic, the truth values of variables may only be the integer values 0 or 1.
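
Under the common Zadeh operators, for instance, fuzzy AND is the minimum of the two truth values, OR is the maximum, and NOT is the complement; the snippet below evaluates a few partial truths this way (the membership values are invented for illustration).

```python
def f_and(a, b): return min(a, b)   # Zadeh AND
def f_or(a, b):  return max(a, b)   # Zadeh OR
def f_not(a):    return 1.0 - a     # complement

warm, humid = 0.7, 0.4              # partial truths in [0, 1]
print(f_and(warm, humid))           # 0.4: a conjunction is only as true as its weaker part
print(f_or(warm, humid))            # 0.7
print(f_not(warm))                  # 0.3 (up to floating-point rounding)
# Boolean logic is the special case where every value is exactly 0 or 1.
```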

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
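
The disease and symptom example can be made concrete with Bayes' rule. The two-node sketch below (Disease → Symptom) uses made-up probabilities purely for illustration.

```python
# P(D), P(S | D) and P(S | not D) are invented numbers for illustration.
p_disease = 0.01
p_symptom_given_d = 0.90
p_symptom_given_not_d = 0.05

# Marginal probability of the symptom, summing over both causes.
p_symptom = (p_symptom_given_d * p_disease
             + p_symptom_given_not_d * (1 - p_disease))

# Bayes' rule: probability the disease is present given the symptom.
p_d_given_s = p_symptom_given_d * p_disease / p_symptom
print(f"P(disease | symptom) = {p_d_given_s:.3f}")  # ~0.154
```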

Machine learning – Study of algorithms that improve automatically through experience

Machine learning (ML) is an umbrella term for solving problems for which developing explicit algorithms by human programmers would be cost-prohibitive; instead, machines are helped to 'discover' their 'own' algorithms, without being explicitly told what to do by any human-developed algorithm. When the space of potential answers is vast, the correct ones initially need to be labeled as valid by human labelers, so human supervision is required.

Ray Solomonoff was the inventor of algorithmic probability and its General Theory of Inductive Inference, and a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.

The Ratio Club was a small, informal British dining club, active from 1949 to 1958, of young psychiatrists, psychologists, physiologists, mathematicians and engineers who met to discuss issues in cybernetics.

Recurrent neural network – Computational model used in machine learning

A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes can create a cycle, allowing output from some nodes to affect subsequent input to the same nodes. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.
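
The cycle in a recurrent network is visible in a few lines: the hidden state at each step depends on both the current input and the previous state. The NumPy sketch below runs one such state update over a short sequence; the shapes, random weights and tanh nonlinearity are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(3, 4))  # input -> hidden weights
W_hh = rng.normal(scale=0.1, size=(4, 4))  # hidden -> hidden weights (the cycle)
h = np.zeros(4)                            # internal state (memory)

for x in rng.normal(size=(5, 3)):          # a length-5 sequence of 3-d inputs
    h = np.tanh(x @ W_xh + h @ W_hh)       # new state depends on all inputs so far
print(h)
```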

Neural network – Structure in biology and artificial intelligence

A neural network can refer either to a neural circuit of biological neurons or to a network of artificial neurons or nodes, in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model the connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while a negative weight reflects an inhibitory connection. All inputs are modified by a weight and summed; this operation is a linear combination. Finally, an activation function controls the amplitude of the output: an acceptable range of output is usually between 0 and 1, or between −1 and 1.
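
That description maps directly onto code: a weighted sum, then an activation function. The single neuron below uses a logistic activation to keep the output in (0, 1); the weights and inputs are invented for illustration.

```python
import math

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # linear combination
    return 1.0 / (1.0 + math.exp(-z))                       # logistic activation, output in (0, 1)

# A positive weight excites, a negative weight inhibits.
print(neuron([1.0, 0.5], [2.0, -1.0], bias=0.1))
```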

Quantum neural network – Quantum mechanics in neural networks

Quantum neural networks are computational neural network models based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. Typical research in quantum neural networks, however, combines classical artificial neural network models with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await full implementation in physical experiments.

Gordon Pask – British cybernetician and psychologist (1928–1996)

Andrew Gordon Speedie Pask was a British cybernetician, inventor and polymath who, during his lifetime, made contributions to cybernetics, educational psychology, educational technology, epistemology, chemical computing, architecture, and the performing arts. He earned three doctorates. He was a prolific writer, with more than two hundred and fifty publications, including journal articles, books, periodicals, patents, and technical reports. He also worked as an academic and researcher for a variety of educational settings, research institutes, and private stakeholders, including the University of Illinois, Concordia University, the Open University, Brunel University and the Architectural Association School of Architecture. He is known for the development of conversation theory.

Dr. Lawrence Jerome Fogel was a pioneer in evolutionary computation and human factors analysis. He is known as the inventor of active noise cancellation and the father of evolutionary programming. His scientific career spanned nearly six decades and included electrical engineering, aerospace engineering, communication theory, human factors research, information processing, cybernetics, biotechnology, artificial intelligence, and computer science.

Cybernetics – Transdisciplinary field concerned with regulatory and purposive systems

Cybernetics is a wide-ranging field concerned with circular causal processes such as feedback. Norbert Wiener named the field after an example of circular causal feedback: steering a ship, where the helmsman adjusts the steering in response to its observed effect, so that a steady course can be maintained amid disturbances such as cross-winds or the tide.
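
The helmsman loop can be caricatured in a few lines of code: act, observe the effect, correct. The proportional controller below is a generic sketch of such a feedback loop, not anything specific to Wiener; the gain, target and disturbance values are invented for illustration.

```python
heading, target, gain = 0.0, 90.0, 0.2
for _ in range(20):
    disturbance = 0.5                      # steady cross-wind pushing the bow
    error = target - heading               # observed effect of past steering
    heading += gain * error + disturbance  # correction fed back into the action
print(round(heading, 1))  # ~91.4: held near the course despite the disturbance
```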

Deep learning – Branch of machine learning

Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. The methods used can be supervised, semi-supervised or unsupervised.
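
In code, "multiple layers" simply means composing several learned transformations, each one re-representing the previous layer's output. The forward pass below uses arbitrary random weights and a ReLU nonlinearity only to show the stacking; nothing here is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three layers: 8 -> 16 -> 16 -> 2 (illustrative sizes, untrained weights).
layers = [rng.normal(scale=0.5, size=shape)
          for shape in [(8, 16), (16, 16), (16, 2)]]

x = rng.normal(size=8)
for W in layers:
    x = np.maximum(0.0, x @ W)  # linear map, then ReLU, at each layer
print(x)                        # the deepest representation of the input
```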

Restricted Boltzmann machine – Class of artificial neural network

A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
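
The "restricted" structure means connections run only between a visible layer and a hidden layer, never within a layer. The sketch below performs one contrastive-divergence (CD-1) update, a common way to train RBMs; the sizes and data are made up, and bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # the only connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)  # Bernoulli sample

v0 = np.array([1, 0, 1, 1, 0, 0], dtype=float)  # one made-up training vector
ph0 = sigmoid(v0 @ W)                 # P(h = 1 | v0)
h0 = sample(ph0)
v1 = sample(sigmoid(h0 @ W.T))        # reconstructed visible vector
ph1 = sigmoid(v1 @ W)                 # P(h = 1 | v1)
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # CD-1 weight update
```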

Glossary of artificial intelligence – List of definitions of terms and concepts commonly used in the study of artificial intelligence

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Outline of machine learning – Overview of and topical guide to machine learning

The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.

The First International Congress on Cybernetics was held in Namur, Belgium, 26–29 June 1956. It led to the formation of the International Association for Cybernetics which was incorporated in Belgium on 6 January 1957.

References

  1. 1 2 "Albert Maurel Uttley". geni_family_tree. Geni.com. Retrieved 1 July 2019.
  2. Husbands, Phil; Holland, Owen (2008). Husbands, Phil; Holland, Owen; Wheeler, M (eds.). "The Ratio Club: A Hub of British Cybernetics". The Mechanical Mind in History. MIT Press: 91–148. doi:10.7551/mitpress/9780262083775.003.0006. ISBN   9780262083775.
  3. Kline, Ronald (April 2011). "Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence". IEEE Annals of the History of Computing. 33 (4): 5–16. doi:10.1109/MAHC.2010.44. ISSN   1934-1547.
  4. Cowan, Jack D.; Sharp, David H. (1988). "Neural Nets and Artificial Intelligence". Daedalus. 117 (1): 85–121. ISSN   0011-5266.