Geoffrey Hinton

Hinton in 2013

Born: Geoffrey Everest Hinton, 6 December 1947 [1]
Alma mater: University of Cambridge; University of Edinburgh
Known for: Backpropagation, Boltzmann machines, deep learning, capsule neural networks
Institutions: University of Toronto; Google; Carnegie Mellon University; University College London; University of California, San Diego
Thesis: Relaxation and its role in vision (1977)
Doctoral advisor: Christopher Longuet-Higgins [3] [4] [5]
Website: www.cs.toronto.edu/~hinton/

Geoffrey Everest Hinton CC FRS FRSC [11] (born 6 December 1947) is an English Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. Since 2013 he has divided his time between Google (Google Brain) and the University of Toronto. In 2017, he co-founded and became the Chief Scientific Advisor of the Vector Institute in Toronto. [12] [13]

With David Rumelhart and Ronald J. Williams, Hinton co-authored a highly cited paper published in 1986 that popularized the backpropagation algorithm for training multi-layer neural networks, [14] although they were not the first to propose the approach. [15] Hinton is viewed as a leading figure in the deep learning community and is referred to by some as the "Godfather of Deep Learning". [16] [17] [18] [19] [20] The dramatic image-recognition milestone achieved by AlexNet, designed by his student Alex Krizhevsky [21] for the 2012 ImageNet challenge, [22] helped to revolutionize the field of computer vision. [23] Hinton was awarded the 2018 Turing Award alongside Yoshua Bengio and Yann LeCun for their work on deep learning. [24]

Hinton, together with Yoshua Bengio and Yann LeCun, is referred to by some as one of the "Godfathers of AI" and "Godfathers of Deep Learning". [25]

Education

Hinton was educated at King's College, Cambridge, graduating in 1970 with a Bachelor of Arts in experimental psychology. [1] He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins. [3] [26]

Career and research

After his PhD, he worked at the University of Sussex and, after difficulty finding funding in Britain, [27] at the University of California, San Diego, and Carnegie Mellon University. [1] He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London [1] and is currently [28] a professor in the computer science department at the University of Toronto. He holds a Canada Research Chair in Machine Learning and is an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. Hinton taught a free online course on neural networks on the education platform Coursera in 2012. [29] He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, planning to "divide his time between his university research and his work at Google". [30]

Hinton's research investigates ways of using neural networks for machine learning, memory, perception and symbol processing. He has authored or co-authored more than 200 peer-reviewed publications. [2] [31]

While Hinton was a professor at Carnegie Mellon University (1982–1987), David E. Rumelhart, Hinton and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data. [14] In a 2018 interview, [32] Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention." Although this work was important in popularizing backpropagation, it was not the first to suggest the approach. [15] Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974. [15]
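
The flavour of that result can be shown in a few lines of code. Below is a minimal sketch, not a reconstruction of the 1986 paper's actual experiments: a tiny two-layer network trained by backpropagation on XOR, a mapping that a single layer cannot represent. The hidden activations it learns are the kind of internal representation the paper describes; layer size, learning rate and iteration count are arbitrary choices.

    import numpy as np

    # Toy backpropagation demo (illustrative only): learn XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
    b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass; h is the learned internal representation.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: chain rule applied from the output layer inward,
        # using a squared-error loss.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent updates.
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # typically approaches [[0], [1], [1], [0]]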

During the same period, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski. [33] His other contributions to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines and Products of Experts. In 2007, Hinton co-authored an unsupervised learning paper titled Unsupervised Learning of Image Transformations. [34] An accessible introduction to his research can be found in his Scientific American articles of September 1992 and October 1993. [35]

In October and November 2017 respectively, Hinton published two open-access research papers [36] [37] on the theme of capsule neural networks, which, according to Hinton, are "finally something that works well". [38]

Notable former PhD students and postdoctoral researchers from his group include Richard Zemel, [3] [6] Brendan Frey, [7] Radford M. Neal, [8] Ruslan Salakhutdinov, [9] Ilya Sutskever, [10] Yann LeCun [39] and Zoubin Ghahramani.

Honours and awards

From left to right: Russ Salakhutdinov, Richard S. Sutton, Geoffrey Hinton, Yoshua Bengio and Steve Jurvetson in 2016

Hinton was elected a Fellow of the Royal Society (FRS) in 1998. [11] He was the first winner of the Rumelhart Prize in 2001. [40] His certificate of election for the Royal Society reads:

Geoffrey E. Hinton is internationally distinguished for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher. This may well be the start of autonomous intelligent brain-like machines. He has compared effects of brain damage with effects of losses in such a net, and found striking similarities with human impairment, such as for recognition of names and losses of categorization. His work includes studies of mental imagery, and inventing puzzles for testing originality and creative intelligence. It is conceptual, mathematically sophisticated and experimental. He brings these skills together with striking effect to produce important work of great interest. [41]

In 2001, Hinton was awarded an honorary doctorate from the University of Edinburgh. [42] He was the 2005 recipient of the IJCAI Award for Research Excellence, a lifetime-achievement award.[ citation needed ] He was also awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering. [43] In 2013, Hinton was awarded an honorary doctorate from the Université de Sherbrooke.[ citation needed ]

In 2016, he was elected a foreign member of the National Academy of Engineering "For contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision". [44] He also received the 2016 IEEE/RSE Wolfson James Clerk Maxwell Award. [45]

He has won the BBVA Foundation Frontiers of Knowledge Award (2016) in the Information and Communication Technologies category "for his pioneering and highly influential work" to endow machines with the ability to learn.[ citation needed ]

Together with Yann LeCun and Yoshua Bengio, Hinton won the 2018 Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. [46] [47] [48]

In 2018, he was appointed a Companion of the Order of Canada. [49]

Personal life

Hinton is the great-great-grandson both of the logician George Boole, whose work eventually became one of the foundations of modern computer science, and of the surgeon and author James Hinton, [50] who was the father of Charles Howard Hinton. Hinton's father was the entomologist Howard Hinton. [1] [51] His middle name comes from another relative, George Everest. [27] He is the nephew of the economist Colin Clark. [52] He lost his first wife to ovarian cancer in 1994. [53]

Views

Hinton moved from the U.S. to Canada in part due to disillusionment with Ronald Reagan-era politics and disapproval of military funding of artificial intelligence. [27]

Hinton has petitioned against lethal autonomous weapons. Regarding existential risk from artificial intelligence, Hinton typically declines to make predictions more than five years into the future, noting that exponential progress makes the uncertainty too great. [54] However, in an informal conversation with the AI risk researcher Nick Bostrom in November 2015, overheard by the journalist Raffi Khatchadourian, [55] he is reported to have said that he did not expect general AI to be achieved for decades ("no sooner than 2070"). Bostrom had earlier drawn a dichotomy between people who think that managing existential risk from artificial intelligence is probably hopeless and those who think it is easy enough that it will be solved automatically; in those terms, Hinton said that he "[is] in the camp that is hopeless". [55] He has stated, "I think political systems will use it to terrorize people", and has expressed his belief that agencies like the National Security Agency (NSA) are already attempting to abuse similar technology. [55]

Asked by Nick Bostrom why he continues research despite his grave concerns, Hinton stated, "I could give you the usual arguments. But the truth is that the prospect of discovery is too sweet." [55]

According to the same report, Hinton does not categorically rule out human beings controlling an artificial superintelligence, but warns that "there is not a good track record of less intelligent things controlling things of greater intelligence". [55]

Related Research Articles

Jürgen Schmidhuber is a German computer scientist most noted for his work in the fields of artificial intelligence, deep learning and artificial neural networks. He is a co-director of the Dalle Molle Institute for Artificial Intelligence Research in Manno, in the district of Lugano, in Ticino in southern Switzerland. He is sometimes called the "father of (modern) AI" and, on one occasion, the "father of deep learning".

In machine learning, backpropagation is a widely used algorithm in training feedforward neural networks for supervised learning. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally – a class of algorithms referred to generically as "backpropagation". In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.
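
To make the layer-by-layer recursion concrete, here is a schematic sketch in Python, assuming fully connected layers with sigmoid activations, a squared-error loss and no bias terms; all names are illustrative rather than any library's API:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop(weights, x, target):
        """Gradients for one input-output example; weights is a list of matrices."""
        # Forward pass: cache every layer's activation for reuse later.
        activations = [x]
        for W in weights:
            activations.append(sigmoid(activations[-1] @ W))

        # Backward pass: each layer's delta is built from the one above it,
        # iterating from the last layer so shared chain-rule factors are
        # computed only once -- the dynamic-programming step described above.
        a = activations[-1]
        delta = (a - target) * a * (1 - a)  # squared-error loss at the output
        grads = [None] * len(weights)
        for l in reversed(range(len(weights))):
            grads[l] = np.outer(activations[l], delta)
            if l > 0:
                a = activations[l]
                delta = (delta @ weights[l].T) * a * (1 - a)
        return grads  # one gradient matrix per weight matrix

Each returned matrix can then feed a gradient-descent update such as weights[l] -= learning_rate * grads[l].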

A neural network is a network or circuit of neurons or, in a modern sense, an artificial neural network composed of artificial neurons or nodes. A neural network is thus either a biological neural network, made up of real biological neurons, or an artificial neural network, used for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights: a positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed, an operation referred to as a linear combination. Finally, an activation function controls the amplitude of the output; an acceptable range of output is usually between 0 and 1, or between −1 and 1.
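
In code, such a neuron reduces to a weighted sum followed by an activation function. A minimal sketch (the input, weight and bias values below are arbitrary):

    import numpy as np

    inputs = np.array([0.5, -1.0, 2.0])
    weights = np.array([0.8, 0.3, -0.5])  # positive = excitatory, negative = inhibitory
    bias = 0.1

    z = np.dot(inputs, weights) + bias    # the linear combination
    output = 1.0 / (1.0 + np.exp(-z))     # sigmoid keeps the amplitude in (0, 1)
    # np.tanh(z) would keep it in (-1, 1) instead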

David Everett Rumelhart was an American psychologist who made many contributions to the formal analysis of human cognition, working primarily within the frameworks of mathematical psychology, symbolic artificial intelligence, and parallel distributed processing. He also admired formal linguistic approaches to cognition, and explored the possibility of formulating a formal grammar to capture the structure of stories.

The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979. It has been used for handwritten character recognition and other pattern recognition tasks, and served as the inspiration for convolutional neural networks.

CIFAR (the Canadian Institute for Advanced Research) is a charitable organization based in Canada that brings together teams of top researchers from around the world to address important and complex questions. It was founded in 1982 and is supported by individuals, foundations and corporations, as well as by funding from the Government of Canada and the provinces of Quebec, British Columbia and Alberta.

Léon Bottou is a researcher best known for his work in machine learning and data compression. His work presents stochastic gradient descent as a fundamental learning algorithm. He is also one of the main creators of the DjVu image compression technology, and the maintainer of DjVuLibre, the open source implementation of DjVu. He is the original developer of the Lush programming language.

Yann André LeCun is a French-American computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University, and Vice President, Chief AI Scientist at Facebook.

Kunihiko Fukushima is a Japanese computer scientist, most noted for his work on artificial neural networks and deep learning. He is currently working part-time as a Senior Research Scientist at the Fuzzy Logic Systems Institute in Tokyo.

There are many types of artificial neural networks (ANNs).

Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.

Google Brain is a deep learning artificial intelligence research team at Google. Formed in the early 2010s, Google Brain combines open-ended machine learning research with information systems and large-scale computing resources.

Yoshua Bengio is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning. He was a co-recipient of the 2018 ACM A.M. Turing Award for his work in deep learning. He is a professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

Alex Graves is a research scientist at DeepMind. He did a BSc in Theoretical Physics at Edinburgh and obtained a PhD in AI under Jürgen Schmidhuber at IDSIA. He was also a postdoc at TU Munich and under Geoffrey Hinton at the University of Toronto.

This page is a timeline of machine learning. Major discoveries, achievements, milestones and other major events are included.

Ian J. Goodfellow is an American researcher working in machine learning, currently employed at Apple Inc. as its director of machine learning in the Special Projects Group. He was previously a research scientist at Google Brain. He has made several contributions to the field of deep learning.

Wojciech Zaremba is a Polish mathematician and computer scientist, and a co-founder of OpenAI (2016–present), where he leads the robotics team. His team works on developing general-purpose robots via new approaches to transfer learning and teaching robots complex behaviors. The mission of OpenAI is to build safe artificial intelligence (AI) and to ensure that its benefits are as evenly distributed as possible.

AlexNet is the name of a convolutional neural network (CNN), designed by Alex Krizhevsky, and published with Ilya Sutskever and Krizhevsky's doctoral advisor Geoffrey Hinton.

The history of artificial neural networks (ANN) began with Warren McCulloch and Walter Pitts (1943) who created a computational model for neural networks based on algorithms called threshold logic. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. This work led to work on nerve networks and their link to finite automata.

LeNet is a convolutional neural network structure proposed by Yann LeCun et al. in 1998. In general, LeNet refers to LeNet-5, a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons respond only to a local region of the input (their receptive field); they perform well in large-scale image processing.
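
A toy sketch of that local connectivity, with a made-up kernel and image: each output unit looks only at a small patch of the input (its receptive field), and the same kernel weights are reused at every position:

    import numpy as np

    def conv2d(image, kernel):
        """Valid-mode 2D convolution (cross-correlation, as in most CNNs)."""
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                patch = image[i:i + kh, j:j + kw]   # this unit's receptive field
                out[i, j] = np.sum(patch * kernel)  # shared weights at every position
        return out

    kernel = np.array([[1.0, -1.0]])                # responds to horizontal contrast
    image = np.array([[0.0, 0.0, 1.0, 1.0],
                      [0.0, 0.0, 1.0, 1.0]])
    print(conv2d(image, kernel))                    # nonzero only where the image changes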

References

  1. Anon (2015). "Hinton, Prof. Geoffrey Everest". Who's Who (online Oxford University Press ed.). A & C Black, an imprint of Bloomsbury Publishing plc. doi:10.1093/ww/9780199540884.013.20261. (subscription or UK public library membership required)
  2. Geoffrey Hinton publications indexed by Google Scholar.
  3. Geoffrey Hinton at the Mathematics Genealogy Project.
  4. Geoffrey E. Hinton's Academic Genealogy.
  5. Gregory, R. L.; Murrell, J. N. (2006). "Hugh Christopher Longuet-Higgins. 11 April 1923 – 27 March 2004: Elected FRS 1958". Biographical Memoirs of Fellows of the Royal Society. 52: 149–166. doi:10.1098/rsbm.2006.0012.
  6. Zemel, Richard Stanley (1994). A minimum description length framework for unsupervised learning (PhD thesis). University of Toronto. OCLC 222081343. ProQuest 304161918.
  7. Frey, Brendan John (1998). Bayesian networks for pattern classification, data compression, and channel coding (PhD thesis). University of Toronto. OCLC 46557340. ProQuest 304396112.
  8. Neal, Radford (1995). Bayesian learning for neural networks (PhD thesis). University of Toronto. OCLC 46499792. ProQuest 304260778.
  9. Salakhutdinov, Ruslan (2009). Learning deep generative models (PhD thesis). University of Toronto. ISBN 9780494610800. OCLC 785764071. ProQuest 577365583.
  10. Sutskever, Ilya (2013). Training Recurrent Neural Networks (PhD thesis). University of Toronto. OCLC 889910425. ProQuest 1501655550.
  11. Anon (1998). "Professor Geoffrey Hinton FRS". London: Royal Society. Archived from the original on 3 November 2015. One or more of the preceding sentences incorporates text from the royalsociety.org website, where "All text published under the heading 'Biography' on Fellow profile pages is available under Creative Commons Attribution 4.0 International License." – "Royal Society Terms, conditions and policies". Archived from the original on 11 November 2016. Retrieved 9 March 2016.
  12. Hernandez, Daniela (7 May 2013). "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI". Wired. Retrieved 10 May 2013.
  13. "Geoffrey E. Hinton – Google AI". Google AI.
  14. Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (9 October 1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. ISSN 1476-4687.
  15. Schmidhuber, Jürgen (1 January 2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637.
  16. "Geoffrey Hinton was briefly a Google intern in 2012 because of bureaucracy – TechCrunch". techcrunch.com. Retrieved 28 March 2018.
  17. Somers, James. "Progress in AI seems like it's accelerating, but here's why it could be plateauing". MIT Technology Review. Retrieved 28 March 2018.
  18. "How U of T's 'godfather' of deep learning is reimagining AI". University of Toronto News. Retrieved 28 March 2018.
  19. "'Godfather' of deep learning is reimagining AI". Retrieved 28 March 2018.
  20. "Geoffrey Hinton, the 'godfather' of deep learning, on AlphaGo". Macleans.ca. 18 March 2016. Retrieved 28 March 2018.
  21. Gershgorn, Dave (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz. Retrieved 5 October 2018.
  22. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (3 December 2012). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems 25. Curran Associates Inc.: 1097–1105.
  23. "How a Toronto professor's research revolutionized artificial intelligence". Toronto Star. Retrieved 13 March 2018.
  24. Chung, Emily (27 March 2019). "Canadian researchers who taught AI to learn like humans win $1M award". CBC News. Retrieved 27 March 2019.
  26. Hinton, Geoffrey Everest (1977). Relaxation and its role in vision (PhD thesis). University of Edinburgh. hdl:1842/8121. OCLC 18656113. EThOS uk.bl.ethos.482889.
  27. Smith, Craig S. (23 June 2017). "The Man Who Helped Turn Toronto into a High-Tech Hotbed". The New York Times. Retrieved 27 June 2017.
  28. https://www.cs.toronto.edu/~hinton/fullcv.pdf
  29. "Archived copy". Archived from the original on 31 December 2016. Retrieved 30 December 2016.
  30. "U of T neural networks start-up acquired by Google" (Press release). Toronto, ON. 12 March 2013. Retrieved 13 March 2013.
  31. Geoffrey Hinton publications indexed by the Scopus bibliographic database. (subscription required)
  32. Ford, Martin (2018). Architects of Intelligence: The truth about AI from the people building it. Packt Publishing. ISBN 978-1-78913-151-2.
  33. Ackley, David H.; Hinton, Geoffrey E.; Sejnowski, Terrence J. (1985). "A learning algorithm for Boltzmann machines". Cognitive Science. 9 (1): 147–169.
  34. Hinton, Geoffrey E. "Geoffrey E. Hinton's Publications in Reverse Chronological Order".
  35. "Stories by Geoffrey E. Hinton in Scientific American".
  36. Sabour, Sara; Frosst, Nicholas; Hinton, Geoffrey (October 2017). "Dynamic Routing Between Capsules".
  37. "Matrix capsules with EM routing" (3 November 2017). OpenReview.net.
  38. Geib, Claudia (2 November 2017). "We've Finally Created an AI Network That's Been Decades in the Making". Futurism.com.
  39. "Yann LeCun's Research and Contributions". yann.lecun.com. Retrieved 13 March 2018.
  40. "Current and Previous Recipients". David E. Rumelhart Prize. Archived from the original on 2 March 2017.
  41. Anon (1998). "Certificate of election EC/1998/21: Geoffrey Everest Hinton". London: Royal Society. Archived from the original on 5 November 2015.
  42. "Distinguished Edinburgh graduate receives ACM A.M. Turing Award". Retrieved 9 April 2019.
  43. "Artificial intelligence scientist gets $1M prize". CBC News. 14 February 2011.
  44. "National Academy of Engineering Elects 80 Members and 22 Foreign Members". NAE. 8 February 2016.
  45. "2016 IEEE Medals and Recognitions Recipients and Citations" (PDF). IEEE. Retrieved 7 July 2016.
  46. "Vector Institute's Chief Scientific Advisor Dr. Geoffrey Hinton Receives ACM A.M. Turing Award Alongside Dr. Yoshua Bengio and Dr. Yann LeCun". 27 March 2019.
  47. "Three Pioneers in Artificial Intelligence Win Turing Award". The New York Times. 27 March 2019. Retrieved 27 March 2019.
  48. "Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award – Bengio, Hinton and LeCun Ushered in Major Breakthroughs in Artificial Intelligence". Association for Computing Machinery. 27 March 2019. Retrieved 27 March 2019.
  49. "Governor General Announces 103 New Appointments to the Order of Canada, December 2018".
  50. "The Isaac Newton of logic".
  51. Salt, George (1978). "Howard Everest Hinton. 24 August 1912 – 2 August 1977". Biographical Memoirs of Fellows of the Royal Society. 24: 150–182. doi:10.1098/rsbm.1978.0006. ISSN 0080-4606.
  52. Shute, Joe (26 August 2017). "The 'Godfather of AI' on making machines clever and whether robots really will learn to kill us all?". The Telegraph. Retrieved 20 December 2017.
  53. Shute, Joe (26 August 2017). "The 'Godfather of AI' on making machines clever and whether robots really will learn to kill us all?". The Telegraph. Retrieved 30 January 2018.
  54. Hinton, Geoffrey. "Lecture 16d: The fog of progress" (PDF).
  55. Khatchadourian, Raffi (16 November 2015). "The Doomsday Invention". The New Yorker. Retrieved 30 January 2018.