Geoffrey Hinton

Hinton speaking at the Nobel Prize Lectures in Stockholm in 2024
Born: Geoffrey Everest Hinton, 6 December 1947 (age 77), Wimbledon, London, England [1]
Education: Clifton College
Spouses: Joanne; Rosalind Zalin (died 1994); Jacqueline Ford (m. 1997; died 2018)
Children: 2
Father: H. E. Hinton
Relatives: Colin Clark (uncle)
Thesis: Relaxation and its role in vision (1977)
Doctoral advisor: Christopher Longuet-Higgins
Website: www.cs.toronto.edu/~hinton/

Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist, cognitive scientist, and cognitive psychologist known for his work on artificial neural networks, which earned him the title of the "Godfather of AI".

Hinton is University Professor Emeritus at the University of Toronto. From 2013 to 2023, he divided his time between Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the many risks of artificial intelligence (AI) technology. [9] [10] In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto. [11] [12]

With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks, [13] although they were not the first to propose the approach. [14] Hinton is viewed as a leading figure in the deep learning community. [20] AlexNet, an image-recognition network designed in collaboration with his students Alex Krizhevsky [21] and Ilya Sutskever for the 2012 ImageNet challenge, [22] was a breakthrough in the field of computer vision. [23]

Hinton received the 2018 Turing Award, often referred to as the "Nobel Prize of Computing", together with Yoshua Bengio and Yann LeCun, for their work on deep learning. [24] They are sometimes referred to as the "Godfathers of Deep Learning", [25] [26] and have continued to give public talks together. [27] [28] He was also awarded the 2024 Nobel Prize in Physics, shared with John Hopfield. [29] [30]

In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I." [31] He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence. [32] He noted that establishing safety guidelines will require cooperation among those competing in the use of AI in order to avoid the worst outcomes. [33] After receiving the Nobel Prize, he called for urgent research into AI safety to figure out how to control AI systems smarter than humans. [34] [35]

Education

Hinton was educated at Clifton College in Bristol [36] and at the University of Cambridge, where he was an undergraduate at King's College. After repeatedly switching his degree between subjects including natural sciences, history of art, and philosophy, he eventually graduated with a BA degree in experimental psychology in 1970. [1] He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins. [37] [38]

Career and research

After his PhD, Hinton worked at the University of Sussex and at the MRC Applied Psychology Unit and, after difficulty finding funding in Britain, [39] at the University of California, San Diego, and Carnegie Mellon University. [1] He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London. [1] He is currently [40] University Professor Emeritus in the computer science department at the University of Toronto, with which he has been affiliated since 1987. [41]

Upon his arrival in Canada in 1987, Hinton was appointed a Fellow of the Canadian Institute for Advanced Research (CIFAR) in its first research program, Artificial Intelligence, Robotics & Society. [42] In 2004, Hinton and collaborators successfully proposed the launch of a new program at CIFAR, Neural Computation and Adaptive Perception [43] (NCAP, today named Learning in Machines & Brains). Hinton went on to lead NCAP for ten years. [44] Among the members of the program are Yoshua Bengio and Yann LeCun, with whom Hinton would go on to win the ACM A.M. Turing Award in 2018. [45] All three Turing winners continue to be members of the CIFAR Learning in Machines & Brains program. [46]

Hinton taught a free online course on Neural Networks on the education platform Coursera in 2012. [47] He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, and was at that time planning to "divide his time between his university research and his work at Google". [48]

Hinton's research concerns ways of using neural networks for machine learning, memory, perception, and symbol processing. He has written or co-written more than 200 peer-reviewed publications. [2] [49]

While Hinton was a postdoc at UC San Diego, David E. Rumelhart, Hinton, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data. [13] In a 2018 interview, [50] Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention". Although this work was important in popularising backpropagation, it was not the first to suggest the approach. [14] Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974. [14]
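
The core idea can be shown in a minimal sketch (an illustration only, not the implementation from the 1986 paper): a tiny two-layer sigmoid network learns the XOR function by propagating error derivatives backwards through its layers and adjusting the weights by gradient descent. The layer sizes, learning rate, and step count below are arbitrary illustrative choices.

```python
# Minimal backpropagation sketch (illustrative only): a 2-8-1 sigmoid network
# trained on XOR with hand-derived gradients and full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output

    # Backward pass: error derivatives flow from the output layer to the hidden layer
    d_out = (out - y) * out * (1 - out)     # d(squared error)/d(output pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)      # d(squared error)/d(hidden pre-activation)

    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # typically converges toward [[0], [1], [1], [0]]
```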

In 1985, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski. [51] His other contributions to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines, and products of experts. [52] An accessible introduction to his research can be found in his Scientific American articles of September 1992 and October 1993. [53] In 2007, Hinton co-authored an unsupervised learning paper titled Unsupervised learning of image transformations. [54] In 2008, he developed the visualization method t-SNE with Laurens van der Maaten. [55] [56]
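
In current practice, t-SNE is usually applied through off-the-shelf implementations. A minimal usage sketch, assuming scikit-learn's TSNE class rather than the authors' original code, embeds the 64-dimensional handwritten-digits dataset into two dimensions for plotting:

```python
# Minimal t-SNE usage sketch (assumes scikit-learn; not the authors' original code).
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, labels = load_digits(return_X_y=True)    # 1797 handwritten digits, 64 features each
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)                      # (1797, 2): one 2-D point per digit, ready to plot
```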

In October and November 2017, Hinton published two open access research papers on the theme of capsule neural networks, [57] [58] which according to Hinton, are "finally something that works well". [59]

At the 2022 Conference on Neural Information Processing Systems (NeurIPS), he introduced a new learning algorithm for neural networks that he calls the "Forward-Forward" algorithm. The idea is to replace the traditional forward-backward passes of backpropagation with two forward passes, one on positive (i.e. real) data and the other on negative data that could be generated solely by the network. [60] [61]
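
A heavily simplified sketch of the idea follows (an illustration under stated assumptions, not Hinton's reference implementation). Each layer is trained with a purely local objective that pushes its "goodness", here the sum of squared activations, above a threshold on positive data and below it on negative data, so no error signal is propagated backwards across layers; the toy data, layer sizes, learning rate, and threshold are arbitrary choices.

```python
# Simplified Forward-Forward-style sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def normalise(x):
    # Length-normalise layer outputs so the next layer cannot simply read off the goodness.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

def train_layer(x_pos, x_neg, out_dim, lr=0.03, steps=500, theta=2.0):
    """Train one layer with a local objective only; returns its weights."""
    W = rng.normal(scale=0.1, size=(x_pos.shape[1], out_dim))
    for _ in range(steps):
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            pre = x @ W
            h = np.maximum(pre, 0.0)                          # ReLU activations
            goodness = (h ** 2).sum(axis=1, keepdims=True)    # per-example goodness
            p = sigmoid(sign * (goodness - theta))            # want p -> 1 on both passes
            # Local gradient ascent on log p; no error signal crosses layer boundaries.
            grad = x.T @ ((1.0 - p) * sign * 2.0 * h * (pre > 0))
            W += lr * grad / x.shape[0]
    return W

# Toy stand-ins: "positive" (structured) data versus "negative" (noise) data.
x_pos = rng.normal(loc=1.0, size=(256, 10))
x_neg = rng.normal(loc=0.0, size=(256, 10))

W1 = train_layer(x_pos, x_neg, out_dim=16)
# The next layer is trained the same way on the normalised outputs of the first.
h_pos = normalise(np.maximum(x_pos @ W1, 0.0))
h_neg = normalise(np.maximum(x_neg @ W1, 0.0))
W2 = train_layer(h_pos, h_neg, out_dim=16)
```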

In May 2023, Hinton publicly announced his resignation from Google. He explained his decision by saying that he wanted to "freely speak out about the risks of A.I." and added that a part of him now regrets his life's work. [9] [31]

Notable former PhD students and postdoctoral researchers from his group include Peter Dayan, [62] Sam Roweis, [62] Max Welling, [62] Richard Zemel, [37] [3] Brendan Frey, [4] Radford M. Neal, [5] Yee Whye Teh, [6] Ruslan Salakhutdinov, [7] Ilya Sutskever, [8] Yann LeCun, [63] Alex Graves, [62] Zoubin Ghahramani, [62] and Peter Fitzhugh Brown. [64]

Honours and awards

In 2016, from left to right: Russ Salakhutdinov, Richard S. Sutton, Geoffrey Hinton, Yoshua Bengio, and Steve Jurvetson

Hinton was elected a Fellow of the Royal Society (FRS) in 1998. [65] He was the first winner of the Rumelhart Prize in 2001. [66] His certificate of election for the Royal Society reads:

Geoffrey E. Hinton is internationally known for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher. He has compared effects of brain damage with effects of losses in such a net, and found striking similarities with human impairment, such as for recognition of names and losses of categorisation. His work includes studies of mental imagery, and inventing puzzles for testing originality and creative intelligence. It is conceptual, mathematically sophisticated, and experimental. He brings these skills together with striking effect to produce important work of great interest. [67]

In 2001, Hinton was awarded an honorary doctorate from the University of Edinburgh. [68] He was the 2005 recipient of the IJCAI Award for Research Excellence lifetime-achievement award. [69] He was awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering. [70] In 2012, he received the Canada Council Killam Prize in Engineering. In 2013, Hinton was awarded an honorary doctorate from the Université de Sherbrooke. [71]

In 2016, he was elected a foreign member of the National Academy of Engineering "for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision". [72] He received the 2016 IEEE/RSE Wolfson James Clerk Maxwell Award. [73]

He won the BBVA Foundation Frontiers of Knowledge Award (2016) in the Information and Communication Technologies category, "for his pioneering and highly influential work" to endow machines with the ability to learn. [74]

Together with Yann LeCun and Yoshua Bengio, Hinton won the 2018 Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. [75] [76] [77]

In 2018, he became a Companion of the Order of Canada. [78] In 2021, he received the Dickson Prize in Science from Carnegie Mellon University, [79] and in 2022 the Princess of Asturias Award in the Scientific Research category, along with Yann LeCun, Yoshua Bengio, and Demis Hassabis. [80] In 2023, he was named an ACM Fellow [81] and was named a Highly Ranked Scholar by ScholarGPS for both lifetime achievement and the prior five years. [82]

In 2024, he was jointly awarded the Nobel Prize in Physics with John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks." His development of the Boltzmann machine was explicitly mentioned in the citation. [29] [83] When the New York Times reporter Cade Metz asked Hinton to explain in simpler terms how the Boltzmann machine could "pretrain" backpropagation networks, Hinton quipped that Richard Feynman reportedly said: "Listen, buddy, if I could explain it in a couple of minutes, it wouldn't be worth the Nobel Prize." [84]

Views

Risks of artificial intelligence

External video: "Geoffrey Hinton shares his thoughts on AI's benefits and dangers", 60 Minutes (YouTube)

In 2023, Hinton expressed concerns about the rapid progress of AI. [32] [31] Hinton previously believed that artificial general intelligence (AGI) was "30 to 50 years or even longer away." [31] However, in a March 2023 interview with CBS, he stated that "general-purpose AI" may be fewer than 20 years away and could bring about changes "comparable in scale with the industrial revolution or electricity." [32]

In an interview with The New York Times published on 1 May 2023, [31] Hinton announced his resignation from Google so he could "talk about the dangers of AI without considering how this impacts Google." [85] He noted that "a part of him now regrets his life's work". [31] [10]

In early May 2023, Hinton said in an interview with the BBC that AI might soon surpass the information capacity of the human brain, describing some of the risks posed by AI chatbots as "quite scary". He explained that chatbots can learn independently and share knowledge: whenever one copy acquires new information, it is automatically disseminated to the entire group, allowing AI chatbots to accumulate knowledge far beyond the capacity of any individual. [86]

Existential risk from AGI

Hinton has expressed concerns about the possibility of an AI takeover, stating that "it's not inconceivable" that AI could "wipe out humanity". [32] Hinton states that AI systems capable of intelligent agency will be useful for military or economic purposes. [87] He worries that generally intelligent AI systems could "create sub-goals" that are unaligned with their programmers' interests. [88] He states that AI systems may become power-seeking or prevent themselves from being shut off, not because programmers intended them to, but because those sub-goals are useful for achieving later goals. [86] In particular, Hinton says "we have to think hard about how to control" AI systems capable of self-improvement. [89]

Catastrophic misuse

Hinton reports concerns about deliberate misuse of AI by malicious actors, stating that "it is hard to see how you can prevent the bad actors from using [AI] for bad things." [31] In 2017, Hinton called for an international ban on lethal autonomous weapons. [90]

Economic impacts

Hinton was previously optimistic about the economic effects of AI, noting in 2018 that: "The phrase 'artificial general intelligence' carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don't think it's going to be that. I think more and more of the routine things we do are going to be replaced by AI systems." [91] Hinton also previously argued that AGI won't make humans redundant: "[AI in the future is] going to know a lot about what you're probably going to want to do... But it's not going to replace you." [91]

In 2023, however, Hinton became "worried that AI technologies will in time upend the job market" and take away more than just "drudge work". [31] In 2024, he said the British government will have to establish a universal basic income to deal with the impact of AI on inequality. [92] In Hinton's view, AI will boost productivity and generate more wealth, but unless the government intervenes, it will only make the rich richer and hurt the people who might lose their jobs. "That's going to be very bad for society," he said. [93]

Politics

Hinton moved from the U.S. to Canada in part due to disillusionment with Ronald Reagan-era politics and disapproval of military funding of artificial intelligence. [39]

In August 2024, Hinton co-authored a letter with Yoshua Bengio, Stuart Russell, and Lawrence Lessig in support of SB 1047, a California AI safety bill that would require companies training models which cost more than $100 million to perform risk assessments before deployment. They claimed the legislation was the "bare minimum for effective regulation of this technology." [94] [95]

Personal life

Hinton's second wife, Rosalind Zalin, died of ovarian cancer in 1994; his third wife, Jacqueline "Jackie" Ford, died of pancreatic cancer in 2018. [96] [97]

Hinton is the great-great-grandson of the mathematician and educator Mary Everest Boole and her husband, the logician George Boole. [98] George Boole's work eventually became one of the foundations of modern computer science. Another great-great-grandfather of his was the surgeon and author James Hinton, [99] who was the father of the mathematician Charles Howard Hinton.

Hinton's father was the entomologist Howard Hinton. [1] [100] His middle name comes from another relative, George Everest, the Surveyor General of India after whom the mountain is named. [39] He is the nephew of the economist Colin Clark. [101]


References

  1. 1 2 3 4 5 "Hinton, Prof. Geoffrey Everest" . Who's Who (176th ed.). Oxford University Press. 2023. doi:10.1093/ww/9780199540884.013.20261.(Subscription or UK public library membership required.)
  2. 1 2 Geoffrey Hinton publications indexed by Google Scholar
  3. 1 2 Zemel, Richard Stanley (1994). A minimum description length framework for unsupervised learning (PhD thesis). University of Toronto. OCLC   222081343. ProQuest   304161918.
  4. 1 2 Frey, Brendan John (1998). Bayesian networks for pattern classification, data compression, and channel coding (PhD thesis). University of Toronto. OCLC   46557340. ProQuest   304396112.
  5. 1 2 Neal, Radford (1995). Bayesian learning for neural networks (PhD thesis). University of Toronto. OCLC   46499792. ProQuest   304260778.
  6. 1 2 Whye Teh, Yee (2003). Bethe free energy and contrastive divergence approximations for undirected graphical models. utoronto.ca (PhD thesis). University of Toronto. hdl:1807/122253. OCLC   56683361. ProQuest   305242430. Archived from the original on 30 March 2023. Retrieved 30 March 2023.
  7. 1 2 Salakhutdinov, Ruslan (2009). Learning deep generative models (PhD thesis). University of Toronto. ISBN   978-0-494-61080-0. OCLC   785764071. ProQuest   577365583.
  8. 1 2 Sutskever, Ilya (2013). Training Recurrent Neural Networks. utoronto.ca (PhD thesis). University of Toronto. hdl:1807/36012. OCLC   889910425. ProQuest   1501655550. Archived from the original on 26 March 2023. Retrieved 30 March 2023.
  9. 1 2 Douglas Heaven, Will (1 May 2023). "Deep learning pioneer Geoffrey Hinton quits Google". MIT Technology Review . Archived from the original on 1 May 2023. Retrieved 1 May 2023.
  10. 1 2 Taylor, Josh; Hern, Alex (2 May 2023). "'Godfather of AI' Geoffrey Hinton quits Google and warns over dangers of misinformation". The Guardian. Retrieved 8 October 2024.
  11. Hernandez, Daniela (7 May 2013). "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI". Wired . Archived from the original on 8 February 2014. Retrieved 10 May 2013.
  12. "Geoffrey E. Hinton – Google AI". Google AI. Archived from the original on 9 November 2019. Retrieved 15 June 2018.
  13. 1 2 Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (9 October 1986). "Learning representations by back-propagating errors". Nature . 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. ISSN   1476-4687. S2CID   205001834.
  14. 1 2 3 Schmidhuber, Jürgen (1 January 2015). "Deep learning in neural networks: An overview". Neural Networks . 61: 85–117. arXiv: 1404.7828 . doi:10.1016/j.neunet.2014.09.003. PMID   25462637. S2CID   11715509.
  15. Mannes, John (14 September 2017). "Geoffrey Hinton was briefly a Google intern in 2012 because of bureaucracy – TechCrunch". TechCrunch . Archived from the original on 17 March 2020. Retrieved 28 March 2018.
  16. Somers, James (29 September 2017). "Progress in AI seems like it's accelerating, but here's why it could be plateauing". MIT Technology Review . Archived from the original on 20 May 2018. Retrieved 28 March 2018.
  17. Sorensen, Chris (2 November 2017). "How U of T's 'godfather' of deep learning is reimagining AI". University of Toronto News. Archived from the original on 6 April 2019. Retrieved 28 March 2018.
  18. Sorensen, Chris (3 November 2017). "'Godfather' of deep learning is reimagining AI". Phys.org . Archived from the original on 13 April 2019. Retrieved 28 March 2018.
  19. Lee, Adrian (18 March 2016). "Geoffrey Hinton, the 'godfather' of deep learning, on AlphaGo". Maclean's . Archived from the original on 6 March 2020. Retrieved 28 March 2018.
  20. [15] [16] [17] [18] [19]
  21. Gershgorn, Dave (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz . Archived from the original on 12 December 2019. Retrieved 5 October 2018.
  22. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (3 December 2012). "ImageNet classification with deep convolutional neural networks". In F. Pereira; C. J. C. Burges; L. Bottou; K. Q. Weinberger (eds.). NIPS'12: Proceedings of the 25th International Conference on Neural Information Processing Systems. Vol. 1. Curran Associates. pp. 1097–1105. Archived from the original on 20 December 2019. Retrieved 13 March 2018.
  23. Allen, Kate (17 April 2015). "How a Toronto professor's research revolutionized artificial intelligence". Toronto Star . Archived from the original on 17 April 2015. Retrieved 13 March 2018.
  24. Chung, Emily (27 March 2019). "Canadian researchers who taught AI to learn like humans win $1M award". Canadian Broadcasting Corporation . Archived from the original on 26 February 2020. Retrieved 27 March 2019.
  25. Ranosa, Ted (29 March 2019). "Godfathers Of AI Win This Year's Turing Award And $1 Million". Tech Times. Archived from the original on 30 March 2019. Retrieved 5 November 2020.
  26. Shead, Sam (27 March 2019). "The 3 'Godfathers' Of AI Have Won The Prestigious $1M Turing Prize". Forbes . Archived from the original on 14 April 2020. Retrieved 5 November 2020.
  27. Ray, Tiernan (9 March 2021). "Nvidia's GTC will feature deep learning cabal of LeCun, Hinton, and Bengio". ZDNet . Archived from the original on 19 March 2021. Retrieved 7 April 2021.
  28. "50 Years at CMU: The Inaugural Raj Reddy Artificial Intelligence Lecture". Carnegie Mellon University. 18 November 2020. Archived from the original on 2 March 2022. Retrieved 2 March 2022.
  29. 1 2 "Press release: The Nobel Prize in Physics 2024". NobelPrize.org. Retrieved 8 October 2024.
  30. "Geoffrey Hinton from University of Toronto awarded Nobel Prize in Physics". CBC News. The Associated Press. 8 October 2024. Retrieved 8 October 2024.
  31. 1 2 3 4 5 6 7 8 Metz, Cade (1 May 2023). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead" . The New York Times . ISSN   0362-4331. Archived from the original on 1 May 2023. Retrieved 1 May 2023.
  32. 1 2 3 4 Jacobson, Dana (host); Silva-Braga, Brook (reporter); Frost, Nick; Hinton, Geoffrey (guests) (25 March 2023). "'Godfather of artificial intelligence' talks impact and potential of new AI". CBS Saturday Morning . Season 12. Episode 12. New York City: CBS News. Archived from the original on 28 March 2023. Retrieved 28 March 2023.
  33. Erlichman, Jon (14 June 2024). "'50-50 chance' that AI outsmarts humanity, Geoffrey Hinton says". BNN Bloomberg. Retrieved 11 October 2024.
  34. Hetzner, Christiaan. "New Nobel Prize winner, AI godfather Geoffrey Hinton, says he's proud his student fired OpenAI boss Sam Altman". Fortune. Retrieved 11 October 2024.
  35. Coates, Jessica (9 October 2024). "Geoffrey Hinton warns of AI's growing danger after Nobel Prize win". The Independent. Retrieved 11 October 2024.
  36. Onstad, Katrina (29 January 2018). "Mr. Robot". Toronto Life. Retrieved 24 December 2023.
  37. 1 2 Geoffrey Hinton at the Mathematics Genealogy Project
  38. Hinton, Geoffrey Everest (1977). Relaxation and its role in vision. Edinburgh Research Archive (PhD thesis). University of Edinburgh. hdl:1842/8121. OCLC 18656113. EThOS uk.bl.ethos.482889. Archived from the original on 30 March 2023. Retrieved 30 March 2023.
  39. 1 2 3 Smith, Craig S. (23 June 2017). "The Man Who Helped Turn Toronto into a High-Tech Hotbed". The New York Times . Archived from the original on 27 January 2020. Retrieved 27 June 2017.
  40. Hinton, Geoffrey E. (6 January 2020). "Curriculum Vitae" (PDF). University of Toronto: Department of Computer Science. Archived (PDF) from the original on 23 July 2020. Retrieved 30 November 2016.
  41. "University of Toronto". discover.research.utoronto.ca. Retrieved 9 October 2024.
  42. "How Canada has emerged as a leader in artificial intelligence". University Affairs. Retrieved 9 October 2024.
  43. "Geoffrey Hinton Biography". CIFAR. Retrieved 8 October 2024.
  44. "Geoffrey E Hinton - A.M. Turing Award Laureate". amturing.acm.org. Retrieved 9 October 2024.
  45. "2018 ACM A.M. Turing Award Laureates". awards.acm.org. Retrieved 9 October 2024.
  46. "CIFAR - Learning in Machines & Brains". CIFAR. Retrieved 8 October 2024.
  47. "Neural Networks for Machine Learning". University of Toronto. Archived from the original on 31 December 2016. Retrieved 30 December 2016.
  48. "U of T neural networks start-up acquired by Google" (Press release). Toronto, ON. 12 March 2013. Archived from the original on 8 October 2019. Retrieved 13 March 2013.
  49. Geoffrey Hinton publications indexed by the Scopus bibliographic database. (subscription required)
  50. Ford, Martin (2018). Architects of Intelligence: The truth about AI from the people building it. Packt Publishing. ISBN   978-1-78913-151-2.
  51. Ackley, David H.; Hinton, Geoffrey E.; Sejnowski, Terrence J. (1985), "A learning algorithm for Boltzmann machines", Cognitive Science, Elsevier, 9 (1): 147–169
  52. Hinton, Geoffrey E. "Geoffrey E. Hinton's Publications in Reverse Chronological Order". Archived from the original on 18 April 2020. Retrieved 15 September 2010.
  53. "Stories by Geoffrey E. Hinton in Scientific American". Scientific American . Archived from the original on 17 October 2019. Retrieved 17 October 2019.
  54. Memisevic, Roland; Hinton, Geoffrey (2006). "Unsupervised Learning of Image Transformations" (PDF). IEEE CVPR.
  55. "An Introduction to t-SNE with Python Example". KDNuggets. Retrieved 22 June 2024.
  56. van der Maaten, Laurens; Hinton, Geoffrey (2008). "Visualizing Data using t-SNE" (PDF). Journal of Machine Learning Research.
  57. Sabour, Sara; Frosst, Nicholas; Hinton, Geoffrey E. (2017). "Dynamic Routing Between Capsules". arXiv: 1710.09829 [cs.CV].
  58. "Matrix capsules with EM routing". OpenReview. Archived from the original on 10 June 2019. Retrieved 8 November 2017.
  59. Geib, Claudia (11 February 2017). "We've finally created an AI network that's been decades in the making". Futurism. Archived from the original on 9 November 2017. Retrieved 3 May 2023.
  60. Hinton, Geoffrey (2022). "The Forward-Forward Algorithm: Some Preliminary Investigations". arXiv: 2212.13345 [cs.LG].
  61. "Hinton's Forward Forward Algorithm is the New Way Ahead for Neural Networks". Analytics India Magazine. 16 December 2022. Retrieved 22 June 2024.
  62. 1 2 3 4 5 Geoffrey Hinton. "Geoffrey Hinton's postdocs". University of Toronto. Archived from the original on 29 October 2020. Retrieved 11 September 2020.
  63. "Yann LeCun's Research and Contributions". yann.lecun.com. Archived from the original on 3 March 2018. Retrieved 13 March 2018.
  64. "A conversation with Renaissance Technologies CEO Peter Brown". goldmansachs.com. Retrieved 21 October 2024.
  65. "Professor Geoffrey Hinton FRS". Royal Society . London. 1998. Archived from the original on 3 November 2015. One or more of the preceding sentences incorporates text from the royalsociety.org website where:
    "All text published under the heading 'Biography' on Fellow profile pages is available under Creative Commons Attribution 4.0 International License." -- "Royal Society Terms, conditions and policies". Archived from the original on 11 November 2016. Retrieved 9 March 2016.
  66. "Current and Previous Recipients". The David E. Rumelhart Prize. Archived from the original on 2 March 2017.
  67. "Certificate of election EC/1998/21: Geoffrey Everest Hinton". Royal Society . London. 1998. Archived from the original on 5 May 2017.
  68. "Distinguished Edinburgh graduate receives ACM A.M. Turing Award". The University of Edinburgh. 2 April 2019. Archived from the original on 14 July 2019. Retrieved 9 April 2019.
  69. "IJCAI-22 Award for Research Excellence". International Joint Conference on Artificial Intelligence . Archived from the original on 20 December 2020. Retrieved 5 August 2021.
  70. "Artificial intelligence scientist gets M prize". CBC News . 14 February 2011. Archived from the original on 17 February 2011. Retrieved 14 February 2011.
  71. "Geoffrey Hinton, keystone researcher in artificial intelligence, visits the Université de Sherbrooke". Université de Sherbrooke . 19 February 2014. Archived from the original on 21 February 2021.
  72. "National Academy of Engineering Elects 80 Members and 22 Foreign Members". National Academy of Engineering . 8 February 2016. Archived from the original on 13 May 2018. Retrieved 13 February 2016.
  73. "2016 IEEE Medals and Recognitions Recipients and Citations" (PDF). Institute of Electrical and Electronics Engineers . Archived from the original (PDF) on 14 November 2016. Retrieved 7 July 2016.
  74. "The BBVA Foundation bestows its award on the architect of the first machines capable of learning the way people do". BBVA Foundation. 17 January 2017. Archived from the original on 4 December 2020. Retrieved 21 February 2021.
  75. "Vector Institutes Chief Scientific Advisor Dr.Geoffrey Hinton Receives ACM A.M. Turing Award Alongside Dr.Yoshua Bengio and Dr.Yann Lecun". Vector Institute for Artificial Intelligence. 27 March 2019. Archived from the original on 27 March 2019. Retrieved 27 March 2019.
  76. Metz, Cade (27 March 2019). "Three Pioneers in Artificial Intelligence Win Turing Award". The New York Times . Archived from the original on 27 March 2019. Retrieved 27 March 2019.
  77. "Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award – Bengio, Hinton and LeCun Ushered in Major Breakthroughs in Artificial Intelligence". Association for Computing Machinery . 27 March 2019. Archived from the original on 27 March 2019. Retrieved 27 March 2019.
  78. "Governor General Announces 103 New Appointments to the Order of Canada, December 2018". The Governor General of Canada . 27 December 2018. Archived from the original on 19 November 2019. Retrieved 7 June 2020.
  79. Dickson Prize 2021
  80. "Geoffrey Hinton, Yann LeCun, Yoshua Bengio and Demis Hassabis – Laureates – Princess of Asturias Awards". Princess of Asturias Awards . 2022. Archived from the original on 15 June 2022. Retrieved 3 May 2023.
  81. "Geoffrey E Hinton". awards.acm.org. Retrieved 26 January 2024.
  82. ScholarGPS Profile: Geoffrey E. Hinton
  83. Nobel Prize (8 October 2024). Announcement of the 2024 Nobel Prize in Physics . Retrieved 8 October 2024 via YouTube.
  84. Metz, Cade (8 October 2024). "How Does It Feel to Win a Nobel Prize? Ask the 'Godfather of A.I.'". The New York Times. Retrieved 10 October 2024.
  85. Hinton, Geoffrey [@geoffreyhinton] (1 May 2023). "In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly" (Tweet). Retrieved 2 May 2023 via Twitter.
  86. 1 2 Kleinman, Zoe; Vallance, Chris (2 May 2023). "AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google". BBC News . Archived from the original on 2 May 2023. Retrieved 2 May 2023.
  87. Hinton, Geoffrey (25 March 2023). "Full interview: 'Godfather of artificial intelligence' talks impact and potential of AI" (Interview). Interviewed by Silva-Braga, Brook. New York City: CBS News. Event occurs at 31:45. Archived from the original on 2 May 2023 via YouTube. Excerpts were broadcast in Jacobson & Silva-Braga (2023), but the full interview was only published online.
  88. Hinton & Silva-Braga 2023, 31:55.
  89. Hinton & Silva-Braga 2023, 35:48.
  90. "Call for an International Ban on the Weaponization of Artificial Intelligence". University of Ottawa: Centre for Law, Technology and Society. Archived from the original on 8 April 2023. Retrieved 1 May 2023.
  91. 1 2 Wiggers, Kyle (17 December 2018). "Geoffrey Hinton and Demis Hassabis: AGI is nowhere close to being a reality". VentureBeat . Archived from the original on 21 July 2022. Retrieved 21 July 2022.
  92. "AI 'godfather' says universal basic income will be needed". www.bbc.com. Retrieved 15 June 2024.
  93. Varanasi, Lakshmi (18 May 2024). "AI 'godfather' Geoffrey Hinton says he's 'very worried' about AI taking jobs and has advised the British government to adopt a universal basic income". Business Insider Africa. Retrieved 15 June 2024.
  94. Pillay, Tharin; Booth, Harry (7 August 2024). "Exclusive: Renowned Experts Pen Support for California's Landmark AI Safety Bill". TIME. Retrieved 21 August 2024.
  95. "Letter from renowned AI experts". SB 1047 – Safe & Secure AI Innovation. Retrieved 21 August 2024.
  96. Rothman, Joshua (13 November 2023). "Why the Godfather of A.I. Fears What He's Built". The New Yorker . Archived from the original on 25 August 2024. Retrieved 27 November 2023.
  97. Scanlan, Chip (6 June 2024). "How a reporter prepped to understand A.I. and the man who helped invent it". Nieman Foundation (Has the full 2023 New Yorker article with annotations). Retrieved 26 October 2024.
  98. Martin, Alexander (18 March 2021). "Geoffrey Hinton: The story of the British 'Godfather of AI' – who's not sat down since 2005". Sky News . Archived from the original on 19 March 2021. Retrieved 7 April 2021.
  99. Roberts, Siobhan (27 March 2004). "The Isaac Newton of logic". The Globe and Mail. Archived from the original on 3 May 2023. Retrieved 3 May 2023.
  100. Salt, George (1978). "Howard Everest Hinton. 24 August 1912-2 August 1977". Biographical Memoirs of Fellows of the Royal Society . 24: 150–182. doi:10.1098/rsbm.1978.0006. ISSN   0080-4606. S2CID   73278532.
  101. Shute, Joe (26 August 2017). "The 'Godfather of AI' on making machines clever and whether robots really will learn to kill us all?". The Daily Telegraph . Archived from the original on 27 December 2017. Retrieved 20 December 2017.

Further reading