Conference on Neural Information Processing Systems

Abbreviation: NeurIPS (formerly NIPS)
Discipline: Machine learning, statistics, artificial intelligence, computational neuroscience
History: 1987–present
Frequency: Annual
Website: neurips.cc

The Conference and Workshop on Neural Information Processing Systems (abbreviated as NeurIPS and formerly NIPS) is a machine learning and computational neuroscience conference held every December. The conference is currently a double-track meeting (single-track until 2015) that includes invited talks as well as oral and poster presentations of refereed papers, followed by parallel-track workshops that up to 2013 were held at ski resorts.

History

The NeurIPS meeting was first proposed in 1986 at the annual invitation-only Snowbird Meeting on Neural Networks for Computing, organized by the California Institute of Technology and Bell Laboratories. NeurIPS was designed as a complementary open interdisciplinary meeting for researchers exploring biological and artificial neural networks. Reflecting this multidisciplinary approach, NeurIPS began in 1987 with information theorist Ed Posner as the conference president and learning theorist Yaser Abu-Mostafa as program chairman.[1] Research presented at the early NeurIPS meetings covered a wide range of topics, from efforts to solve purely engineering problems to the use of computer models as a tool for understanding biological nervous systems. Since then, the biological and artificial systems research streams have diverged, and recent NeurIPS proceedings have been dominated by papers on machine learning, artificial intelligence and statistics.

From 1987 until 2000, NeurIPS was held in Denver, United States. It has since been held in Vancouver, Canada (2001–2010); Granada, Spain (2011); Lake Tahoe, United States (2012–2013); Montreal, Canada (2014–2015); Barcelona, Spain (2016); Long Beach, United States (2017); Montreal, Canada (2018); and Vancouver, Canada (2019). Reflecting its origins at Snowbird, Utah, the meeting was accompanied by workshops organized at a nearby ski resort until 2013, when it outgrew ski resorts.

The first NeurIPS Conference was sponsored by the IEEE.[2] Subsequent NeurIPS Conferences have been organized by the NeurIPS Foundation, established by Ed Posner. Terrence Sejnowski has been the president of the NeurIPS Foundation since Posner's death in 1993. The board of trustees consists of previous general chairs of the NeurIPS Conference.

The first proceedings were published in book form by the American Institute of Physics in 1987 under the title Neural Information Processing Systems.[3] Proceedings of subsequent conferences have been published by Morgan Kaufmann (1988–1993), MIT Press (1994–2004), and Curran Associates (2005–present) under the name Advances in Neural Information Processing Systems.

The conference was originally abbreviated as "NIPS". By 2018, some commentators were criticizing the abbreviation as encouraging sexism, due to its association with the word "nipples", and as being a slur against Japanese people. The board changed the abbreviation to "NeurIPS" in November 2018.[4]

Topics

[Image: Judea Pearl at his poster at the 2013 Conference on Neural Information Processing Systems.]

Along with machine learning and neuroscience, other fields represented at NeurIPS include cognitive science, psychology, computer vision, statistical linguistics, and information theory. Over the years, NeurIPS became a premier conference on machine learning, and the 'Neural' in the NeurIPS acronym became something of a historical relic. However, the resurgence of deep learning in neural networks since 2012,[5] fueled by faster computers and big data, has led to achievements in speech recognition, object recognition in images, image captioning, language translation, and world-championship performance in the game of Go. These advances rest on neural architectures inspired by the hierarchy of areas in the visual cortex (convolutional networks) and on reinforcement learning inspired by the basal ganglia (temporal difference learning), the latter illustrated by the sketch below.
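
As a minimal illustration of the temporal-difference idea mentioned above, the following Python sketch applies the TD(0) value update to a toy chain of states. The environment, rewards, learning rate, and discount factor are all assumptions chosen for the example, not anything specified by the conference or its proceedings.

```python
# Minimal TD(0) sketch on a toy 5-state chain (all values here are
# illustrative assumptions).
import random

n_states = 5             # states 0..4; reaching state 4 ends an episode
alpha, gamma = 0.1, 0.9  # learning rate and discount factor
V = [0.0] * n_states     # value estimates, initialized to zero

for episode in range(1000):
    s = 0
    while s != n_states - 1:
        # Drift right with probability 0.5, otherwise stay in place.
        s_next = min(s + random.choice([0, 1]), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the end
        # TD(0) update: nudge V(s) toward the bootstrapped target r + gamma*V(s')
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])  # estimates rise toward the rewarding terminal state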

Notable affinity groups promoting diversity have emerged from the NeurIPS conference, including Black in AI (founded in 2017) and Queer in AI (founded in 2016), among others.[6][7]

Named lectures

In addition to invited talks and symposia, NeurIPS organizes two named lectureships to recognize distinguished researchers. The NeurIPS Board introduced the Posner Lectureship in honor of NeurIPS founder Ed Posner; two Posner Lectures were given each year up to 2015.[8] Past lecturers have included:

In 2015, the NeurIPS Board introduced the Breiman Lectureship to highlight work in statistics relevant to conference topics. The lectureship was named for statistician Leo Breiman, who served on the NeurIPS Board from 1994 to 2005.[9] Past lecturers have included:

NIPS experiment

At NIPS 2014, the program chairs duplicated 10% of all submissions and sent each copy through a separate, independent review committee in order to measure how consistent the reviewing process was.[10] Roughly 60% of the papers accepted by one committee were rejected by the other. Several researchers interpreted the result.[11][12] On the question of whether NIPS decisions are completely random, John Langford writes: "Clearly not—a purely random decision would have arbitrariness of ~78%. It is, however, quite notable that 60% is much closer to 78% than 0%." He concludes that the result of the reviewing process is mostly arbitrary.[13] The arithmetic behind Langford's ~78% baseline is sketched below.
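
As a minimal sketch of where the ~78% baseline comes from, the following Python snippet computes the expected disagreement under purely random acceptance and compares it with the observed figure. The overall acceptance rate of about 22.5% is an assumption taken from Langford's commentary, not a number stated in this article.

```python
# Langford's baseline: if acceptance were purely random at rate p, a paper
# accepted by one committee would be rejected by an independent committee
# with probability 1 - p.
accept_rate = 0.225  # assumed NIPS 2014 acceptance rate (per Langford)

random_arbitrariness = 1 - accept_rate
print(f"arbitrariness under purely random acceptance: {random_arbitrariness:.0%}")  # ~78%

observed_arbitrariness = 0.60  # share of one committee's accepts rejected by the other
print(f"observed arbitrariness in the experiment:     {observed_arbitrariness:.0%}")
```

The comparison motivates Langford's conclusion: the observed 60% disagreement sits much closer to the 78% expected from coin-flipping than to the 0% a perfectly consistent process would produce.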

Notes

  1. The first NeurIPS
  2. Sponsors of the first NeurIPS
  3. The first NeurIPS Proceedings
  4. Else, Holly (19 November 2018). "AI conference widely known as 'NIPS' changes its controversial acronym". Nature News. doi:10.1038/d41586-018-07476-w. Retrieved 17 February 2021.
  5. The Deep Learning Revolution. MIT Press. October 2018. ISBN 9780262038034. Retrieved 30 April 2020.
  6. "How one conference embraced diversity". Nature. 564 (7735): 161–162. 2018-12-12. doi:10.1038/d41586-018-07718-x. PMID 31123357. S2CID 54481549.
  7. "Why you can't just take pictures at the Queer in AI workshop at NeurIPS". VentureBeat. 2019-12-10. Retrieved 2021-12-22.
  8. "24th Annual Conference on Neural Information Processing Systems (NIPS), Vancouver 2010 - VideoLectures - VideoLectures.NET". videolectures.net. Retrieved 17 July 2017.
  9. NIPS 2015 Conference (PDF). Neural Information Processing Systems Foundation. 7 December 2015. p. 10. Retrieved 17 July 2017.
  10. Lawrence, Neil (2014-12-16). "The NIPS Experiment". Inverse Probability. Archived from the original on 2015-04-03. Retrieved 2015-03-31.
  11. Fortnow, Lance (2014-12-18). "The NIPS Experiment". Computational Complexity. Retrieved 2015-03-31.
  12. Hardt, Moritz (2014-12-15). "The NIPS Experiment". Moody Rd. Retrieved 2015-03-31.
  13. Langford, John (2015-03-09). "The NIPS Experiment". Communications of the ACM. Retrieved 2015-03-31.
  14. Nips.cc - 2016 Conference
  15. Nips.cc - 2017 Conference
  16. Nips.cc - 2018 Conference
  17. "Vancouver Named NeurIPS 2019 & 2020 Host as Visa Issues Continue to Plague the AI Conference". 5 December 2018.
  18. Nips.cc - 2022 Conference
  19. "NeurIPS | 2023".
