Google AI

Google AI
Company type: Division
Industry: Artificial intelligence
Founded: 2017
Parent: Google
Website: ai.google

Google AI is a division of Google dedicated to artificial intelligence.[1] It was announced at Google I/O 2017 by CEO Sundar Pichai.[2]

The division has expanded its reach with research facilities in various parts of the world, including Zurich, Paris, Israel, and Beijing.[3] In 2023, Google AI was part of a reorganization that elevated its head, Jeff Dean, to the position of chief scientist at Google.[4] The reorganization merged Google Brain with DeepMind, a UK-based company that Google acquired in 2014 and that had operated separately from the company's core research.[5] The division is expected to grow in value and importance as AI becomes more mainstream, given Google's existing strength in the field.[6]

Projects

Former

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software which enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Chatbot: Program that simulates conversation

A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
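
The contrast with today's generative systems can be illustrated by a minimal rule-based chatbot of the kind that predates them; this is a sketch only, and the patterns and replies below are invented for illustration.

```python
# A minimal rule-based chatbot: illustrative only, with invented patterns and replies.
import re

RULES = [
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello! How can I help you?"),
    (re.compile(r"\bweather\b", re.IGNORECASE), "I cannot check the weather, but I hope it is pleasant."),
]

def reply(message: str) -> str:
    # Return the canned answer for the first matching pattern, else a fallback.
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "I'm not sure I understand."

print(reply("Hi there"))        # matches the greeting rule
print(reply("Tell me a joke"))  # no rule matches, so the fallback is returned
```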

Google Brain was a deep learning artificial intelligence research team under the umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources. It created tools such as TensorFlow, which allow neural networks to be used by the public, and multiple internal AI research projects, and aimed to create research opportunities in machine learning and natural language processing. It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.

Google DeepMind: Artificial intelligence division

DeepMind Technologies Limited, doing business as Google DeepMind, is a British-American artificial intelligence research laboratory which serves as a subsidiary of Google. Founded in the UK in 2010, it was acquired by Google in 2014. The company is based in London, with research centres in Canada, France, Germany, and the United States.

Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google that provides a series of modular cloud services including computing, data storage, data analytics, and machine learning, alongside a set of management tools. According to Verma et al., it runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, and Google Docs. Registration requires a credit card or bank account details.

TensorFlow: Machine learning software library

TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
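
As a brief illustration, the sketch below trains and queries a small neural network with TensorFlow's Keras API; the synthetic data and layer sizes are arbitrary placeholders, not part of any particular Google project.

```python
# Minimal sketch: training and inference of a small neural network with TensorFlow.
# The synthetic dataset and layer sizes are arbitrary placeholders.
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 4).astype("float32")   # 100 samples, 4 features
y = np.random.randint(0, 2, size=(100, 1))     # binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)   # training
print(model.predict(x[:3]))            # inference on the first three samples
```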

Tensor Processing Unit: AI accelerator ASIC by Google

Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
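
A hedged sketch of how a TPU is commonly driven from TensorFlow is shown below; the cluster-resolver configuration depends on the environment (for example a Colab runtime or a Cloud TPU VM), so treat this as illustrative rather than a complete recipe.

```python
# Illustrative sketch: compiling a Keras model under TensorFlow's TPU distribution
# strategy. Resolver configuration varies by environment (Colab, Cloud TPU VM, ...).
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # autodetects in many TPU setups
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():  # variables created here are replicated across TPU cores
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
```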

An AI accelerator, deep learning processor, or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.

Keras: Neural network library

Keras is an open-source library that provides a Python interface for artificial neural networks. Keras began as independent software, was later integrated into the TensorFlow library, and now supports additional backends. "Keras 3 is a full rewrite of Keras [can be used] as a low-level cross-framework language to develop custom components such as layers, models, or metrics that can be used in native workflows in JAX, TensorFlow, or PyTorch — with one codebase." Keras 3 will be the default Keras version for TensorFlow 2.16 onwards, but Keras 2 can still be used.
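
A minimal sketch of the multi-backend usage described above follows; with Keras 3 the backend is selected through the KERAS_BACKEND environment variable before the library is imported, and the model shown is an arbitrary example.

```python
# Minimal sketch: defining a small model with standalone Keras 3. The backend
# (TensorFlow, JAX, or PyTorch) is chosen via KERAS_BACKEND before importing keras.
import os
os.environ.setdefault("KERAS_BACKEND", "tensorflow")  # or "jax" / "torch"

import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```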

The bfloat16 floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a shortened (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use. Bfloat16 is used to reduce the storage requirements and increase the calculation speed of machine learning algorithms.
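
The relationship to binary32 can be seen by viewing bfloat16 as the top 16 bits of a 32-bit float; the sketch below uses simple truncation and ignores the rounding that hardware implementations typically apply.

```python
# Illustrative sketch: bfloat16 as the upper 16 bits of an IEEE 754 binary32 value
# (1 sign bit, 8 exponent bits, 7 explicit significand bits). Truncation only;
# hardware implementations usually round.
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]  # raw binary32 bit pattern
    return bits32 >> 16                                     # keep the upper 16 bits

def bfloat16_bits_to_float32(bits16: int) -> float:
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

b = float32_to_bfloat16_bits(3.1415926)
print(hex(b), bfloat16_bits_to_float32(b))  # 0x4049 and approximately 3.140625
```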

Flux (machine-learning framework): Open-source machine-learning software library

Flux is an open-source machine-learning software library and ecosystem written in Julia. Its current stable release is v0.14.5. It has a layer-stacking-based interface for simpler models, and emphasizes interoperability with other Julia packages rather than a monolithic design. For example, GPU support is implemented transparently by CuArrays.jl. This is in contrast to some other machine learning frameworks which are implemented in other languages with Julia bindings, such as TensorFlow.jl, and are thus more limited by the functionality present in the underlying implementation, which is often in C or C++. Flux joined NumFOCUS as an affiliated project in December 2021.

Specialized computer hardware is often used to execute artificial intelligence (AI) programs faster, and with less energy, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks. As of 2023, the market for AI hardware is dominated by GPUs.

Cohere is a Canadian multinational technology company focused on artificial intelligence for the enterprise, specializing in large language models. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, and is headquartered in Toronto and San Francisco, with offices in Palo Alto and London.

LaMDA is a family of conversational large language models developed by Google. Originally developed and introduced as Meena in 2020, the first-generation LaMDA was announced during the 2021 Google I/O keynote, while the second generation was announced the following year. In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine claimed that the chatbot had become sentient. The scientific community has largely rejected Lemoine's claims, though they have prompted conversations about the efficacy of the Turing test, which measures whether a computer can pass for a human. In February 2023, Google announced Bard, a conversational artificial intelligence chatbot powered by LaMDA, to counter the rise of OpenAI's ChatGPT.

Hugging Face, Inc. is a French-American company based in New York City that develops computation tools for building applications using machine learning. It is most notable for its transformers library built for natural language processing applications and its platform that allows users to share machine learning models and datasets and showcase their work.
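
A minimal sketch of the transformers library's high-level pipeline API is shown below; the default model it downloads, and therefore the exact output, should be treated as illustrative.

```python
# Minimal sketch: the Hugging Face transformers "pipeline" helper. The default
# sentiment-analysis model is downloaded on first use; the output is illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Google AI announced a new research centre.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```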

Sparrow is a chatbot developed by the artificial intelligence research lab DeepMind, a subsidiary of Alphabet Inc. It is designed to answer users' questions correctly, while reducing the risk of unsafe and inappropriate answers. One motivation behind Sparrow is to address the problem of language models producing incorrect, biased or potentially harmful outputs. Sparrow is trained using human judgements, in order to be more “Helpful, Correct and Harmless” compared to baseline pre-trained language models. The development of Sparrow involved asking paid study participants to interact with Sparrow, and collecting their preferences to train a model of how useful an answer is.
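
The preference-training idea can be illustrated with a generic pairwise loss of the kind used to fit a model of answer quality; this is a textbook Bradley–Terry style example, not DeepMind's published Sparrow implementation.

```python
# Generic illustration (not DeepMind's actual Sparrow code): a pairwise preference
# loss that pushes the score of a human-preferred answer above a rejected one.
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    # Negative log-likelihood that the preferred answer "wins" under a
    # Bradley-Terry model of the two scores.
    return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

print(preference_loss(2.0, 0.5))  # ~0.20: the scorer already favours the preferred answer
print(preference_loss(0.5, 2.0))  # ~1.70: the scorer ranks the answers the wrong way round
```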

AI boom: Ongoing period of rapid progress in artificial intelligence

The AI boom, or AI spring, is an ongoing period of rapid progress in the field of artificial intelligence (AI). Prominent examples include protein folding prediction led by Google DeepMind and generative AI led by OpenAI.

PaLM: Large language model developed by Google

PaLM is a 540 billion parameter transformer-based large language model developed by Google AI. Researchers also trained smaller versions of PaLM, 8 and 62 billion parameter models, to test the effects of model scale.

Gemini (language model): Large language model developed by Google

Gemini is a family of multimodal large language models developed by Google DeepMind, serving as the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, and Gemini Nano, it was announced on December 6, 2023, positioned as a competitor to OpenAI's GPT-4. It powers the chatbot of the same name.

References

  1. Jhonsa, Eric (May 18, 2017). "Google Has an AI Lead and Is Putting It to Good Use". TheStreet.com. Archived from the original on August 2, 2017. Retrieved May 19, 2017.
  2. "Google I/O'17: Google Keynote". YouTube. Google Developers. Archived from the original on July 20, 2023. Retrieved May 18, 2017.
  3. Daim, Tugrul U.; Meissner, Dirk (2020). Innovation Management in the Intelligent World: Cases and Tools. Cham, Switzerland: Springer Nature. pp. 57–58. ISBN 978-3-030-58300-2.
  4. Bergen, Mark; Alba, Davey (January 20, 2023). "Google's Treasured AI Unit Gets Swept Up in 12,000 Job Cuts". Bloomberg.com. Archived from the original on February 13, 2023. Retrieved June 22, 2023.
  5. Elias, Jennifer (April 20, 2023). "Read the internal memo Alphabet sent in merging A.I.-focused groups DeepMind and Google Brain". CNBC. Archived from the original on June 22, 2023. Retrieved June 22, 2023.
  6. Elias, Jennifer (July 26, 2023). "Google points to many ways it can win in A.I. even as online ad market shows cracks". CNBC. Retrieved November 13, 2023.
  7. Bergen, Mark (May 17, 2017). "Google to Offer New AI 'Supercomputer' Chip Via Cloud". Bloomberg News. Archived from the original on May 23, 2022. Retrieved May 19, 2017.
  8. Vanian, Jonathan (May 17, 2017). "Google Hopes This New Technology Will Make Artificial Intelligence Smarter". Fortune. Archived from the original on February 6, 2023. Retrieved May 19, 2017.
  9. "TPU Research Cloud". sites.research.google. Archived from the original on February 6, 2023. Retrieved June 13, 2022.
  10. "TensorFlow – Google.ai". Google.ai. Archived from the original on July 19, 2023. Retrieved May 21, 2017.
  11. "Magenta". Magenta.tensorflow.org. Archived from the original on February 9, 2023. Retrieved February 19, 2019.
  12. "tenorflow/magenta". github.com. Archived from the original on April 13, 2020. Retrieved February 19, 2019.
  13. "Google Magenta AI – Music Creation". DaayaLab. March 18, 2023. Archived from the original on March 21, 2023. Retrieved March 21, 2023.
  14. "Quantum Supremacy Using a Programmable Superconducting Processor". Google AI Blog. Archived from the original on October 24, 2019. Retrieved April 1, 2020.
  15. Condon, Stephanie (May 18, 2021). "Google I/O 2021: Google unveils new conversational language model, LaMDA". ZDNet. Archived from the original on May 18, 2021. Retrieved June 12, 2022.
  16. Butryna, Alena; Chu, Shan Hui Cathy; Demirsahin, Isin; Gutkin, Alexander; Ha, Linne; He, Fei; Jansche, Martin; Johny, Cibu C.; Katanova, Anna; Kjartansson, Oddur; Li, Chen Fang; Sarin, Supheakmungkol; Oo, Yin May; Pipatsrisawat, Knot; Rivera, Clara E. (2019). "Google Crowdsourced Speech Corpora and Related Open-Source Resources for Low-Resource Languages and Dialects: An Overview" (PDF). 2019 UNESCO International Conference Language Technologies for All (LT4All): Enabling Linguistic Diversity and Multilingualism Worldwide, Paris, France, 4–6 December: 91–94. arXiv:2010.06778. Archived (PDF) from the original on January 22, 2023. Retrieved January 22, 2023.
  17. Madden, Michael G. (December 15, 2023). "Google's Gemini: is the new AI model really better than ChatGPT?". The Conversation. Retrieved April 14, 2024.
  18. Foster, Megan. "What is Google Duet AI and how to use it in presentation slides". slidefill.com. Retrieved March 18, 2023.

Further reading