Google AI

Google AI
Company type: Division
Industry: Artificial intelligence
Founded: 2017
Parent: Google
Website: ai.google

Google AI is a division of Google dedicated to artificial intelligence.[1] It was announced at Google I/O 2017 by CEO Sundar Pichai.[2]


The division has expanded its reach with research facilities in various parts of the world, including Zurich, Paris, Israel, and Beijing.[3] In 2023, Google AI was part of a reorganization initiative that elevated its head, Jeff Dean, to the position of chief scientist at Google.[4] The reorganization merged Google Brain with DeepMind, a UK-based company that Google acquired in 2014 and that had operated separately from the company's core research.[5]

In March 2019, Google announced the creation of an Advanced Technology External Advisory Council (ATEAC) comprising eight members: Alessandro Acquisti, Bubacarr Bah, De Kai, Dyan Gibbens, Joanna Bryson, Kay Coles James, Luciano Floridi, and William Joseph Burns. Following objections from a large number of Google staff to the appointment of Kay Coles James, the council was abandoned within one month of its establishment.[6]

Projects

Former

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Chatbot

A chatbot is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.

Google Brain was a deep learning artificial intelligence research team that served as the sole AI branch of Google before being incorporated under the newer umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources. It created tools such as TensorFlow, which made neural networks available to the public, ran multiple internal AI research projects, and aimed to create research opportunities in machine learning and natural language processing. It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.

DeepMind Technologies Limited, also known by its trade name Google DeepMind, is a British-American artificial intelligence research laboratory which serves as a subsidiary of Google. Founded in the UK in 2010, it was acquired by Google in 2014 and merged with Google AI's Google Brain division to become Google DeepMind in April 2023. The company is based in London, with research centres in Canada, France, Germany, and the United States.

Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google that provides a series of modular cloud services including computing, data storage, data analytics, and machine learning, alongside a set of management tools. According to Verma et al., it runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, and Google Docs. Registration requires a credit card or bank account details.

Eclipse Deeplearning4j is a programming library written in Java for the Java virtual machine (JVM). It is a framework with wide support for deep learning algorithms. Deeplearning4j includes implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, stacked denoising autoencoder and recursive neural tensor network, word2vec, doc2vec, and GloVe. These algorithms all include distributed parallel versions that integrate with Apache Hadoop and Spark.

TensorFlow

TensorFlow is a software library for machine learning and artificial intelligence. It can be used across a range of tasks, but is used mainly for training and inference of neural networks. It is one of the most popular deep learning frameworks, alongside others such as PyTorch and PaddlePaddle. It is free and open-source software released under the Apache License 2.0.
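The training-and-inference workflow described above can be illustrated with a minimal sketch. It assumes TensorFlow 2.x is installed; the one-neuron model and the toy data (learning y = 2x) are invented for the example.

```python
import numpy as np
import tensorflow as tf

# Invented toy data: learn y = 2x on inputs in [0, 1).
x = np.linspace(0.0, 1.0, 16, dtype=np.float32).reshape(-1, 1)
y = 2.0 * x

# A one-neuron linear model, trained by gradient descent on mean squared error.
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# Inference: run the trained network on new inputs.
pred = model.predict(x, verbose=0)
```

The same `fit`/`predict` pattern scales from this toy regression up to large neural networks, which is the use case the library is mainly known for.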

Tensor Processing Unit

Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.

An AI accelerator, deep learning processor or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and computer vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.

Keras

Keras is an open-source library that provides a Python interface for artificial neural networks. Keras began as independent software, was then integrated into the TensorFlow library, and later added support for more backends. According to its developers, "Keras 3 is a full rewrite of Keras [and can be used] as a low-level cross-framework language to develop custom components such as layers, models, or metrics that can be used in native workflows in JAX, TensorFlow, or PyTorch — with one codebase." Keras 3 will be the default Keras version for TensorFlow 2.16 onwards, but Keras 2 can still be used.

PyTorch is a machine learning library based on the Torch library, used for applications such as computer vision and natural language processing. Originally developed by Meta AI, it is now part of the Linux Foundation umbrella. It is one of the most popular deep learning frameworks, alongside others such as TensorFlow and PaddlePaddle, and is free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.

In artificial intelligence, researchers teach AI systems to develop their own ways of communicating by having them work together on tasks and use symbols as parts of a new language. These languages might grow out of human languages or be built completely from scratch. When AI is used for translating between languages, it can even create a new shared language to make the process easier. Natural Language Processing (NLP) helps these systems understand and generate human-like language, making it possible for AI to interact and communicate more naturally with people.

The bfloat16 floating-point format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a shortened (16-bit) version of the 32-bit IEEE 754 single-precision floating-point format (binary32) with the intent of accelerating machine learning and near-sensor computing. It preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only an 8-bit precision rather than the 24-bit significand of the binary32 format. More so than single-precision 32-bit floating-point numbers, bfloat16 numbers are unsuitable for integer calculations, but this is not their intended use. Bfloat16 is used to reduce the storage requirements and increase the calculation speed of machine learning algorithms.
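The relationship to binary32 described above can be made concrete in plain Python: a bfloat16 value is simply the top 16 bits of the binary32 encoding (1 sign bit, 8 exponent bits, 7 explicit mantissa bits). This sketch truncates the discarded mantissa bits for simplicity; real hardware typically rounds to nearest-even instead.

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 encoding of x (truncating the mantissa)."""
    # Encode as IEEE 754 binary32, then keep the top 16 bits.
    (bits32,) = struct.unpack(">I", struct.pack(">f", x))
    return bits32 >> 16

def bfloat16_bits_to_float(bits16: int) -> float:
    """Decode a 16-bit bfloat16 value by zero-padding back to binary32."""
    (x,) = struct.unpack(">f", struct.pack(">I", bits16 << 16))
    return x

# The 8 exponent bits survive intact, so the dynamic range matches
# binary32, but precision drops to an 8-bit significand:
pi_bf16 = bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159265))
# pi becomes 3.140625: correct to only about 3 decimal digits
```

Because only the exponent's range (not the significand's precision) is preserved, round-tripping a value through bfloat16 never overflows where binary32 would not, which is why the format is attractive for machine learning workloads that tolerate low precision.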

Flux

Flux is an open-source machine-learning software library and ecosystem written in Julia. Its current stable release is v0.14.5. It has a layer-stacking-based interface for simpler models, and it emphasizes interoperability with other Julia packages over a monolithic design. For example, GPU support is implemented transparently by CuArrays.jl. This is in contrast to some other machine learning frameworks, such as TensorFlow.jl, which are implemented in other languages with Julia bindings and are thus more limited by the functionality present in the underlying implementation, often written in C or C++. Flux joined NumFOCUS as an affiliated project in December 2021.

Cohere Inc. is a Canadian multinational technology company focused on artificial intelligence for the enterprise, specializing in large language models. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, and is headquartered in Toronto and San Francisco, with offices in Palo Alto, London, and New York City.

Meta AI

Meta AI is a company owned by Meta that develops artificial intelligence and augmented and artificial reality technologies. Meta AI deems itself an academic research laboratory, focused on generating knowledge for the AI community, and should not be confused with Meta's Applied Machine Learning (AML) team, which focuses on the practical applications of its products.

LaMDA is a family of conversational large language models developed by Google. Originally developed and introduced as Meena in 2020, the first-generation LaMDA was announced during the 2021 Google I/O keynote, while the second generation was announced the following year.

Hugging Face, Inc. is an American company incorporated under the Delaware General Corporation Law and based in New York City that develops computation tools for building applications using machine learning. It is most notable for its transformers library built for natural language processing applications and its platform that allows users to share machine learning models and datasets and showcase their work.

Gemini (language model)

Gemini is a family of multimodal large language models developed by Google DeepMind, serving as the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, Gemini Flash, and Gemini Nano, it was announced on December 6, 2023, positioned as a competitor to OpenAI's GPT-4. It powers the chatbot of the same name.

References

  1. Jhonsa, Eric (May 18, 2017). "Google Has an AI Lead and Is Putting It to Good Use". TheStreet.com. Archived from the original on August 2, 2017. Retrieved May 19, 2017.
  2. "Google I/O'17: Google Keynote". YouTube. Google Developers. May 17, 2017. Archived from the original on July 20, 2023. Retrieved May 18, 2017.
  3. Daim, Tugrul U.; Meissner, Dirk (2020). Innovation Management in the Intelligent World: Cases and Tools. Cham, Switzerland: Springer Nature. pp. 57–58. ISBN 978-3-030-58300-2.
  4. Bergen, Mark; Alba, Davey (January 20, 2023). "Google's Treasured AI Unit Gets Swept Up in 12,000 Job Cuts". Bloomberg.com. Archived from the original on February 13, 2023. Retrieved June 22, 2023.
  5. Elias, Jennifer (April 20, 2023). "Read the internal memo Alphabet sent in merging A.I.-focused groups DeepMind and Google Brain". CNBC. Archived from the original on June 22, 2023. Retrieved June 22, 2023.
  6. "Google news release". March 26, 2019. Retrieved July 9, 2024.
  7. Bergen, Mark (May 17, 2017). "Google to Offer New AI 'Supercomputer' Chip Via Cloud". Bloomberg News. Archived from the original on May 23, 2022. Retrieved May 19, 2017.
  8. Vanian, Jonathan (May 17, 2017). "Google Hopes This New Technology Will Make Artificial Intelligence Smarter". Fortune. Archived from the original on February 6, 2023. Retrieved May 19, 2017.
  9. "TPU Research Cloud". sites.research.google. Archived from the original on February 6, 2023. Retrieved June 13, 2022.
  10. "TensorFlow – Google.ai". Google.ai. Archived from the original on July 19, 2023. Retrieved May 21, 2017.
  11. "Magenta". Magenta.tensorflow.org. Archived from the original on February 9, 2023. Retrieved February 19, 2019.
  12. "tensorflow/magenta". github.com. Archived from the original on April 13, 2020. Retrieved February 19, 2019.
  13. "Google Magenta AI – Music Creation". DaayaLab. March 18, 2023. Archived from the original on March 21, 2023. Retrieved March 21, 2023.
  14. "Quantum Supremacy Using a Programmable Superconducting Processor". Google AI Blog. Archived from the original on October 24, 2019. Retrieved April 1, 2020.
  15. Condon, Stephanie (May 18, 2021). "Google I/O 2021: Google unveils new conversational language model, LaMDA". ZDNet . Archived from the original on May 18, 2021. Retrieved June 12, 2022.
  16. Butryna, Alena; Chu, Shan Hui Cathy; Demirsahin, Isin; Gutkin, Alexander; Ha, Linne; He, Fei; Jansche, Martin; Johny, Cibu C.; Katanova, Anna; Kjartansson, Oddur; Li, Chen Fang; Sarin, Supheakmungkol; Oo, Yin May; Pipatsrisawat, Knot; Rivera, Clara E. (2019). "Google Crowdsourced Speech Corpora and Related Open-Source Resources for Low-Resource Languages and Dialects: An Overview" (PDF). 2019 UNESCO International Conference Language Technologies for All (LT4All): Enabling Linguistic Diversity and Multilingualism Worldwide, 4–6 December, Paris, France: 91–94. arXiv:2010.06778. Archived (PDF) from the original on January 22, 2023. Retrieved January 22, 2023.
  17. Madden, Michael G. (December 15, 2023). "Google's Gemini: is the new AI model really better than ChatGPT?". The Conversation. Retrieved April 14, 2024.
  18. Foster, Megan. "What is Google Duet AI and how to use it in presentation slides". slidefill.com. Retrieved March 18, 2023.
  19. Sarin, Supheakmungkol; Pipatsrisawat, Knot; Pham, Khiem; Batra, Anurag; Valente, Luis (2019). "Crowdsource by Google: A Platform for Collecting Inclusive and Representative Machine Learning Data" (PDF). AAAI Hcomp 2019.

Further reading