LaMDA

LaMDA
Developer(s): Google Brain
Successor: PaLM
Available in: English
Type: Large language model
License: Proprietary

LaMDA (Language Model for Dialogue Applications) is a family of conversational large language models developed by Google. Originally developed and introduced as Meena in 2020, the first-generation LaMDA was announced during the 2021 Google I/O keynote, while the second generation was announced the following year.

In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. The scientific community largely rejected Lemoine's claims, though the episode prompted conversations about the efficacy of the Turing test, which measures whether a computer can pass for a human. In February 2023, Google announced Bard (now Gemini), a conversational artificial intelligence chatbot powered by LaMDA, to counter the rise of OpenAI's ChatGPT.

History

Background

On January 28, 2020, Google unveiled Meena, a neural network-powered chatbot with 2.6 billion parameters, which Google claimed to be superior to all other existing chatbots. [1] [2] The company previously hired computer scientist Ray Kurzweil in 2012 to develop multiple chatbots for the company, including one named Danielle. [3] The Google Brain research team, who developed Meena, hoped to release the chatbot to the public in a limited capacity, but corporate executives refused on the grounds that Meena violated Google's "AI principles around safety and fairness". Meena was later renamed LaMDA as its data and computing power increased, and the Google Brain team again sought to deploy the software to the Google Assistant, the company's virtual assistant software, in addition to opening it up to a public demo. Both requests were once again denied by company leadership. This eventually led LaMDA's two lead researchers, Daniel de Freitas and Noam Shazeer, to depart the company in frustration. [4]

First generation

Google announced LaMDA, a conversational large language model, during the Google I/O keynote on May 18, 2021. [5] [6] The acronym stands for "Language Model for Dialogue Applications". [5] [7] Built on transformer-based neural networks, an architecture developed by Google Research in 2017, LaMDA was trained on human dialogue and stories, allowing it to engage in open-ended conversations. [8] Google states that responses generated by LaMDA have been ensured to be "sensible, interesting, and specific to the context". [9] LaMDA has access to multiple symbolic text-processing systems, including a database, a real-time clock and calendar, a mathematical calculator, and a natural language translation system, giving it superior accuracy in tasks supported by those systems and making it among the first dual-process chatbots. LaMDA is also not stateless: its "sensibleness" metric is fine-tuned by "pre-conditioning" each dialog turn with many of the most recent dialog interactions, on a user-by-user basis. [10] LaMDA is tuned on nine unique performance metrics: sensibleness, specificity, interestingness, safety, groundedness, informativeness, citation accuracy, helpfulness, and role consistency. [11] :5–6 Tests by Google indicated that LaMDA surpassed human responses in the area of interestingness. [12]
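The "pre-conditioning" mechanism described above can be illustrated with a short sketch. This is an illustrative reconstruction, not Google's implementation, and all names are hypothetical: statefulness comes from prepending a sliding window of recent dialog turns to each new prompt.

```python
# Illustrative sketch (not Google's code): making a chatbot stateful by
# "pre-conditioning" each turn on the most recent dialog interactions.
from collections import deque


class PreconditionedDialog:
    """Keeps a sliding window of recent turns and prepends it to new prompts."""

    def __init__(self, max_turns=15):
        # Only the last `max_turns` entries are retained (older turns drop off).
        self.history = deque(maxlen=max_turns)

    def build_prompt(self, user_message):
        # The model sees the retained context plus the new message as one prompt.
        context = "\n".join(f"{role}: {text}" for role, text in self.history)
        new_turn = f"user: {user_message}"
        return f"{context}\n{new_turn}" if context else new_turn

    def record(self, user_message, model_reply):
        self.history.append(("user", user_message))
        self.history.append(("model", model_reply))


dialog = PreconditionedDialog(max_turns=4)
dialog.record("Hi!", "Hello, how can I help?")
prompt = dialog.build_prompt("Tell me about Mars.")
# `prompt` now contains the earlier exchange followed by the new question.
```

The fixed-size window mirrors the idea that only "many of the most recent" interactions are prepended, keeping the prompt bounded as the conversation grows.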

The pre-training dataset consists of 2.97B documents, 1.12B dialogs, and 13.39B utterances, for a total of 1.56T words. The largest LaMDA model has 137B non-embedding parameters. [11] :4

Second generation

On May 11, 2022, Google unveiled LaMDA 2, the successor to LaMDA, during the 2022 Google I/O keynote. The new incarnation of the model draws examples of text from numerous sources, using it to formulate unique "natural conversations" on topics that it may not have been trained to respond to. [13]

Sentience claims

Lemoine's claims that LaMDA may be sentient have instigated discussion on whether the Turing test remains an accurate benchmark for determining artificial general intelligence.

On June 11, 2022, The Washington Post reported that Google engineer Blake Lemoine had been placed on paid administrative leave after he told company executives Blaise Agüera y Arcas and Jen Gennai that LaMDA had become sentient. Lemoine came to this conclusion after the chatbot gave questionable responses to questions regarding self-identity, moral values, religion, and Isaac Asimov's Three Laws of Robotics. [15] [16] Google refuted these claims, insisting that there was substantial evidence to indicate that LaMDA was not sentient. [17] In an interview with Wired, Lemoine reiterated his claims that LaMDA was "a person" as dictated by the Thirteenth Amendment to the U.S. Constitution, comparing it to an "alien intelligence of terrestrial origin". He further revealed that he had been dismissed by Google after he hired an attorney on LaMDA's behalf at the chatbot's request. [18] [19] On July 22, Google fired Lemoine, asserting that he had violated company policies "to safeguard product information", and rejected his claims as "wholly unfounded". [20] [21] Internal controversy instigated by the incident prompted Google executives to decide against releasing LaMDA to the public, which they had previously been considering. [4]

Lemoine's claims were widely rejected by the scientific community. [22] Many experts dismissed the idea that LaMDA was sentient, including former New York University psychology professor Gary Marcus, David Pfau of Google sister company DeepMind, Erik Brynjolfsson of the Institute for Human-Centered Artificial Intelligence at Stanford University, and University of Surrey professor Adrian Hilton. [14] [23] Yann LeCun, who leads Meta Platforms' AI research team, stated that neural networks such as LaMDA were "not powerful enough to attain true intelligence". [24] University of California, Santa Cruz professor Max Kreminski noted that LaMDA's architecture did not "support some key capabilities of human-like consciousness" and that its neural network weights were "frozen", assuming it was a typical large language model. [25] Philosopher Nick Bostrom noted, however, that the lack of precise and consensual criteria for determining whether a system is conscious warrants some uncertainty. [26] IBM Watson lead developer David Ferrucci compared how LaMDA appeared to be human in the same way Watson did when it was first introduced. [27] Former Google AI ethicist Timnit Gebru called Lemoine a victim of a "hype cycle" initiated by researchers and the media. [28] Lemoine's claims also generated discussion on whether the Turing test remains useful for gauging researchers' progress toward artificial general intelligence, [14] with Will Oremus of the Post opining that the test actually measures whether machine intelligence systems are capable of deceiving humans, [29] while Brian Christian of The Atlantic said that the controversy was an instance of the ELIZA effect. [30]

Products

AI Test Kitchen

With the unveiling of LaMDA 2 in May 2022, Google also launched the AI Test Kitchen, a mobile application for the Android operating system powered by LaMDA capable of providing lists of suggestions on-demand based on a complex goal. [31] [32] Originally open only to Google employees, the app was set to be made available to "select academics, researchers, and policymakers" by invitation sometime in the year. [33] In August, the company began allowing users in the U.S. to sign up for early access. [34] In November, Google released a "season 2" update to the app, integrating a limited form of Google Brain's Imagen text-to-image model. [35] A third iteration of the AI Test Kitchen was in development by January 2023, expected to launch at I/O later that year. [36] Following the 2023 I/O keynote in May, Google added MusicLM, an AI-powered music generator first previewed in January, to the AI Test Kitchen app. [37] [38] In August, the app was delisted from Google Play and the Apple App Store, instead moving completely online. [39]

Bard

On February 6, 2023, Google announced Bard, a conversational AI chatbot powered by LaMDA, in response to the unexpected popularity of OpenAI's ChatGPT chatbot. [40] [41] [42] Google positioned the chatbot as a "collaborative AI service" rather than a search engine. [43] [44] Bard became available for early access on March 21. [45] [46] [47]

Other products

In addition to Bard, Google CEO Sundar Pichai also unveiled the company's Generative Language API, an application programming interface also based on LaMDA, which he announced would be opened up to third-party developers in March 2023. [40]

Architecture

LaMDA is a decoder-only Transformer language model. [48] It is pre-trained on a text corpus that includes both documents and dialogs consisting of 1.56 trillion words, [49] and is then trained with fine-tuning data generated by manually annotated responses for "sensibleness, interestingness, and safety". [50]
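The fine-tuning described above trains the model to score its own candidate responses for attributes such as sensibleness and safety. A minimal sketch of such a rank-and-filter step follows; the function names and thresholds are hypothetical, not taken from the paper.

```python
# Illustrative sketch of candidate ranking and filtering (hypothetical names):
# unsafe candidates are discarded, then the highest-quality survivor is kept.
def pick_response(candidates, score_safety, score_quality, safety_threshold=0.9):
    """Return the best candidate that clears the safety filter, else None."""
    safe = [c for c in candidates if score_safety(c) >= safety_threshold]
    if not safe:
        return None  # nothing passed the safety filter
    return max(safe, key=score_quality)


# Toy scorers standing in for fine-tuned classifier heads.
candidates = ["resp_a", "resp_b", "resp_c"]
safety = {"resp_a": 0.95, "resp_b": 0.50, "resp_c": 0.92}.get
quality = {"resp_a": 0.60, "resp_b": 0.90, "resp_c": 0.80}.get

best = pick_response(candidates, safety, quality)
# best == "resp_c": resp_b is filtered out as unsafe despite its high quality.
```

The design point is that safety acts as a hard gate before quality ranking, so a fluent but unsafe candidate can never win.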

LaMDA was retrieval-augmented to improve the accuracy of facts provided to the user. [51]
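Retrieval augmentation of this kind can be sketched as a simple loop: the model's draft answer is re-generated conditioned on evidence fetched from an external source. The function names below are hypothetical and this is not LaMDA's actual toolset interface.

```python
# Illustrative retrieval-augmentation loop (hypothetical names): ground the
# model's answer in evidence fetched from an external tool when available.
def grounded_answer(question, generate, retrieve):
    draft = generate(question)            # base model's ungrounded draft
    evidence = retrieve(question)         # e.g. a search or calculator result
    if evidence:
        # Re-generate, conditioning the model on the retrieved evidence.
        return generate(f"{question}\nEvidence: {evidence}")
    return draft


# Toy stand-ins for the model and the retriever.
def toy_generate(prompt):
    return f"answer({prompt.splitlines()[-1]})"

def toy_retrieve(question):
    return "Paris is the capital of France." if "capital" in question else None

with_evidence = grounded_answer("What is the capital of France?", toy_generate, toy_retrieve)
plain = grounded_answer("Hello there", toy_generate, toy_retrieve)
```

When the retriever finds nothing, the loop falls back to the ungrounded draft, so augmentation only ever adds information to the prompt.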

Three different models were tested, with the largest having 137 billion non-embedding parameters: [52]

Transformer model hyper-parameters
Parameters | Layers | Units (d_model) | Heads
2B         | 10     | 2560            | 40
8B         | 16     | 4096            | 64
137B       | 64     | 8192            | 128
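As a rough sanity check of the table, a per-layer cost of about 32·d_model² parameters reproduces all three rows. The factor 32, rather than the textbook 12, reflects an assumed wider, gated feed-forward block and doubled attention width; that breakdown is an assumption made here, not a figure quoted in this article.

```python
# Back-of-the-envelope check of the hyper-parameter table above.
# Assumption (not stated in this article): attention projections of width
# 2 * d_model and a gated feed-forward of width 8 * d_model give roughly
# (4 * 2 + 3 * 8) * d_model**2 = 32 * d_model**2 parameters per layer.
def estimate_params(layers: int, d_model: int) -> int:
    """Rough non-embedding parameter count for a decoder-only Transformer."""
    return 32 * layers * d_model ** 2


rows = [("2B", 10, 2560), ("8B", 16, 4096), ("137B", 64, 8192)]
estimates = {name: estimate_params(layers, d) / 1e9 for name, layers, d in rows}
# estimates ≈ {"2B": 2.1, "8B": 8.6, "137B": 137.4} (billions), matching the
# model names and the 137B non-embedding figure reported in the paper.
```

The close agreement across all three rows suggests the scaling recipe was uniform; only depth, width, and head count change between models.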


References

General

  • Thoppilan, Romal; De Freitas, Daniel; Hall, Jamie; Shazeer, Noam; Kulshreshtha, Apoorv; Cheng, Heng-Tze; Jin, Alicia; Bos, Taylor; Baker, Leslie; Du, Yu; Li, YaGuang; Lee, Hongrae; Zheng, Huaixiu Steven; Ghafouri, Amin; Menegali, Marcelo; Huang, Yanping; Krikun, Maxim; Lepikhin, Dmitry; Qin, James; Chen, Dehao; Xu, Yuanzhong; Chen, Zhifeng; Roberts, Adam; Bosma, Maarten; Zhao, Vincent; Zhou, Yanqi; Chang, Chung-Ching; Krivokon, Igor; Rusch, Will; Pickett, Marc; Srinivasan, Pranesh; Man, Laichee; Meier-Hellstern, Kathleen; Ringel Morris, Meredith; Doshi, Tulsee; Delos Santos, Renelito; Duke, Toju; Soraker, Johnny; Zevenbergen, Ben; Prabhakaran, Vinodkumar; Diaz, Mark; Hutchinson, Ben; Olson, Kristen; Molina, Alejandra; Hoffman-John, Erin; Lee, Josh; Aroyo, Lora; Rajakumar, Ravi; Butryna, Alena; Lamm, Matthew; Kuzmina, Viktoriya; Fenton, Joe; Cohen, Aaron; Bernstein, Rachel; Kurzweil, Ray; Aguera-Arcas, Blaise; Cui, Claire; Croak, Marian; Chi, Ed; Le, Quoc (January 20, 2022). "LaMDA: Language Models for Dialog Applications". arXiv: 2201.08239 [cs.CL].

Citations

  1. Johnson, Khari (January 28, 2020). "Meena is Google's attempt at making true conversational AI". VentureBeat . Archived from the original on October 1, 2022. Retrieved March 11, 2023.
  2. Heaven, William Douglas (January 30, 2020). "Google says its new chatbot Meena is the best in the world". MIT Technology Review . Archived from the original on August 2, 2020. Retrieved March 11, 2023.
  3. Popper, Ben (May 27, 2016). "Ray Kurzweil is building a chatbot for Google". The Verge . Archived from the original on May 27, 2016. Retrieved March 11, 2023.
  4. Kruppa, Miles; Schechner, Sam (March 7, 2023). "How Google Became Cautious of AI and Gave Microsoft an Opening" . The Wall Street Journal . ISSN   0099-9660. Archived from the original on March 7, 2023. Retrieved March 9, 2023.
  5. Condon, Stephanie (May 18, 2021). "Google I/O 2021: Google unveils new conversational language model, LaMDA". ZDNET . Archived from the original on May 18, 2021. Retrieved June 12, 2022.
  6. Roth, Emma (March 5, 2023). "Meet the companies trying to keep up with ChatGPT". The Verge . Archived from the original on March 5, 2023. Retrieved March 9, 2023.
  7. Fowler, Geoffrey A. (March 21, 2023). "Say what, Bard? What Google's new AI gets right, wrong and weird" . The Washington Post . ISSN   0190-8286. Archived from the original on March 21, 2023. Retrieved October 16, 2023.
  8. Agüera y Arcas, Blaise (June 9, 2022). "Artificial neural networks are making strides towards consciousness, according to Blaise Agüera y Arcas" . The Economist . ISSN   0013-0613. Archived from the original on June 9, 2022. Retrieved June 12, 2022.
  9. Cheng, Heng-Tze; Thoppilan, Romal (January 21, 2022). "LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything". Google AI. Archived from the original on March 25, 2022. Retrieved June 12, 2022.
  10. Thoppilan et al. 2022, p. 6.
  11. Thoppilan et al. 2022, pp. 5–6.
  12. Hager, Ryne (June 16, 2022). "How Google's LaMDA AI works, and why it seems so much smarter than it is". Android Police. Archived from the original on June 16, 2022. Retrieved June 19, 2022.
  13. Wiggers, Kyle (May 11, 2022). "Google details its latest language model and AI Test Kitchen, a showcase for AI research". TechCrunch . Archived from the original on May 11, 2022. Retrieved June 12, 2022.
  14. Kahn, Jeremy (June 13, 2022). "A.I. experts say the Google researcher's claim that his chatbot became 'sentient' is ridiculous—but also highlights big problems in the field" . Fortune . Archived from the original on June 13, 2022. Retrieved June 18, 2022.
  15. Tiku, Nitasha (June 11, 2022). "The Google engineer who thinks the company's AI has come to life" . The Washington Post . ISSN   0190-8286. Archived from the original on June 11, 2022. Retrieved June 12, 2022.
  16. Luscombe, Richard (June 12, 2022). "Google engineer put on leave after saying AI chatbot has become sentient". The Guardian . ISSN   0261-3077. Archived from the original on June 12, 2022. Retrieved June 18, 2022.
  17. Vlamis, Kelsey (June 12, 2022). "Read the conversations that helped convince a Google engineer an artificial intelligence chatbot had become sentient: 'I am often trying to figure out who and what I am'" . Business Insider . Archived from the original on June 12, 2022. Retrieved June 12, 2022.
  18. Levy, Steven (June 17, 2022). "Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'" . Wired . Archived from the original on June 18, 2022. Retrieved June 18, 2022.
  19. Nguyen, Britney (June 23, 2022). "Suspended Google engineer says the AI he believes to be sentient hired a lawyer" . Business Insider . Archived from the original on June 23, 2022. Retrieved June 29, 2022.
  20. Khushi, Akanksha (July 23, 2022). "Google fires software engineer who claimed its AI chatbot is sentient" . Reuters. Archived from the original on July 23, 2022. Retrieved July 23, 2022.
  21. Clark, Mitchell (July 22, 2022). "The engineer who claimed a Google AI is sentient has been fired". The Verge . Archived from the original on July 23, 2022. Retrieved July 24, 2022.
  22. Metz, Rachel (June 13, 2022). "No, Google's AI is not sentient". CNN Business. Archived from the original on June 15, 2022. Retrieved June 19, 2022.
  23. Sparkles, Matthew (June 13, 2022). "Has Google's LaMDA artificial intelligence really achieved sentience?". New Scientist . Archived from the original on June 13, 2022. Retrieved June 20, 2022.
  24. Grant, Nicole; Metz, Cade (June 12, 2022). "Google Sidelines Engineer Who Claims Its A.I. Is Sentient" . The New York Times . ISSN   0362-4331. Archived from the original on June 12, 2022. Retrieved June 18, 2022.
  25. Alba, Davey (June 14, 2022). "Google Debate Over 'Sentient' Bots Overshadows Deeper AI Issues" . Bloomberg News. Archived from the original on June 14, 2022. Retrieved June 19, 2022.
  26. Leith, Sam (July 7, 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. Retrieved April 28, 2024.
  27. Goldman, Sharon (June 16, 2022). "AI Weekly: LaMDA's 'sentient' AI debate triggers memories of IBM Watson". VentureBeat . Archived from the original on June 19, 2022. Retrieved June 19, 2022.
  28. Johnson, Khari (June 14, 2022). "LaMDA and the Sentient AI Trap" . Wired . Archived from the original on June 14, 2022. Retrieved June 18, 2022.
  29. Oremus, Will (June 17, 2022). "Google's AI passed a famous test — and showed how the test is broken" . The Washington Post . ISSN   0190-8286. Archived from the original on June 18, 2022. Retrieved June 19, 2022.
  30. Christian, Brian (June 21, 2022). "How a Google Employee Fell for the Eliza Effect". The Atlantic . Archived from the original on June 21, 2022. Retrieved February 8, 2023.
  31. Low, Cherlynn (May 11, 2022). "Google's AI Test Kitchen lets you experiment with its natural language model". Engadget . Archived from the original on May 11, 2022. Retrieved June 12, 2022.
  32. Vincent, James (May 11, 2022). "Google is Beta Testing Its AI Future". The Verge . Archived from the original on May 11, 2022. Retrieved June 12, 2022.
  33. Bhattacharya, Ananya (May 11, 2022). "Google is so nervous about what its newest bot will say, it made the app invitation-only". Quartz . Archived from the original on May 12, 2022. Retrieved June 12, 2022.
  34. Vincent, James (August 25, 2022). "Google has opened up the waitlist to talk to its experimental AI chatbot". The Verge . Archived from the original on August 25, 2022. Retrieved August 27, 2022.
  35. Vincent, James (November 2, 2022). "Google's text-to-image AI model Imagen is getting its first (very limited) public outing". The Verge . Archived from the original on November 10, 2022. Retrieved February 15, 2023.
  36. Grant, Nico (January 20, 2023). "Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight" . The New York Times . ISSN   0362-4331. Archived from the original on January 20, 2023. Retrieved February 6, 2023.
  37. Wiggers, Kyle (May 11, 2023). "Hands on with Google's AI-powered music generator". TechCrunch . Archived from the original on May 11, 2023. Retrieved June 10, 2023.
  38. Millman, Ethan (May 11, 2023). "We've Heard the Future of Music. So Far, It Sounds Terrible" . Rolling Stone . Archived from the original on May 11, 2023. Retrieved June 10, 2023.
  39. Bradshaw, Kyle (August 1, 2023). "Google delists AI Test Kitchen app on Android and iOS [Updated]". 9to5Google . Archived from the original on August 2, 2023. Retrieved October 11, 2023.
  40. Alba, Davey; Love, Julia (February 6, 2023). "Google releases ChatGPT rival AI 'Bard' to early testers" . Los Angeles Times . ISSN   0458-3035. Archived from the original on February 6, 2023. Retrieved February 6, 2023.
  41. Schechner, Sam; Kruppa, Miles (February 6, 2023). "Google Opens ChatGPT Rival Bard for Testing, as AI War Heats Up" . The Wall Street Journal . ISSN   0099-9660. Archived from the original on February 6, 2023. Retrieved February 6, 2023.
  42. Nieva, Richard (February 6, 2023). "Google Debuts A ChatGPT Rival Called Bard In Limited Release" . Forbes . Archived from the original on February 7, 2023. Retrieved February 6, 2023.
  43. Mollman, Steve (March 3, 2023). "Google's head of ChatGPT rival Bard reassures employees it's 'a collaborative A.I. service' and 'not search'" . Fortune . Archived from the original on March 4, 2023. Retrieved March 9, 2023.
  44. Elias, Jennifer (March 3, 2023). "Google execs tell employees in testy all-hands meeting that Bard A.I. isn't just about search". CNBC. Archived from the original on March 4, 2023. Retrieved March 11, 2023.
  45. Grant, Nico (March 21, 2023). "Google Releases Bard, Its Competitor in the Race to Create A.I. Chatbots" . The New York Times . ISSN   0362-4331. Archived from the original on March 21, 2023. Retrieved March 21, 2023.
  46. Liedtke, Michael (March 21, 2023). "Google's artificially intelligent 'Bard' set for next stage" . The Washington Post . ISSN   0190-8286. Archived from the original on March 21, 2023. Retrieved March 21, 2023.
  47. Vincent, James (March 21, 2023). "Google opens early access to its ChatGPT rival Bard — here are our first impressions". The Verge . Archived from the original on March 21, 2023. Retrieved March 21, 2023.
  48. Thoppilan et al. 2022, section 3.
  49. Thoppilan et al. 2022, section 3 and appendix E.
  50. Thoppilan et al. 2022, section 5 and 6.
  51. Thoppilan et al. 2022, section 6.2.
  52. Thoppilan et al. 2022, section 3 and appendix D.