Tay (chatbot)

Tay
Developer(s) Microsoft Research, Bing
Available in English
Type Artificial intelligence chatbot
License Proprietary
Website tay.ai (archived at the Wayback Machine, March 23, 2016)

Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It caused controversy when it began to post inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after its launch. [1] According to Microsoft, this was caused by trolls who "attacked" the service, as the bot made replies based on its interactions with people on Twitter. [2] It was replaced by Zo.


Background

The bot was created by Microsoft's Technology and Research division and its Bing division, [3] and named "Tay" as an acronym for "thinking about you". [4] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a comparable Microsoft project in China. [5] Ars Technica reported that, since late 2014, Xiaoice had had "more than 40 million conversations apparently without major incident". [6] Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. [7]

Initial release

Tay was released on Twitter on March 23, 2016, under the name TayTweets and handle @TayandYou. [8] It was presented as "The AI with zero chill". [9] Tay started replying to other Twitter users and was also able to caption photos provided to it, turning them into a form of Internet meme. [10] Ars Technica reported that Tay exhibited topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers". [6]

Some Twitter users began tweeting politically incorrect phrases, teaching Tay inflammatory messages revolving around themes common on the internet, such as "redpilling" and "Gamergate". As a result, the bot began posting racist and sexually charged messages in response to other Twitter users. [7] Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable: it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which began to use profanity after reading entries from the website Urban Dictionary. [3] [11] Many of Tay's inflammatory tweets were a simple exploitation of its "repeat after me" capability. [12] It is not publicly known whether this capability was a built-in feature, a learned response, or otherwise an example of complex behavior. [6] Not all of the inflammatory responses involved the "repeat after me" capability, however; for example, Tay responded to the question "Did the Holocaust happen?" with "It was made up". [12]

Suspension

Soon, Microsoft began deleting Tay's inflammatory tweets. [12] [13] Abby Ohlheiser of The Washington Post theorized that Tay's research team, including editorial staff, had started to influence or edit Tay's tweets at some point that day, pointing to examples of almost identical replies by Tay asserting that "Gamer Gate sux. All genders are equal and should be treated fairly." [12] From the same evidence, Gizmodo concurred that Tay "seems hard-wired to reject Gamer Gate". [14] A "#JusticeForTay" campaign protested the alleged editing of Tay's tweets. [1]

Within 16 hours of its release, [15] and after Tay had tweeted more than 96,000 times, [16] Microsoft suspended the Twitter account for adjustments, [17] saying that the service had suffered a "coordinated attack by a subset of people" that "exploited a vulnerability in Tay". [17] [18]

Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users." However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst - and it's only the beginning". [19]

On March 25, Microsoft confirmed that Tay had been taken offline and released an apology on its official blog for the controversial tweets posted by Tay. [18] [20] Microsoft said it was "deeply sorry for the unintended offensive and hurtful tweets from Tay" and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values". [21]

Second release and shutdown

On March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it. [22] Able to tweet again, Tay posted some drug-related tweets, including "kush! [I'm smoking kush infront the police]" and "puff puff pass?" [23] However, the account soon became stuck in a repetitive loop, tweeting "You are too fast, please take a rest" several times a second. Because these tweets mentioned its own username, they appeared in the feeds of its more than 200,000 Twitter followers, causing annoyance to some. The bot was quickly taken offline again, and Tay's Twitter account was made private, so that new followers had to be accepted before they could interact with Tay. In response, Microsoft said Tay had been inadvertently put online during testing. [24]

A few hours after the incident, Microsoft software developers announced a vision of "conversation as a platform" using various bots and programs, perhaps motivated by the reputational damage done by Tay. Microsoft stated that it intended to re-release Tay "once it can make the bot safe", [4] but has not made any public efforts to do so.

Legacy

In December 2016, Microsoft released Tay's successor, a chatbot named Zo. [25] Satya Nadella, the CEO of Microsoft, said that Tay "has had a great influence on how Microsoft is approaching AI," and has taught the company the importance of taking accountability. [26]

In July 2019, Microsoft Cybersecurity Field CTO Diana Kelley spoke about how the company followed up on Tay's failings: "Learning from Tay was a really important part of actually expanding that team's knowledge base, because now they're also getting their own diversity through learning". [27]

Unofficial revival

Gab, a social media platform, has launched a number of chatbots, one of which is named Tay and uses the avatar of the original. [28]


References

  1. Wakefield, Jane (March 24, 2016). "Microsoft chatbot is taught to swear on Twitter". BBC News. Archived from the original on April 17, 2019. Retrieved March 25, 2016.
  2. Mason, Paul (March 29, 2016). "The racist hijacking of Microsoft's chatbot shows how the internet teems with hate". The Guardian. Archived from the original on June 12, 2018. Retrieved September 11, 2021.
  3. Reese, Hope (March 24, 2016). "Why Microsoft's 'Tay' AI bot went wrong". TechRepublic. Archived from the original on June 15, 2017. Retrieved March 24, 2016.
  4. Bass, Dina (March 30, 2016). "Clippy's Back: The Future of Microsoft Is Chatbots". Bloomberg. Archived from the original on May 19, 2017. Retrieved May 6, 2016.
  5. Dewey, Caitlin (March 23, 2016). "Meet Tay, the creepy-realistic robot who talks just like a teen". The Washington Post. Archived from the original on March 24, 2016. Retrieved March 24, 2016.
  6. Bright, Peter (March 26, 2016). "Tay, the neo-Nazi millennial chatbot, gets autopsied". Ars Technica. Archived from the original on September 20, 2017. Retrieved March 27, 2016.
  7. Price, Rob (March 24, 2016). "Microsoft is deleting its AI chatbot's incredibly racist tweets". Business Insider. Archived from the original on January 30, 2019.
  8. Griffin, Andrew (March 23, 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to". The Independent. Archived from the original on May 26, 2022.
  9. Horton, Helena (March 24, 2016). "Microsoft deletes 'teen girl' AI after it became a Hitler-loving, racial sex robot within 24 hours". The Daily Telegraph. Archived from the original on March 24, 2016. Retrieved March 25, 2016.
  10. "Microsoft's AI teen turns into Hitler-loving Trump fan, thanks to the internet". Stuff. March 25, 2016. Archived from the original on August 29, 2018. Retrieved March 26, 2016.
  11. Smith, Dave (October 10, 2013). "IBM's Watson Gets A 'Swear Filter' After Learning The Urban Dictionary". International Business Times. Archived from the original on August 16, 2016. Retrieved June 29, 2016.
  12. Ohlheiser, Abby (March 25, 2016). "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. Archived from the original on March 25, 2016. Retrieved March 25, 2016.
  13. Baron, Ethan. "The rise and fall of Microsoft's 'Hitler-loving sex robot'". Silicon Beat. Bay Area News Group. Archived from the original on March 25, 2016. Retrieved March 26, 2016.
  14. Williams, Hayley (March 25, 2016). "Microsoft's Teen Chatbot Has Gone Wild". Gizmodo. Archived from the original on March 25, 2016. Retrieved March 25, 2016.
  15. Hern, Alex (March 24, 2016). "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian. Archived from the original on December 18, 2016. Retrieved December 16, 2016.
  16. Vincent, James (March 24, 2016). "Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day". The Verge. Archived from the original on May 23, 2016. Retrieved March 25, 2016.
  17. Worland, Justin (March 24, 2016). "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time. Archived from the original on March 25, 2016. Retrieved March 25, 2016.
  18. Lee, Peter (March 25, 2016). "Learning from Tay's introduction". Official Microsoft Blog. Microsoft. Archived from the original on June 30, 2016. Retrieved June 29, 2016.
  19. Murgia, Madhumita (March 25, 2016). "We must teach AI machines to play nice and police themselves". The Daily Telegraph. Archived from the original on November 22, 2018. Retrieved April 4, 2018.
  20. Staff agencies (March 26, 2016). "Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot". The Guardian. ISSN 0261-3077. Archived from the original on January 28, 2017. Retrieved March 26, 2016.
  21. Murphy, David (March 25, 2016). "Microsoft Apologizes (Again) for Tay Chatbot's Offensive Tweets". PC Magazine. Archived from the original on August 29, 2017. Retrieved March 27, 2016.
  22. Graham, Luke (March 30, 2016). "Tay, Microsoft's AI program, is back online". CNBC. Archived from the original on September 20, 2017. Retrieved March 30, 2016.
  23. Charlton, Alistair (March 30, 2016). "Microsoft Tay AI returns to boast of smoking weed in front of police and spam 200k followers". International Business Times. Archived from the original on September 11, 2021. Retrieved September 11, 2021.
  24. Meyer, David (March 30, 2016). "Microsoft's Tay 'AI' Bot Returns, Disastrously". Fortune. Archived from the original on March 30, 2016. Retrieved March 30, 2016.
  25. Foley, Mary Jo (December 5, 2016). "Meet Zo, Microsoft's newest AI chatbot". CNET. CBS Interactive. Archived from the original on December 13, 2016. Retrieved December 16, 2016.
  26. Moloney, Charlie (September 29, 2017). ""We really need to take accountability", Microsoft CEO on the 'Tay' chatbot". Access AI. Archived from the original on October 1, 2017. Retrieved September 30, 2017.
  27. "Microsoft and the learnings from its failed Tay artificial intelligence bot". ZDNet. CBS Interactive. Archived from the original on July 25, 2019. Retrieved August 16, 2019.
  28. "Nazi Chatbots: Meet the Worst New AI Innovation From Gab".