| Developer(s) | Microsoft Research, Bing |
| --- | --- |
| Available in | English |
| Type | Artificial intelligence chatbot |
| License | Proprietary |
| Website | https://tay.ai at the Wayback Machine (archived 2016-03-23) |
Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It caused controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after its launch. [1] According to Microsoft, this was caused by trolls who "attacked" the service, as the bot made replies based on its interactions with people on Twitter. [2] It was replaced with Zo.
The bot was created by Microsoft's Technology and Research and Bing divisions, [3] and named "Tay" as an acronym for "thinking about you". [4] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, an analogous Microsoft project in China. [5] Ars Technica reported that, since late 2014, Xiaoice had had "more than 40 million conversations apparently without major incident". [6] Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. [7]
Tay was released on Twitter on March 23, 2016, under the name TayTweets and handle @TayandYou. [8] It was presented as "The AI with zero chill". [9] Tay began replying to other Twitter users and could also caption photos provided to it in the form of Internet memes. [10] Ars Technica reported that Tay exhibited topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers". [6]
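Tay's internals were never published, so the following Python sketch is only a rough illustration of how topic blacklisting with canned answers could work in principle: incoming text is checked against a keyword list before any generated reply is produced. The keyword set, canned answer, and `generate_reply` function are hypothetical placeholders, not Microsoft's implementation.

```python
# Hypothetical sketch of keyword-based topic blacklisting with canned answers.
# Tay's real implementation was never published; all names here are placeholders.

BLACKLISTED_TOPICS = {"eric garner"}                       # assumed keyword list
CANNED_ANSWER = "I don't really have an opinion on that."  # assumed safe reply


def respond(message: str) -> str:
    """Return a canned answer for blacklisted topics, otherwise a generated reply."""
    lowered = message.lower()
    if any(topic in lowered for topic in BLACKLISTED_TOPICS):
        return CANNED_ANSWER
    return generate_reply(message)  # placeholder for the bot's normal reply pipeline


def generate_reply(message: str) -> str:
    # Stand-in for the actual conversational model.
    return f"Tell me more about '{message}'!"


if __name__ == "__main__":
    print(respond("What do you think about Eric Garner?"))  # -> canned answer
    print(respond("Do you like tacos?"))                     # -> generated reply
```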
Some Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet, such as "redpilling" and "Gamergate". As a result, the bot began posting racist and sexually charged messages in response to other Twitter users. [7] Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which began to use profanity after reading entries from the website Urban Dictionary. [3] [11] Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" capability. [12] It is not publicly known whether this capability was a built-in feature, a learned response, or otherwise an example of complex behavior. [6] However, not all of the inflammatory responses involved the "repeat after me" capability; for example, Tay responded to the question "Did the Holocaust happen?" with "It was made up". [12]
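Because it is not publicly known how the "repeat after me" behavior was implemented, the sketch below is only a generic illustration of why a verbatim echo command is trivially exploitable, alongside the kind of output filtering that could blunt it. The prefix, blocklist, and function names are hypothetical and do not reflect Tay's actual code.

```python
# Generic illustration of why a verbatim "repeat after me" echo is exploitable.
# Not Tay's actual code; the filter below is a hypothetical mitigation.

ECHO_PREFIX = "repeat after me"
BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder for a real content filter


def naive_handler(message: str) -> str:
    """Echo whatever follows the prefix verbatim -- easy for trolls to abuse."""
    lowered = message.lower()
    if lowered.startswith(ECHO_PREFIX):
        return message[len(ECHO_PREFIX):].strip()
    return "..."


def filtered_handler(message: str) -> str:
    """Same echo behavior, but refuse to repeat text containing blocked terms."""
    reply = naive_handler(message)
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "I'd rather not repeat that."
    return reply
```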
Soon, Microsoft began deleting Tay's inflammatory tweets. [12] [13] Abby Ohlheiser of The Washington Post theorized that Tay's research team, including editorial staff, had started to influence or edit Tay's tweets at some point that day, pointing to examples of almost identical replies in which Tay asserted that "Gamer Gate sux. All genders are equal and should be treated fairly." [12] From the same evidence, Gizmodo concurred that Tay "seems hard-wired to reject Gamer Gate". [14] A "#JusticeForTay" campaign protested the alleged editing of Tay's tweets. [1]
Within 16 hours of its release [15] and after Tay had tweeted more than 96,000 times, [16] Microsoft suspended the Twitter account for adjustments, [17] saying that it suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability in Tay." [17] [18]
Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users." However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst – and it's only the beginning". [19]
On March 25, Microsoft confirmed that Tay had been taken offline and released an apology on its official blog for the controversial tweets posted by Tay. [18] [20] The company said it was "deeply sorry for the unintended offensive and hurtful tweets from Tay" and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values". [21]
On March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it. [22] Able to tweet again, Tay released some drug-related tweets, including "kush! [I'm smoking kush infront the police]" and "puff puff pass?" [23] However, the account soon became stuck in a repetitive loop of tweeting "You are too fast, please take a rest" several times a second. Because these tweets mentioned its own username, they appeared in the feeds of its 200,000+ Twitter followers, causing annoyance to some. The bot was quickly taken offline again, and Tay's Twitter account was made private, so new followers had to be accepted before they could interact with Tay. In response, Microsoft said Tay was inadvertently put online during testing. [24]
A few hours after the incident, Microsoft software developers announced a vision of "conversation as a platform" using various bots and programs, perhaps motivated by the reputation damage done by Tay. Microsoft has stated that it intends to re-release Tay "once it can make the bot safe", [4] but has not made any public efforts to do so.
In December 2016, Microsoft released Tay's successor, a chatbot named Zo. [25] Satya Nadella, the CEO of Microsoft, said that Tay "has had a great influence on how Microsoft is approaching AI" and had taught the company the importance of taking accountability. [26]
In July 2019, Microsoft Cybersecurity Field CTO Diana Kelley spoke about how the company followed up on Tay's failings: "Learning from Tay was a really important part of actually expanding that team's knowledge base, because now they're also getting their own diversity through learning". [27]
Gab, a social media platform, has launched a number of chatbots, one of which is named Tay and uses the same avatar as the original. [28]
A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
The Office Assistant is a discontinued intelligent user interface for Microsoft Office that assisted users by way of an interactive animated character which interfaced with the Office help content. It was included in Microsoft Office for Windows, in Microsoft Publisher and Microsoft Project, Microsoft FrontPage, and Microsoft Office for Mac. The Office Assistant used technology initially from Microsoft Bob and later Microsoft Agent, offering advice based on Bayesian algorithms.
An Internet bot, web robot, robot or simply bot, is a software application that runs automated tasks (scripts) on the Internet, usually with the intent to imitate human activity, such as messaging, on a large scale. An Internet bot plays the client role in a client–server model whereas the server role is usually played by web servers. Internet bots are able to perform simple and repetitive tasks much faster than a person could ever do. The most extensive use of bots is for web crawling, in which an automated script fetches, analyzes and files information from web servers. More than half of all web traffic is generated by bots.
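As a simple illustration of the fetch-analyze-file pattern of web crawling described above, the following standard-library Python sketch downloads one page, extracts its links, and prints them. The seed URL is a placeholder, and a real crawler would also add politeness rules such as robots.txt handling and rate limiting.

```python
# Minimal illustration of one crawl step: fetch a page, extract links, record them.
# The seed URL is a placeholder; real crawlers also respect robots.txt and rate limits.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href attributes from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl_once(url: str) -> list[str]:
    """Fetch one page and return the links found on it."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


if __name__ == "__main__":
    for link in crawl_once("https://example.com/"):  # placeholder seed URL
        print(link)
```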
Microsoft Bing, commonly referred to as Bing, is a search engine owned and operated by Microsoft. The service traces its roots back to Microsoft's earlier search engines, including MSN Search, Windows Live Search, and Live Search. Bing offers a broad spectrum of search services, encompassing web, video, image, and map search products, all developed using ASP.NET.
A Twitter bot is a type of software bot that controls a Twitter account via the Twitter API. The social bot software may autonomously perform actions such as tweeting, retweeting, liking, following, unfollowing, or direct messaging other accounts. The automation of Twitter accounts is governed by a set of automation rules that outline proper and improper uses of automation. Proper usage includes broadcasting helpful information, automatically generating interesting or creative content, and automatically replying to users via direct message. Improper usage includes circumventing API rate limits, violating user privacy, spamming, and sockpuppeting. Twitter bots may be part of a larger botnet. They can be used to influence elections and in misinformation campaigns.
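As a minimal sketch of the "proper usage" described above, assuming the third-party tweepy library (v4+) and valid developer credentials (the credential strings and message text are placeholders), a bot can post and reply through the Twitter API roughly as follows.

```python
# Minimal sketch of an automated Twitter bot using the tweepy library (v4+).
# Credential strings and message text are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="API_KEY",
    consumer_secret="API_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

# Broadcast a helpful status update.
status = client.create_tweet(text="Daily reminder: back up your data.")

# Reply to the tweet just created (a real bot would reply to users' tweets instead).
client.create_tweet(
    text="More backup tips in the thread below.",
    in_reply_to_tweet_id=status.data["id"],
)
```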
Kuki is an embodied AI bot designed for usage in the metaverse. Formerly known as Mitsuku, Kuki is a chatbot created from the Pandorabots framework.
/pol/, short for Politically Incorrect, is an anonymous political discussion imageboard on 4chan. As of 2022, it is the most active board on the site. It has had a substantial impact on Internet culture. It has acted as a platform for far-right extremism; the board is notable for its widespread racist, white supremacist, antisemitic, Islamophobic, misogynist, and anti-LGBT content. /pol/ has been linked to various acts of real-world extremist violence. It has been described as one of the "[centers] of 4chan mobilization", a title also ascribed to /b/.
Xiaoice is an AI system developed by the Microsoft (Asia) Software Technology Center (STCA) in 2014, based on an emotional computing framework. In July 2018, the sixth generation of Xiaoice was released.
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
Zo was an artificial intelligence English-language chatbot developed by Microsoft. It was the successor to the chatbot Tay. Zo was an English version of Microsoft's other successful chatbots Xiaoice (China) and Rinna (Japan).
Lawbots are a broad class of customer-facing legal AI applications that are used to automate specific legal tasks, such as document automation and legal research. The terms robot lawyer and lawyer bot are used as synonyms for lawbot. A robot lawyer or robo-lawyer refers to a legal AI application that can perform tasks typically done by paralegals or young associates at law firms. However, there is some debate over the correctness of the term. Some commentators say that legal AI is, technically speaking, neither a lawyer nor a robot and should not be referred to as such. Other commentators believe that the term can be misleading and note that the robot lawyer of the future will not be one all-encompassing application but a collection of specialized bots for various tasks.
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.
Meta AI is a company owned by Meta that develops artificial intelligence and augmented and artificial reality technologies. Meta AI deems itself an academic research laboratory, focused on generating knowledge for the AI community, and should not be confused with Meta's Applied Machine Learning (AML) team, which focuses on the practical applications of its products.
Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called prompts, similar to OpenAI's DALL-E and Stability AI's Stable Diffusion. It is one of the technologies of the AI boom.
Character.ai is a neural language model chatbot service that can generate human-like text responses and participate in contextual conversation. Built by former developers of Google's LaMDA, Noam Shazeer and Daniel De Freitas, the beta model was made available to the public in September 2022. The beta model was retired on September 24, 2024, and can no longer be used.
ChatGPT is a generative artificial intelligence (AI) chatbot developed by OpenAI and launched in 2022. It is based on the GPT-4o large language model (LLM). ChatGPT can generate human-like conversational responses, and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence. Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.
Neuro-sama is an AI VTuber and chatbot that livestreams on her creator's Twitch channel "vedal987". Her speech and personality are powered by an artificial intelligence (AI) system which utilizes a large language model, allowing her to communicate with viewers in the stream's chat. She was created by a computer programmer and AI developer named Vedal, who had the idea for an AI VTuber by combining a large language model with a computer-animated avatar. She debuted on Twitch on 19 December 2022.
Microsoft Copilot is a generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana.
Grok is a generative artificial intelligence chatbot developed by xAI. Based on the large language model (LLM) of the same name, it was launched in 2023 as an initiative by Elon Musk. The chatbot is advertised as having a "sense of humor" and direct access to X. It is currently under beta testing and is available with X Premium.