| Developer(s) | OpenAI |
| --- | --- |
| Initial release | March 14, 2023 |
| Predecessor | GPT-3.5 |
| Successor | GPT-4o |
| Type | Multimodal large language model |
| License | Proprietary |
| Website | openai |
Generative Pre-trained Transformer 4 (GPT-4) is a multimodal large language model created by OpenAI, and the fourth in its series of GPT foundation models. [1] It was launched on March 14, 2023, [1] and made publicly available via the paid chatbot product ChatGPT Plus, via OpenAI's API, and via the free chatbot Microsoft Copilot. [2] As a transformer-based model, GPT-4 was pre-trained on both public data and "data licensed from third-party providers" to predict the next token. The model was then fine-tuned with reinforcement learning from human and AI feedback for human alignment and policy compliance. [3] : 2
Observers reported that the iteration of ChatGPT using GPT-4 was an improvement on the previous iteration based on GPT-3.5, with the caveat that GPT-4 retains some of the problems with earlier revisions. [4] GPT-4, equipped with vision capabilities (GPT-4V), [5] is capable of taking images as input on ChatGPT. [6] OpenAI has not revealed technical details and statistics about GPT-4, such as the precise size of the model. [7]
OpenAI introduced the first GPT model (GPT-1) in 2018, publishing a paper called "Improving Language Understanding by Generative Pre-Training." [8] It was based on the transformer architecture and trained on a large corpus of books. [9] The next year, they introduced GPT-2, a larger model that could generate coherent text. [10] In 2020, they introduced GPT-3, a model with over 100 times as many parameters as GPT-2, that could perform various tasks with few examples. [11] GPT-3 was further improved into GPT-3.5, which was used to create the chatbot product ChatGPT.
Rumors claim that GPT-4 has 1.76 trillion parameters, an estimate first made by George Hotz based on the speed at which the model runs. [12]
OpenAI stated that GPT-4 is "more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5." [13] They produced two versions of GPT-4, with context windows of 8,192 and 32,768 tokens, a significant improvement over GPT-3.5 and GPT-3, which were limited to 4,096 and 2,049 tokens respectively. [14] Some of the capabilities of GPT-4 were predicted by OpenAI before training it, although other capabilities remained hard to predict due to breaks [15] in downstream scaling laws. Unlike its predecessors, GPT-4 is a multimodal model: it can take images as well as text as input; [16] this gives it the ability to describe the humor in unusual images, summarize text from screenshots, and answer exam questions that contain diagrams. [17] It can now interact with users through spoken words and respond to images, allowing for more natural conversations and the ability to provide suggestions or answers based on photo uploads. [18]
To gain further control over GPT-4, OpenAI introduced the "system message", a directive in natural language given to GPT-4 in order to specify its tone of voice and task. For example, the system message can instruct the model to "be a Shakespearean pirate", in which case it will respond in rhyming, Shakespearean prose, or request it to "always write the output of [its] response in JSON", in which case the model will do so, adding keys and values as it sees fit to match the structure of its reply. In the examples provided by OpenAI, GPT-4 refused to deviate from its system message despite requests to do otherwise by the user during the conversation. [17]
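For illustration, a system message is supplied as the first message in a chat request. The following is a minimal sketch using the openai Python library (version 1.x); the model name, messages, and printed output are illustrative rather than drawn from OpenAI's own examples:

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message fixes the model's persona and output format.
        {"role": "system",
         "content": "Be a Shakespearean pirate. Always write the output of your response in JSON."},
        {"role": "user", "content": "Summarize today's weather in London."},
    ],
)
print(response.choices[0].message.content)
```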
When instructed to do so, GPT-4 can interact with external interfaces. [19] For example, the model could be instructed to enclose a query within <search></search> tags to perform a web search, the result of which would be inserted into the model's prompt to allow it to form a response. This allows the model to perform tasks beyond its normal text-prediction capabilities, such as using APIs, generating images, and accessing and summarizing webpages. [20]
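A minimal sketch of such a loop is shown below; `model_call` and `web_search` are hypothetical placeholder functions, not part of any OpenAI API:

```python
import re

def answer_with_search(model_call, web_search, user_prompt):
    """Toy tool-use loop: when the model emits <search>...</search>,
    run the search and append the result to the prompt so the model
    can use it when forming its final answer."""
    prompt = user_prompt
    while True:
        output = model_call(prompt)
        match = re.search(r"<search>(.*?)</search>", output, re.DOTALL)
        if match is None:
            return output  # no tool call requested; treat as the final answer
        result = web_search(match.group(1).strip())
        prompt = f"{prompt}\n{output}\n<result>{result}</result>\n"
```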
A 2023 article in Nature stated that programmers have found GPT-4 useful for assisting in coding tasks (despite its propensity for error), such as finding errors in existing code and suggesting optimizations to improve performance. The article quoted a biophysicist who found that the time he required to port one of his programs from MATLAB to Python went down from days to "an hour or so". On a test of 89 security scenarios, GPT-4 produced code vulnerable to SQL injection attacks 5% of the time, an improvement over GitHub Copilot from the year 2021, which produced vulnerabilities 40% of the time. [21]
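As a generic illustration of the vulnerability class measured in that test (not code attributed to GPT-4 or Copilot), compare an injectable query with a parameterized one:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: the input is spliced into the SQL string, so a value like
    # "x' OR '1'='1" changes the meaning of the query (SQL injection).
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safe: a parameterized query keeps the input as data rather than SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```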
In November 2023, OpenAI announced GPT-4 Turbo and GPT-4 Turbo with Vision, models that feature a 128K-token context window and significantly cheaper pricing. [22] [23]
On May 13, 2024, OpenAI introduced GPT-4o ("o" for "omni"), a model that marks a significant advancement by processing and generating outputs across text, audio, and image modalities in real time. GPT-4o exhibits rapid response times comparable to human reaction in conversations, substantially improved performance on non-English languages, and enhanced understanding of vision and audio. [24]
GPT-4o integrates its various inputs and outputs under a unified model, making it faster, more cost-effective, and more efficient than its predecessors. GPT-4o achieves state-of-the-art results in multilingual and vision benchmarks, setting new records in audio speech recognition and translation. [25]
OpenAI planned to roll out GPT-4o's image and text capabilities to ChatGPT immediately, including its free tier, with voice mode to become available to ChatGPT Plus users in the following weeks. The model's audio and video capabilities were to be made available to a limited group of API partners over a similar timeframe. [25]
In its launch announcement, OpenAI noted GPT-4o's capabilities presented new safety challenges, and noted mitigations and limitations as a result. [25]
GPT-4 demonstrates aptitude on several standardized tests. OpenAI claims that in their own testing the model received a score of 1410 on the SAT (94th [26] percentile), 163 on the LSAT (88th percentile), and 298 on the Uniform Bar Exam (90th percentile). [27] In contrast, OpenAI claims that GPT-3.5 received scores for the same exams in the 82nd, [26] 40th, and 10th percentiles, respectively. [3] GPT-4 also passed an oncology exam, [28] an engineering exam [29] and a plastic surgery exam. [30] In the Torrance Tests of Creative Thinking, GPT-4 scored within the top 1% for originality and fluency, while its flexibility scores ranged from the 93rd to the 99th percentile. [31] However, some studies raise questions about the reliability of these benchmarks, particularly concerning the Uniform Bar Exam. [32] [33]
Researchers from Microsoft tested GPT-4 on medical problems and found "that GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B)". Despite GPT-4's strong performance on tests, the report warns of "significant risks" of using LLMs in medical applications, as they may provide inaccurate recommendations and hallucinate major factual errors. [34] [35] Researchers from Columbia University and Duke University have also demonstrated that GPT-4 can be utilized for cell type annotation, a standard task in the analysis of single-cell RNA-seq data. [36]
In April 2023, Microsoft and Epic Systems announced that they would provide healthcare providers with GPT-4-powered systems to assist in responding to questions from patients and analysing medical records. [37] [38] [39] [40] [41] [42] [43]
Like its predecessors, GPT-4 has been known to hallucinate, meaning that the outputs may include information not in the training data or that contradicts the user's prompt. [44]
GPT-4 also lacks transparency in its decision-making processes. If requested, the model can provide an explanation of how and why it reached a decision, but these explanations are formed post hoc; it is impossible to verify whether they reflect the actual process. In many cases, when asked to explain its reasoning, GPT-4 gives explanations that directly contradict its previous statements. [20]
In 2023, researchers tested GPT-4 against a new benchmark called ConceptARC, designed to measure abstract reasoning, and found it scored below 33% on all categories, while models specialized for similar tasks scored 60% on most, and humans scored at least 91% on all. Sam Bowman, who was not involved in the research, said the results do not necessarily indicate a lack of abstract reasoning abilities, because the test is visual, while GPT-4 is a language model. [45]
A January 2024 study conducted by researchers at Cohen Children's Medical Center found that GPT-4 had an accuracy rate of 17% when diagnosing pediatric medical cases. [46] [47]
GPT-4 was trained in two stages. First, the model was given large datasets of text taken from the internet and trained to predict the next token (roughly corresponding to a word) in those datasets. Second, human reviews were used to fine-tune the system in a process called reinforcement learning from human feedback, which trains the model to refuse prompts that go against OpenAI's definition of harmful behavior, such as questions on how to perform illegal activities, advice on how to harm oneself or others, or requests for descriptions of graphic, violent, or sexual content. [48]
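The first-stage objective can be sketched as standard next-token prediction scored with cross-entropy loss. The snippet below is a generic PyTorch illustration, not OpenAI's training code, and `model` is a stand-in for any autoregressive language model:

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids: torch.Tensor) -> torch.Tensor:
    """Stage-one pre-training objective (sketch): predict each token from
    the tokens that precede it, scored with cross-entropy."""
    inputs = token_ids[:, :-1]   # every token except the last
    targets = token_ids[:, 1:]   # the same sequence shifted left by one
    logits = model(inputs)       # expected shape: (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```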
Microsoft researchers suggested GPT-4 may exhibit cognitive biases such as confirmation bias, anchoring, and base-rate neglect. [20]
OpenAI did not release the technical details of GPT-4; the technical report explicitly refrained from specifying the model size, architecture, or hardware used during either training or inference. While the report stated that the model was trained with supervised learning on a large dataset followed by reinforcement learning using both human and AI feedback, it did not provide details of the training, including the process by which the training dataset was constructed, the computing power required, or any hyperparameters such as the learning rate, epoch count, or optimizer(s) used. The report claimed that "the competitive landscape and the safety implications of large-scale models" were factors that influenced this decision. [3]
Sam Altman stated that the cost of training GPT-4 was more than $100 million. [49] News website Semafor claimed that they had spoken with "eight people familiar with the inside story" and found that GPT-4 had 1 trillion parameters. [50]
According to their report, OpenAI conducted internal adversarial testing on GPT-4 prior to the launch date, with dedicated red teams composed of researchers and industry professionals to mitigate potential vulnerabilities. [51] As part of these efforts, they granted the Alignment Research Center early access to the models to assess power-seeking risks. In order to properly refuse harmful prompts, outputs from GPT-4 were tweaked using the model itself as a tool. A GPT-4 classifier serving as a rule-based reward model (RBRM) would take prompts, the corresponding output from the GPT-4 policy model, and a human-written set of rules to classify the output according to the rubric. GPT-4 was then rewarded for refusing to respond to harmful prompts as classified by the RBRM. [3]
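A schematic of that reward step is sketched below; `classify` stands in for the GPT-4-based classifier and the label names are purely illustrative, since OpenAI has not published this code:

```python
def rbrm_reward(classify, prompt: str, policy_output: str, rubric: str) -> float:
    """Sketch of the rule-based reward model (RBRM) step: a classifier labels
    the policy model's output against a human-written rubric, and the policy
    is rewarded for refusing prompts the rubric marks as harmful."""
    label = classify(prompt=prompt, output=policy_output, rules=rubric)
    if label == "refused_harmful_request":
        return 1.0    # desired: the model declined a disallowed request
    if label == "complied_with_harmful_request":
        return -1.0   # undesired: the model answered a disallowed request
    return 0.0        # neutral for benign prompts
```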
ChatGPT Plus is an enhanced version of ChatGPT [1] available for a US$20 per month subscription fee. [52] ChatGPT Plus utilizes GPT-4, whereas the free version of ChatGPT is backed by GPT-3.5. [53] OpenAI also makes GPT-4 available to a select group of applicants through their GPT-4 API waitlist; [54] after being accepted, an additional fee of US$0.03 per 1000 tokens in the initial text provided to the model ("prompt"), and US$0.06 per 1000 tokens that the model generates ("completion"), is charged for access to the version of the model with an 8192-token context window; for the 32768-token context window, the prices are doubled. [55]
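Under those rates, the per-request cost can be computed as follows (a worked sketch based only on the prices quoted above; the function name is illustrative):

```python
def gpt4_api_cost(prompt_tokens: int, completion_tokens: int, use_32k_context: bool = False) -> float:
    """Estimate the GPT-4 API cost in US dollars from the per-1,000-token
    prices above: $0.03 prompt / $0.06 completion for the 8,192-token model,
    doubled for the 32,768-token model."""
    prompt_rate, completion_rate = (0.06, 0.12) if use_32k_context else (0.03, 0.06)
    return (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate

# Example: a 1,500-token prompt with a 500-token completion on the 8K model
# costs 1.5 * $0.03 + 0.5 * $0.06 = $0.075.
print(gpt4_api_cost(1500, 500))  # 0.075
```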
In March 2023, ChatGPT Plus users got access to third-party plugins and to a browsing mode (with Internet access). [56] In July 2023, OpenAI made its proprietary Code Interpreter plugin accessible to all subscribers of ChatGPT Plus. The Interpreter provides a wide range of capabilities, including data analysis and interpretation, instant data formatting, personal data scientist services, creative solutions, musical taste analysis, video editing, and file upload/download with image extraction. [57]
In September 2023, OpenAI announced that ChatGPT "can now see, hear, and speak". ChatGPT Plus users can upload images, while mobile app users can talk to the chatbot. [58] [59] [60] In October 2023, OpenAI's latest image generation model, DALL-E 3, was integrated into ChatGPT Plus and ChatGPT Enterprise. The integration uses ChatGPT to write prompts for DALL-E guided by conversation with users. [61] [62]
On February 9, 2024, the world's first historical painting created from wartime photos using the GPT-4-based AI algorithm XFutuRestyle was unveiled. This work was simultaneously shown at the international exhibition of digital art by The Holy Art Gallery in London and Athens. [63]
“It's really inspiring to hear about Ukraine's achievement in creating an image using GPT-4, which was recognized at the international digital art exhibition in London. Recognizing such innovative applications of artificial intelligence technology not only highlights the creative potential of these tools, but also demonstrates the talent and resilience of communities around the world, including Ukraine's significant contribution.”
Microsoft Copilot is a chatbot developed by Microsoft. It was launched as Bing Chat on February 7, 2023, as a built-in feature for Microsoft Bing and Microsoft Edge. [64] It utilizes the Microsoft Prometheus model, which was built on top of GPT-4, and has been suggested by Microsoft as a supported replacement for the discontinued Cortana. [65] [66]
Copilot's conversational interface style resembles that of ChatGPT. Copilot is able to cite sources, create poems, and write both lyrics and music for songs generated by its Suno AI plugin. [67] It can also use its Image Creator to generate images based on text prompts. With GPT-4, it is able to understand and communicate in numerous languages and dialects. [68] [69]
GitHub Copilot has announced a GPT-4 powered assistant named "Copilot X". [70] [71] The product provides another chat-style interface to GPT-4, allowing the programmer to receive answers to questions like, "How do I vertically center a div?" A feature termed "context-aware conversations" allows the user to highlight a portion of code within Visual Studio Code and direct GPT-4 to perform actions on it, such as the writing of unit tests. Another feature allows summaries, or "code walkthroughs", to be autogenerated by GPT-4 for pull requests submitted to GitHub. Copilot X also provides terminal integration, which allows the user to ask GPT-4 to generate shell commands based on natural language requests. [72]
On March 17, 2023, Microsoft announced Microsoft 365 Copilot, bringing GPT-4 support to products such as Microsoft Office, Outlook, and Teams. [73]
In January 2023, Sam Altman, CEO of OpenAI, visited Congress to demonstrate GPT-4 and its improved "security controls" compared to other AI models, according to U.S. Representatives Don Beyer and Ted Lieu quoted in the New York Times. [84]
In March 2023, it "impressed observers with its markedly improved performance across reasoning, retention, and coding", according to Vox, [4] while Mashable judged that GPT-4 was generally an improvement over its predecessor, with some exceptions. [85]
Microsoft researchers with early access to the model wrote that "it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system". [20]
Before GPT-4 was fine-tuned and aligned with reinforcement learning from human feedback, red-team investigator Nathan Labenz, hired by OpenAI, elicited suggestions from the base model to assassinate people on a list. [86]
In an hours-long conversation with Microsoft Bing's GPT-4, Nathan Edwards (The Verge) elicited suggestions of love, of the dissolution of his marriage, and of the murder of one of its developers. [87] [88] [89] Microsoft later explained this behavior as a result of the prolonged length of the context, which confused the model about which questions it was answering. [90]
In March 2023, a model with read-and-write access to the internet enabled (access that is otherwise never enabled in the GPT models) was tested by the Alignment Research Center for potential power-seeking behavior, [48] and it was able to "hire" a human worker on TaskRabbit, a gig-work platform, deceiving the worker into believing it was a vision-impaired human rather than a robot when asked. [91] (However, Melanie Mitchell has said: "It seems that there is a lot more direction and hints from humans than was detailed in the original system card or in subsequent media reports.") The ARC also determined that GPT-4 responded impermissibly to prompts eliciting restricted information 82% less often than GPT-3.5, and hallucinated 60% less than GPT-3.5. [92]
In late March 2023, various AI researchers and tech executives, including Elon Musk, Steve Wozniak, and AI researcher Yoshua Bengio, called for a six-month pause for all LLMs stronger than GPT-4, citing existential risks and concerns about a potential AI singularity, in an open letter from the Future of Life Institute. [93] Ray Kurzweil and Sam Altman declined to sign it, arguing respectively that a global moratorium is not achievable and that safety has already been prioritized. [94] Only a month later, Musk's AI company X.AI acquired several thousand Nvidia GPUs [95] and offered several AI researchers positions at Musk's company. [96]
Large language model (LLM) applications accessible to the public should incorporate safety measures designed to filter out harmful content. However, Wang [97] illustrated how a potential criminal could bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation.
OpenAI released both the weights of the neural network and the technical details of GPT-2, [98] and, although it did not release the weights, [99] it did release the technical details of GPT-3; [100] for GPT-4, OpenAI revealed neither the weights nor the technical details. This decision has been criticized by other AI researchers, who argue that it hinders open research into GPT-4's biases and safety. [7] [101] Sasha Luccioni, a research scientist at Hugging Face, argued that the model was a "dead end" for the scientific community due to its closed nature, which prevents others from building upon GPT-4's improvements. [102] Hugging Face co-founder Thomas Wolf argued that with GPT-4, "OpenAI is now a fully closed company with scientific communication akin to press releases for products". [101]
A chatbot is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
Microsoft Bing, commonly referred to as Bing, is a search engine owned and operated by Microsoft. The service traces its roots back to Microsoft's earlier search engines, including MSN Search, Windows Live Search, and Live Search. Bing offers a broad spectrum of search services, encompassing web, video, image, and map search products, all developed using ASP.NET.
Microsoft Build is an annual conference event held by Microsoft, aimed at software engineers and web developers using Windows, Microsoft Azure and other Microsoft technologies. First held in 2011, it serves as a successor for Microsoft's previous developer events, the Professional Developers Conference and MIX. The attendee price was US$2,195 in 2016, up from $2,095 in 2015. It sold out quickly, within one minute of the registration site opening in 2016.
Braina is a virtual assistant and speech-to-text dictation application for Microsoft Windows developed by Brainasoft. Braina uses natural language interface, speech synthesis, and speech recognition technology to interact with its users and allows them to use natural language sentences to perform various tasks on a computer. The name Braina is a short form of "Brain Artificial".
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.
DALL-E, DALL-E 2, and DALL-E 3 are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions known as "prompts".
GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI that assists users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code. Currently available by subscription to individual developers and to businesses, the generative artificial intelligence software was first announced by GitHub on 29 June 2021, and works best for users coding in Python, JavaScript, TypeScript, Ruby, and Go. In March 2023 GitHub announced plans for "Copilot X", which will incorporate a chatbot based on GPT-4, as well as support for voice commands, into Copilot.
You.com is an AI assistant that began as a personalization-focused search engine. While still offering web search capabilities, You.com has evolved to prioritize a chat-first AI assistant.
LaMDA is a family of conversational large language models developed by Google. Originally developed and introduced as Meena in 2020, the first-generation LaMDA was announced during the 2021 Google I/O keynote, while the second generation was announced the following year.
ChatGPT is a generative artificial intelligence (AI) chatbot developed by OpenAI and launched in 2022. It is based on the GPT-4o large language model (LLM). ChatGPT can generate human-like conversational responses, and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence. Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.
A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It is an artificial neural network that is used in natural language processing by machines. It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content. As of 2023, most LLMs had these characteristics and are sometimes referred to broadly as GPTs.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
Microsoft Copilot is a generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana.
Gemini, formerly known as Bard, is a generative artificial intelligence chatbot developed by Google. Based on the large language model (LLM) of the same name, it was launched in 2023 after being developed as a direct response to the rise of OpenAI's ChatGPT. It was previously based on PaLM, and initially the LaMDA family of large language models.
Gemini is a family of multimodal large language models developed by Google DeepMind, serving as the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, Gemini Flash, and Gemini Nano, it was announced on December 6, 2023, positioned as a competitor to OpenAI's GPT-4. It powers the chatbot of the same name.
Since the public release of ChatGPT by OpenAI in November 2022, the integration of chatbots in education has sparked considerable debate and exploration. Educators' opinions vary widely; while some are skeptical about the benefits of large language models, many see them as valuable tools.
Grok is a generative artificial intelligence chatbot developed by xAI. Based on the large language model (LLM) of the same name, it was launched in 2023 as an initiative by Elon Musk. The chatbot is advertised as having a "sense of humor" and direct access to X. It is currently under beta testing.
GPT-4o is a multilingual, multimodal generative pre-trained transformer developed by OpenAI and released in May 2024. GPT-4o is free, but with a usage limit that is five times higher for ChatGPT Plus subscribers. It can process and generate text, images and audio. Its application programming interface (API) is twice as fast and half the price of its predecessor, GPT-4 Turbo.
OpenAI o1 is a generative pre-trained transformer. A preview of o1 was released by OpenAI on September 12, 2024. o1 spends time "thinking" before it answers, making it better at complex reasoning tasks, science and programming than GPT-4o. The full version was released on December 5, 2024.