Company type | Division
---|---
Industry | Artificial intelligence
Founded | December 11, 2015
Headquarters | Astor Place, New York City, New York, U.S.
Products | LLaMA
Owner | Meta Platforms
Website | ai
Meta AI is an American research laboratory owned by Meta Platforms (formerly Facebook) that develops artificial intelligence and augmented and virtual reality technologies. Meta AI regards itself as an academic research laboratory, focused on generating knowledge for the AI community, and should not be confused with Meta's Applied Machine Learning (AML) team, which focuses on the practical applications of the company's products.
The laboratory was founded as Facebook Artificial Intelligence Research (FAIR), with locations at the Menlo Park, California headquarters, in London, United Kingdom, and at a new laboratory in Manhattan. FAIR was officially announced in September 2013. [1] FAIR was first directed by New York University's Yann LeCun, a deep learning professor and Turing Award winner. [2] Working with NYU's Center for Data Science, FAIR's initial goal was to research data science, machine learning, and artificial intelligence, and to "understand intelligence, to discover its fundamental principles, and to make machines significantly more intelligent". [3] Research at FAIR pioneered the technology that led to face recognition, tagging in photographs, and personalized feed recommendations. [4] Vladimir Vapnik, a pioneer in statistical learning, joined FAIR in 2014. [5] Vapnik is the co-inventor of the support-vector machine and one of the developers of the Vapnik–Chervonenkis theory.
FAIR opened a research center in Paris, France in 2015, [6] and subsequently launched smaller satellite research labs in Seattle, Pittsburgh, Tel Aviv, Montreal and London. [7] In 2016, FAIR partnered with Google, Amazon, IBM, and Microsoft to create the Partnership on Artificial Intelligence to Benefit People and Society, an organization focused on openly licensed research, supporting ethical and efficient research practices, and discussing fairness, inclusivity, and transparency.
In 2018, Jérôme Pesenti, former CTO of IBM's big data group, assumed the role of president of FAIR, while LeCun stepped down to serve as chief AI scientist. [8] The AI Research Rankings 2019, which ranked the top global organizations leading AI research, placed FAIR 25th for 2018. [9] FAIR rose quickly to eighth position in 2019, [10] and maintained eighth position in the 2020 ranking. [11] FAIR had approximately 200 staff in 2018, with the goal of doubling that number by 2020. [12]
FAIR's initial work included research in memory networks, self-supervised learning and generative adversarial networks, text classification and translation, as well as computer vision. [3] FAIR released deep-learning modules for Torch and, in 2017, PyTorch, an open-source machine learning framework [3] that was subsequently used in several deep learning technologies, such as Tesla Autopilot [13] and Uber's Pyro. [14] Also in 2017, FAIR discontinued a research project after its AI bots developed a language that was unintelligible to humans, [15] inciting conversations about dystopian fears of artificial intelligence going out of control. [16] However, FAIR clarified that the research had been shut down because the team had accomplished its initial goal of understanding how languages are generated, rather than out of fear. [15]
FAIR was renamed Meta AI following the rebranding that changed Facebook, Inc. to Meta Platforms Inc. [17]
In 2022, Meta AI predicted the 3D shape of 600 million potential proteins in two weeks. [18]
Artificial intelligence communication requires a machine to understand natural language and to generate language that is natural. Meta AI seeks to improve these technologies to improve safe communication regardless of what language the user might speak. [19] Thus, a central task involves the generalization of natural language processing (NLP) technology to other languages. As such, Meta AI actively works on unsupervised machine translation. [20] [21] Meta AI seeks to improve natural-language interfaces by developing aspects of chitchat dialogue such as repetition, specificity, response-relatedness and question-asking, [22] incorporating personality into image captioning, [23] and generating creativity-based language. [24]
In November 2022, a large language model designed for generating scientific text, Galactica, was released. [25] Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy. [26] Before the cancellation, researchers were working on Galactica Instruct, which would use instruction tuning to allow the model to follow instructions to manipulate LaTeX documents on Overleaf. [27]
In February 2023, Meta AI launched LLaMA (Large Language Model Meta AI), a family of large language models ranging from 7 billion to 65 billion parameters. [citation needed]
Until 2022, Meta AI mainly used CPUs and an in-house custom chip as hardware, before finally switching to Nvidia GPUs. This necessitated a complete redesign of several data centers, since the GPUs required 24 to 32 times the networking capacity and new liquid cooling systems. [28]
The MTIA v1 is Meta's first-generation AI training and inference accelerator, developed specifically for Meta's recommendation workloads. It was fabricated using TSMC's 7 nm process technology and operates at a frequency of 800 MHz. In terms of processing power, the accelerator provides 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision, while maintaining a thermal design power (TDP) of 25 W. [29]
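The quoted peak-throughput figures can be sanity-checked against the clock rate. A quick back-of-the-envelope calculation, assuming peak throughput scales directly with the 800 MHz clock:

```python
# Back-of-the-envelope check of the MTIA v1 figures quoted above.
CLOCK_HZ = 800e6           # 800 MHz operating frequency
INT8_OPS_PER_S = 102.4e12  # 102.4 TOPS at INT8 precision
FP16_OPS_PER_S = 51.2e12   # 51.2 TFLOPS at FP16 precision

# Peak operations completed per clock cycle at each precision.
int8_ops_per_cycle = INT8_OPS_PER_S / CLOCK_HZ
fp16_ops_per_cycle = FP16_OPS_PER_S / CLOCK_HZ

print(int8_ops_per_cycle)  # 128000.0
print(fp16_ops_per_cycle)  # 64000.0
```

The figures are internally consistent: INT8 throughput is exactly double FP16 throughput, a common ratio for accelerators whose arithmetic units process two INT8 operations in place of one FP16 operation.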
Meta AI offers options for users to customize their interaction with its features. Users are able to mute the AI chatbot on platforms like Facebook, Instagram, and WhatsApp, [30] temporarily halting notifications from the chatbot. Some platforms also offer the ability to hide certain AI elements from their interface. To locate the relevant settings, users can consult the platform's help documentation or settings menu.
Concerns
Since May 2024, the Meta AI chatbot has summarized news from various outlets without linking directly to original articles, including in Canada, where news links are banned on its platforms. This use of news content without compensation has raised ethical and legal concerns, especially as Meta continues to reduce news visibility on its platforms. [31]
Meta AI was pre-installed on the second generation of Ray-Ban Meta Smart Glasses on September 27, 2023, as a voice assistant. [32] On April 23, 2024, Meta announced an update to Meta AI on the smart glasses to enable multimodal input via computer vision. [33] On July 23, 2024, Meta announced that Meta AI with Vision would be incorporated into the Meta Quest 3 for detection of physical objects in passthrough mode, replacing the older voice assistant software in the Quest OS. [34]
Jürgen Schmidhuber is a German computer scientist noted for his work in the field of artificial intelligence, specifically artificial neural networks. He is a scientific director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland. He is also director of the Artificial Intelligence Initiative and professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) division at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.
Yann André LeCun is a French-American computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta.
Deep learning is a subset of machine learning methods based on neural networks with representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised.
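As a minimal illustration of stacking artificial neurons into layers, the sketch below hand-wires a two-layer network of threshold units that computes XOR, a function no single-layer network can represent. The weights here are illustrative, chosen by hand rather than learned; in practice they would be found by training with gradient descent on differentiable activations.

```python
def step(x):
    """Threshold (Heaviside) activation: fires iff the input is positive."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one OR-like unit and one AND-like unit.
    h_or = step(x1 + x2 - 0.5)    # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are 1
    # Output layer: OR but not AND, i.e. exclusive or.
    return step(h_or - h_and - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```

The hidden layer re-represents the inputs so that the output unit faces a linearly separable problem, which is the sense in which depth buys representational power.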
Maluuba is a Canadian technology company conducting research in artificial intelligence and language understanding. Founded in 2011, the company was acquired by Microsoft in 2017.
In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.
Wojciech Zaremba is a Polish computer scientist and a founding team member of OpenAI (2016–present), where he leads both the Codex research team and the language team. The teams work on AI that writes computer code and on creating successors to GPT-3, respectively.
Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), refers either to an artificial intelligence (AI) system over which humans can retain intellectual oversight, or to the methods used to achieve this. The main focus is usually on making the reasoning behind the decisions or predictions made by the AI more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
fastText is a library for learning word embeddings and text classification, created by Facebook's AI Research (FAIR) lab. The library allows users to train unsupervised or supervised algorithms for obtaining vector representations of words. Facebook makes pretrained models available for 294 languages. Several papers describe the techniques used by fastText.
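A central idea in fastText is that a word's vector is built from vectors for its character n-grams (plus the whole word itself), which lets the model produce embeddings even for words never seen during training. A minimal pure-Python sketch of the subword extraction; the real library additionally hashes n-grams into a fixed number of buckets and supports a range of n-gram lengths, both omitted here:

```python
def subwords(word, n=3):
    """Character n-grams in the fastText style: the word is wrapped in
    boundary markers '<' and '>' so that prefixes and suffixes become
    distinct features, and the full marked word is one extra feature."""
    marked = f"<{word}>"
    grams = [marked[i:i + n] for i in range(len(marked) - n + 1)]
    return grams + [marked]

print(subwords("where"))
# ['<wh', 'whe', 'her', 'ere', 're>', '<where>']
```

The word's embedding is then the sum (or average) of the vectors associated with each of these features, so "wherever" shares most of its features with "where" and gets a related vector for free.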
Joëlle Pineau is a Canadian computer scientist and Associate Professor at McGill University. She is the global Vice President of Facebook Artificial Intelligence Research (FAIR), now known as Meta AI, and is based in Montreal, Quebec. She was elected a Fellow of the Royal Society of Canada in 2023.
Tomáš Mikolov is a Czech computer scientist working in the field of machine learning. In March 2020, Mikolov became a senior research scientist at the Czech Institute of Informatics, Robotics and Cybernetics.
Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform: a prompt for a text-to-text language model can be a query such as "what is Fermat's little theorem?", a command such as "write a poem about leaves falling", or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, providing relevant context or assigning a role to the AI such as "Act as a native French speaker". A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →", an approach called few-shot learning.
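The few-shot pattern described above can be assembled mechanically. A minimal sketch that builds the French-to-English prompt from the example in the text; the helper name is illustrative, not part of any particular API:

```python
def few_shot_prompt(examples, query):
    """Build a few-shot completion prompt: worked example pairs joined
    by commas, then the query left unfinished for the model to complete."""
    shots = ", ".join(f"{src} → {tgt}" for src, tgt in examples)
    return f"{shots}, {query} →"

prompt = few_shot_prompt([("maison", "house"), ("chat", "cat")], "chien")
print(prompt)  # maison → house, chat → cat, chien →
```

The model infers the translation task purely from the pattern of the examples, without any explicit instruction, which is what makes this few-shot rather than instruction-following.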
A text-to-video model is a machine learning model that uses a natural language description as input to produce a video relevant to the input text. Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of video diffusion models.
AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination is a response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses rather than perceptual experiences.
Generative pre-trained transformers (GPTs) are a type of large language model (LLM) and a prominent framework for generative artificial intelligence. They are artificial neural networks that are used in natural language processing tasks. GPTs are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs.
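GPTs are vastly larger, but the autoregressive principle, predicting the next token from the tokens so far and feeding the prediction back in, can be shown with a toy bigram counter. This is a deliberately tiny stand-in for illustration, not a transformer:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the corpus."""
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def generate(model, start, length):
    """Greedy autoregressive decoding: repeatedly append the most
    frequent successor of the last token generated so far."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram(corpus)
print(" ".join(generate(model, "the", 5)))  # the cat sat on the cat
```

A GPT replaces the bigram table with a transformer conditioned on the whole preceding context, and greedy decoding with sampling, but the generate-one-token-then-recur loop is the same.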
EleutherAI is a grass-roots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI, was formed in a Discord server in July 2020 by Connor Leahy, Sid Black, and Leo Gao to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Institute, a non-profit research institute.
A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks. As language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a self-supervised and semi-supervised training process.
The Pile is an 886.03 GB diverse, open-source dataset of English text created as a training dataset for large language models (LLMs). It was constructed by EleutherAI in 2020 and publicly released on December 31 of that year. It is composed of 22 smaller datasets, including 14 new ones.
Llama is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 3.1, released in July 2024.
Rob Fergus is an American computer scientist working primarily in the fields of machine learning, deep learning, representation learning, and generative models. He is a professor of computer science at the Courant Institute of Mathematical Sciences at New York University (NYU) and a research scientist at DeepMind. He co-founded Meta AI (then FAIR) along with Yann LeCun in September 2013. In 2009, Fergus co-founded the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Lab at NYU, also with Yann LeCun.