Meta AI

Industry: Artificial intelligence
Founded: December 11, 2015
Headquarters: Astor Place, New York City, New York, U.S.
Owner: Meta Platforms
Website: ai.meta.com

Meta AI is an artificial intelligence laboratory owned by Meta Platforms Inc. (formerly Facebook, Inc.). Meta AI develops various forms of artificial intelligence, as well as augmented and virtual reality technologies. Meta AI is also an academic research laboratory focused on generating knowledge for the AI community, in contrast to Facebook's Applied Machine Learning (AML) team, which focuses on practical applications of its products.

History

The laboratory started as Facebook Artificial Intelligence Research (FAIR), with locations at the Menlo Park, California, headquarters, in London, United Kingdom, and at a new laboratory in Manhattan. FAIR was officially announced in December 2013. [1] FAIR was directed by New York University's Yann LeCun, a deep learning professor and Turing Award winner. [2] Working with NYU's Center for Data Science, FAIR's initial goal was to research data science, machine learning, and artificial intelligence and to "understand intelligence, to discover its fundamental principles, and to make machines significantly more intelligent". [3] Research at FAIR pioneered the technology behind face recognition, tagging in photographs, and personalized feed recommendation. [4] Vladimir Vapnik, a pioneer in statistical learning, joined FAIR in 2014; [5] he is the co-inventor of the support-vector machine and one of the developers of the Vapnik–Chervonenkis theory.

FAIR opened a research center in Paris, France, in 2015, [6] and subsequently launched smaller satellite research labs in Seattle, Pittsburgh, Tel Aviv, Montreal, and London. [7] In 2016, FAIR partnered with Google, Amazon, IBM, and Microsoft to create the Partnership on Artificial Intelligence to Benefit People and Society, an organization focused on openly licensed research, supporting ethical and efficient research practices, and discussing fairness, inclusivity, and transparency.

In 2018, Jérôme Pesenti, former CTO of IBM's big data group, assumed the role of president of FAIR, while LeCun stepped down to serve as chief AI scientist. [8] FAIR was placed 25th in the AI Research Rankings 2019, which ranked the top global organizations leading AI research. [9] It rose to eighth position in 2019 [10] and held that position in the 2020 ranking. [11] FAIR had approximately 200 staff in 2018 and aimed to double that number by 2020. [12]

FAIR's initial work included research in learning-model enabled memory networks, self-supervised learning, generative adversarial networks, text classification and translation, and computer vision. [3] FAIR released deep-learning modules for Torch and, in 2017, PyTorch, an open-source machine learning framework, [3] which was subsequently used in several deep learning technologies, such as Tesla's Autopilot [13] and Uber's Pyro. [14] Also in 2017, FAIR discontinued a research project once its AI bots developed a language that was unintelligible to humans, [15] prompting discussion of dystopian fears of artificial intelligence going out of control. [16] FAIR clarified, however, that the project had been shut down because the team had accomplished its initial goal of understanding how languages are generated, not out of fear. [15]

FAIR was renamed Meta AI following the rebranding that changed Facebook, Inc. to Meta Platforms Inc. [17]

In 2022, Meta AI predicted the 3D shapes of 600 million potential proteins in two weeks. [18]

Current research

In the February 23, 2022, live event Inside the Lab: Building for the Metaverse with AI, the Meta AI team discussed major advances in its artificial intelligence research and development. [19] One such tool is BuilderBot, which allows users to generate virtual worlds using voice commands. Others include No Language Left Behind, a system for automatic translation between written languages, and the Universal Speech Translator, a system for instantaneous speech-to-speech translation.

Computer vision

Meta AI's computer vision research aims to extract information about the environment from digital images and videos. [20] One example of computer vision technology developed by Meta AI is panoptic segmentation, which not only recognizes objects in the foreground but also classifies the scenes in the background. [21] Meta AI also seeks to improve Visual Question Answering technology, in which a machine answers human users' questions about images; using cycle-consistency, the machine generates a question in addition to the answer in order to handle linguistic variation in how questions are phrased. [22]
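The core idea behind panoptic segmentation, assigning per-object instance labels to countable foreground "things" while giving background "stuff" regions only a class label, can be sketched in a few lines. The merge function and toy label maps below are illustrative assumptions for exposition, not Meta AI's implementation:

```python
# Illustrative sketch of panoptic segmentation's output format: combine a
# semantic map (every pixel gets a class, including background "stuff") with
# an instance map (each foreground "thing" gets its own id) into one
# panoptic label per pixel.

def panoptic_merge(semantic, instance, thing_classes):
    """Merge semantic and instance maps into per-pixel (class, id) labels.

    semantic: 2D list of class names per pixel
    instance: 2D list of instance ids per pixel (0 = no instance)
    thing_classes: set of classes treated as countable objects
    "Stuff" classes share instance id 0.
    """
    panoptic = []
    for sem_row, inst_row in zip(semantic, instance):
        row = []
        for cls, inst in zip(sem_row, inst_row):
            # "things" keep their instance id; "stuff" regions collapse to id 0
            row.append((cls, inst if cls in thing_classes else 0))
        panoptic.append(row)
    return panoptic

semantic = [["sky", "sky", "person"],
            ["road", "person", "person"]]
instance = [[0, 0, 1],
            [0, 2, 1]]

labels = panoptic_merge(semantic, instance, thing_classes={"person"})
print(labels[0][2])  # ('person', 1): a countable object with its own id
print(labels[0][0])  # ('sky', 0): background "stuff", no instance id
```

In a real system the semantic and instance maps come from a trained network and overlaps must be resolved; the merge step shown is only the final bookkeeping.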

Natural language processing and conversational AI

Communicating with a machine requires it both to understand natural language and to generate language that sounds natural. Meta AI seeks to advance these technologies to enable safe communication regardless of the language a user speaks. [23] A central task is therefore generalizing natural language processing (NLP) technology to other languages, and Meta AI actively works on unsupervised machine translation. [24] [25] Meta AI also seeks to improve natural-language interfaces by developing aspects of chitchat dialogue such as repetition, specificity, response-relatedness, and question-asking, [26] incorporating personality into image captioning, [27] and generating creativity-based language. [28]

In 2018, Meta AI launched the open-source PyText, a modeling framework focused on NLP systems. [29]

In 2023, Meta AI announced and open-sourced LLaMA (Large Language Model Meta AI), a 65-billion-parameter large language model. [30]
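To give a sense of scale, a model of this size is dominated by weight storage. The back-of-envelope below assumes a given number of bytes per parameter (our assumption for illustration, not a published Meta figure):

```python
# Rough memory footprint of a 65-billion-parameter model's weights alone
# (excluding activations, optimizer state, and KV caches).

def weight_memory_gb(n_params, bytes_per_param):
    """Weight storage in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

params = 65e9  # 65 billion parameters
print(weight_memory_gb(params, 2))  # 130.0 GB at 16-bit precision
print(weight_memory_gb(params, 1))  # 65.0 GB if quantized to 8 bits
```

Either figure exceeds a single consumer GPU's memory, which is why models at this scale are typically sharded across several accelerators for inference.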

Ranking and recommendations

Facebook and Instagram use Meta AI research to rank and recommend content in their news feeds, ads, and search results. [31] Meta AI has also introduced ReAgent, a toolset for generating decisions and evaluating user feedback. [32]

Systems research

Machine learning and AI depend on the development of novel algorithms, software, and hardware technologies. As such, Meta AI's systems research teams study computer languages, compilers, and hardware applications. [33]

Theory

Meta AI studies the mathematical and theoretical foundations of artificial intelligence. Meta AI has publications in learning theory, optimization, and signal processing. [34]

Hardware

The MTIA v1 is Meta's first-generation AI training and inference accelerator, developed specifically for Meta's recommendation workloads. It was fabricated using TSMC's 7 nm process technology and operates at a frequency of 800 MHz. In terms of processing power, the accelerator provides 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision, while maintaining a thermal design power (TDP) of 25 W. [35] [36] [37]
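The quoted peak figures can be cross-checked against the clock frequency and the 64-PE grid described below. The division here is our own arithmetic on the published numbers, not an additional Meta specification:

```python
# Sanity check of the MTIA v1 figures: peak throughput divided by clock
# frequency and PE count gives operations per PE per cycle.
freq_hz = 800e6          # 800 MHz clock
n_pes = 64               # 8x8 grid of processing elements
int8_tops = 102.4e12     # 102.4 TOPS at INT8
fp16_flops = 51.2e12     # 51.2 TFLOPS at FP16

int8_per_pe_cycle = int8_tops / freq_hz / n_pes
fp16_per_pe_cycle = fp16_flops / freq_hz / n_pes
print(int8_per_pe_cycle)  # 2000.0 INT8 ops per PE per cycle
print(fp16_per_pe_cycle)  # 1000.0 FP16 ops per PE per cycle
```

The 2:1 ratio between INT8 and FP16 throughput is consistent with fixed-function matrix units that process twice as many 8-bit operands as 16-bit ones per cycle.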

The accelerator is structured around a grid of 64 processing elements (PEs), arranged in an 8×8 configuration, and is furnished with on-chip and off-chip memory resources along with the necessary interconnects. Each PE houses two processor cores (one with a vector extension) and several fixed-function units optimized for tasks such as matrix multiplication, accumulation, data movement, and nonlinear function calculation. The processor cores use the RISC-V open instruction set architecture (ISA), extensively customized to perform the required compute and control tasks.

The accelerator's memory subsystem uses LPDDR5 for off-chip DRAM resources and can be scaled up to 128 GB. Additionally, it possesses 128 MB of on-chip SRAM that is shared amongst all the PEs for faster access to frequently used data and instructions. The design encourages parallelism and data reuse, offering thread and data-level parallelism (TLP and DLP), instruction-level parallelism (ILP), and memory-level parallelism (MLP).

MTIA accelerators are mounted on compact dual M.2 boards, enabling easier integration into a server. The boards connect to the host CPU via PCIe Gen4 x8 links and have a power consumption as low as 35 W. The servers hosting these accelerators utilize the Yosemite V3 server specification from the Open Compute Project. Each server houses 12 accelerators, interconnected through a hierarchy of PCIe switches, allowing workloads to be distributed across multiple accelerators and executed concurrently.
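The per-server figures above imply simple aggregate totals. The sums below are our own arithmetic on the quoted numbers, not published Meta specifications:

```python
# Server-level totals implied by the MTIA deployment figures:
# 12 accelerator boards per Yosemite V3 server.
n_accel = 12             # accelerators per server
board_power_w = 35       # per-board power quoted above (W)
accel_int8_tops = 102.4  # per-accelerator INT8 peak (TOPS)

total_power_w = n_accel * board_power_w
total_int8_tops = round(n_accel * accel_int8_tops, 1)
print(total_power_w)    # 420 W of accelerator power per server
print(total_int8_tops)  # 1228.8 peak INT8 TOPS per server
```

At roughly 420 W of accelerator power for about 1.2 peak INT8 POPS, the design prioritizes efficiency for recommendation workloads over raw per-chip performance.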

User controls

Meta AI offers limited options for users to customize their interaction with its features. Users may be able to mute the AI chatbot on platforms such as Facebook, Instagram, and WhatsApp; how to turn off Meta AI has become a heavily searched topic across these apps. [38] Muting temporarily stops notifications from the AI. Some platforms may also offer the ability to hide certain AI elements from their interface. To locate the relevant settings, users can search the platform's help documentation or settings menu.


References

  1. "NYU "Deep Learning" Professor LeCun Will Head Facebook's New Artificial Intelligence Lab". TechCrunch. 9 December 2013. Retrieved 2022-05-08.
  2. "Yann LeCun - A.M. Turing Award Laureate". amturing.acm.org. Retrieved 2022-05-08.
  3. "FAIR turns five: What we've accomplished and where we're headed". Engineering at Meta. 2018-12-05. Retrieved 2022-05-08.
  4. Metz, Cade (December 12, 2013). "Facebook's 'Deep Learning' Guru Reveals the Future of AI". Wired Business. Retrieved May 7, 2022.
  5. "Facebook's AI team hires Vladimir Vapnik, father of the popular support vector machine algorithm". VentureBeat. 2014-11-25. Retrieved 2022-05-08.
  6. Dillet, Romain (June 2, 2015). "Facebook Opens New AI Research Center in Paris". TechCrunch. Retrieved May 7, 2022.
  7. "Facebook Opens New AI Research Center In Paris". TechCrunch. 2 June 2015. Retrieved 2022-05-08.
  8. Dave, Greshgorn (January 23, 2018). "The head of Facebook's AI research is stepping into a new role as it shakes up management". Quartz. Retrieved May 7, 2022.
  9. Chuvpilo, Gleb (2021-05-19). "Who's Ahead in AI Research? Insights from NIPS, Most Prestigious AI Conference". Medium. Retrieved 2022-05-08.
  10. Chuvpilo, Gleb (2021-05-19). "AI Research Rankings 2019: Insights from NeurIPS and ICML, Leading AI Conferences". Medium. Retrieved 2022-05-08.
  11. Chuvpilo, Gleb (2021-05-19). "AI Research Rankings 2020: Can the United States Stay Ahead of China?". Medium. Retrieved 2022-05-08.
  12. Shead, Sam. "Facebook Plans To Double Size Of AI Research Unit By 2020". Forbes. Retrieved 2022-05-08.
  13. Karpathy, Andrej. "PyTorch at Tesla - Andrej Karpathy, Tesla". YouTube.
  14. "Pyro". pyro.ai. Retrieved 2022-05-08.
  15. "Facebook researchers shut down AI bots that started speaking in a language unintelligible to humans". Tech2. 2017-07-31. Retrieved 2022-05-08.
  16. Magid, Larry. "Dystopian Fear Of Facebook's AI Experiment Is Highly Exaggerated". Forbes. Retrieved 2022-05-08.
  17. Murphy Kelly, Samantha (October 29, 2021). "Facebook changes its company name to Meta". CNN Business. Retrieved May 7, 2022.
  18. "Meta's new AI just predicted the shape of 600 million proteins in 2 weeks". Live Science. November 4, 2022.
  19. "Inside the Lab: Building for the Metaverse With AI". Meta. 2022-02-23. Retrieved 2022-05-08.
  20. "Meta AI Research Topic - Computer Vision". ai.facebook.com. Retrieved 2022-05-08.
  21. "Improving scene understanding through panoptic segmentation". ai.facebook.com. Retrieved 2022-05-08.
  22. Shah, Meet; Chen, Xinlei; Rohrbach, Marcus; Parikh, Devi (2019-02-14). "Cycle-Consistency for Robust Visual Question Answering". arXiv: 1902.05660 [cs.CV].
  23. "Meta AI Research Topic - Natural Language Processing". ai.facebook.com. Retrieved 2022-05-08.
  24. Lample, Guillaume; Ott, Myle; Conneau, Alexis; Denoyer, Ludovic; Ranzato, Marc'Aurelio (2018-08-13). "Phrase-Based & Neural Unsupervised Machine Translation". arXiv: 1804.07755 [cs.CL].
  25. Conneau, Alexis; Lample, Guillaume; Rinott, Ruty; Williams, Adina; Bowman, Samuel R.; Schwenk, Holger; Stoyanov, Veselin (2018-09-13). "XNLI: Evaluating Cross-lingual Sentence Representations". arXiv: 1809.05053 [cs.CL].
  26. See, Abigail; Roller, Stephen; Kiela, Douwe; Weston, Jason (2019-04-10). "What makes a good conversation? How controllable attributes affect human judgments". arXiv: 1902.08654 [cs.CL].
  27. Shuster, Kurt; Humeau, Samuel; Hu, Hexiang; Bordes, Antoine; Weston, Jason (2019-03-20). "Engaging Image Captioning Via Personality". arXiv: 1810.10665 [cs.CV].
  28. Fan, Angela; Lewis, Mike; Dauphin, Yann (2018-05-13). "Hierarchical Neural Story Generation". arXiv: 1805.04833 [cs.CL].
  29. "Open-sourcing PyText for faster NLP development". Engineering at Meta. 2018-12-14. Retrieved 2022-05-08.
  30. "Introducing LLaMA: A foundational, 65-billion-parameter language model". ai.facebook.com. Retrieved 2023-02-26.
  31. "Meta AI Research Topic - Ranking & Recommendations". ai.facebook.com. Retrieved 2022-05-08.
  32. "Open-sourcing ReAgent, a modular, end-to-end platform for building reasoning systems". ai.facebook.com. Retrieved 2022-05-08.
  33. "Meta AI Research Topic - Systems Research". ai.facebook.com. Retrieved 2022-05-08.
  34. "Meta AI Research Topic - Theory". ai.facebook.com. Retrieved 2022-05-08.
  35. "MTIA v1: Meta's first-generation AI inference accelerator". ai.facebook.com. Retrieved 2023-06-07.
  36. "Meta Training Inference Accelerator (MTIA) Explained". encord.com. Retrieved 2023-06-07.
  37. Peters, Jay (2023-05-19). "Meta is working on a new chip for AI". The Verge. Retrieved 2023-06-07.
  38. UBB, Ajit (May 2, 2024). "How to Turn OFF Meta AI Facebook". UBB.