Retrieval-augmented generation

Retrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence (Gen AI) models to retrieve and incorporate new information. [1] It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified set of documents, using this information to supplement information from its pre-existing training data. [2] This allows LLMs to use domain-specific and/or updated information. [2] [3] Use cases include providing chatbot access to internal company data or generating responses based on authoritative sources. [4]

RAG improves large language models (LLMs) by incorporating information retrieval before generating responses. Unlike traditional LLMs that rely on static training data, RAG pulls relevant text from databases, uploaded documents, or web sources. [1] According to Ars Technica, "RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process to help LLMs stick to the facts." This method helps reduce AI hallucinations, which have led to real-world issues like chatbots inventing policies or lawyers citing nonexistent legal cases. By dynamically retrieving information, RAG enables AI to provide more accurate responses without frequent retraining. [5] [1]

RAG and LLM Limitations

In June 2024, Ars Technica reported, "But LLMs aren’t humans, of course. Their training data can age quickly, particularly in more time-sensitive queries. In addition, the LLM often can’t distinguish specific sources of its knowledge, as all its training data is blended together into a kind of soup." In 2023, during its launch demonstration, Google’s Bard provided incorrect information about the James Webb Space Telescope, an error that contributed to a $100 billion decline in Alphabet’s stock value. [5]

Retrieval-Augmented Generation (RAG) is a method that allows large language models (LLMs) to retrieve and incorporate additional information before generating responses. Unlike LLMs that rely solely on pre-existing training data, RAG integrates newly available data at query time. Ars Technica states, "The beauty of RAG is that when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information." [5]

The BBC describes "prompt stuffing" as a technique within RAG, in which relevant context is inserted into a prompt to guide the model’s response. This approach provides the LLM with key information early in the prompt, encouraging it to prioritize the supplied data over pre-existing training knowledge. [6]
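As a rough illustration of this idea, the following Python sketch assembles a "stuffed" prompt by placing the supplied passages ahead of the user's question. The template wording and function name are illustrative assumptions, not a standard interface.

```python
# Minimal illustration of "prompt stuffing": retrieved context is placed
# near the start of the prompt so the model is nudged to rely on it.
def build_stuffed_prompt(context_passages: list[str], question: str) -> str:
    context_block = "\n\n".join(context_passages)
    return (
        "Use only the context below to answer the question.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_stuffed_prompt(
    ["The warranty period for model X is 24 months."],  # hypothetical retrieved passage
    "How long is the warranty for model X?",
)
print(prompt)
```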

Process

Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating an information-retrieval mechanism that allows models to access and utilize additional data beyond their original training set (indexing). This approach reduces reliance on static datasets, which can quickly become outdated. When a user submits a query, RAG uses a document retriever to search for relevant content from available sources before incorporating the retrieved information into the model’s response (retrieval). Ars Technica notes that "when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information" (augmentation). By dynamically integrating relevant data, RAG enables LLMs to generate more informed and contextually grounded responses (generation). [5] [1]

RAG key stages

Indexing

Typically, the data to be referenced is converted into LLM embeddings, numerical representations in the form of a large vector space. RAG can be used on unstructured (usually text), semi-structured, or structured data (for example knowledge graphs). [3] These embeddings are then stored in a vector database to allow for document retrieval.
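A minimal sketch of the indexing stage is shown below. A hashed bag-of-words vector stands in for a real embedding model, and a plain Python list stands in for the vector database; production systems use learned embeddings and dedicated vector stores, so the names and values here are illustrative assumptions only.

```python
import hashlib
import math

DIM = 64  # toy embedding dimensionality

def embed(text: str) -> list[float]:
    """Toy stand-in for an embedding model: hash each token into a fixed-size vector."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # normalize so dot product acts like cosine similarity

# "Vector database": a list of (embedding, original text) pairs.
documents = [
    "The James Webb Space Telescope launched in December 2021.",
    "RAG augments prompts with retrieved documents.",
]
index = [(embed(doc), doc) for doc in documents]
```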

Overview of the RAG process, combining external documents and user input into an LLM prompt to get tailored output

Retrieval

Given a user query, a document retriever is first called to select the most relevant documents that will be used to augment the query. [2] This comparison can be done using a variety of methods, which depend in part on the type of indexing used. [1] [3]
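Continuing the toy example above (reusing its embed function and index list), retrieval can be sketched as a nearest-neighbour search: the query is embedded and compared against the stored vectors, here by dot product since the toy vectors are already normalized. Real retrievers use many other comparison methods.

```python
def retrieve(query: str, index, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    q = embed(query)  # embed() and index come from the indexing sketch above
    scored = [(sum(a * b for a, b in zip(q, vec)), doc) for vec, doc in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest similarity first
    return [doc for _, doc in scored[:k]]

top_docs = retrieve("When did the James Webb telescope launch?", index, k=1)
```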

Augmentation

The model feeds this relevant retrieved information into the LLM via prompt engineering of the user's original query. [4] Newer implementations (as of 2023) can also incorporate specific augmentation modules with abilities such as expanding queries into multiple domains and using memory and self-improvement to learn from previous retrievals. [3]
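The sketch below, continuing the toy example, illustrates the augmentation step: a hypothetical query-expansion helper broadens the search, and the retrieved passages are folded into the prompt template built earlier. The expansion rule is a placeholder for the kind of augmentation module described above, not a real implementation.

```python
def expand_query(query: str) -> list[str]:
    """Hypothetical augmentation module: rephrase the query for extra retrieval passes."""
    return [query, query.replace("telescope", "observatory")]  # placeholder expansion rule

def augment(query: str, index, k: int = 2) -> str:
    passages = []
    for q in expand_query(query):
        passages.extend(retrieve(q, index, k))   # retrieve() from the sketch above
    deduped = list(dict.fromkeys(passages))      # keep order, drop duplicate passages
    return build_stuffed_prompt(deduped, query)  # prompt template from the earlier sketch
```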

Generation

Finally, the LLM can generate output based on both the query and the retrieved documents. [2] [7] Some models incorporate extra steps to improve output, such as the re-ranking of retrieved information, context selection, and fine-tuning. [3]
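Tying the toy stages together, generation can be sketched as passing the augmented prompt to the language model. The call_llm function below is a placeholder for whatever model API is actually used; re-ranking and other refinement steps are omitted.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (for example, a hosted API or a local model)."""
    return "<model output conditioned on the retrieved context>"

def rag_answer(query: str, index) -> str:
    prompt = augment(query, index)  # retrieval + augmentation sketches above
    return call_llm(prompt)         # generation

print(rag_answer("When did the James Webb telescope launch?", index))
```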

Improvements

Improvements to the basic process above can be applied at different stages in the RAG flow.

Encoder

These methods center around the encoding of text as either dense or sparse vectors. Sparse vectors, which encode the identity of a word, are typically as long as the vocabulary and consist almost entirely of zeros. Dense vectors, which encode meaning, are much smaller and contain far fewer zeros. Several enhancements can be made to the way similarities are calculated in the vector stores (databases).
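The distinction can be made concrete with a small, purely illustrative example: a sparse vector has one dimension per vocabulary entry and is almost all zeros, while a dense vector is short and filled with learned real numbers. The vocabulary and numeric values below are invented for illustration.

```python
# Sparse encoding: one dimension per vocabulary word, almost all zeros.
vocabulary = ["apple", "banana", "cat", "dog", "telescope"]  # real vocabularies are far larger

def sparse_encode(word: str) -> list[int]:
    return [1 if w == word else 0 for w in vocabulary]

sparse_vec = sparse_encode("telescope")  # [0, 0, 0, 0, 1]

# Dense encoding: a short vector of learned real numbers capturing meaning.
# These values are made up; a trained embedding model would produce them.
dense_vec = [0.12, -0.83, 0.45, 0.07]
```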

Retriever-centric methods

These methods focus on improving the quality of hits from the vector database.


Language model

Retro language model for RAG. Each Retro block consists of Attention, Chunked Cross Attention, and Feed Forward layers. Black-lettered boxes show data being changed, and blue lettering shows the algorithm performing the changes.

By redesigning the language model with the retriever in mind, a network 25 times smaller can achieve perplexity comparable to that of its much larger counterparts. [14] Because it is trained from scratch, this method (Retro) incurs the high cost of training runs that the original RAG scheme avoided. The hypothesis is that by providing domain knowledge during training, Retro needs less focus on the domain and can devote its smaller weight resources solely to language semantics. The redesigned language model is shown above.

It has been reported that Retro is not reproducible, so modifications were made to address this. The more reproducible version is called Retro++ and includes in-context RAG. [15]

Chunking

Chunking involves various strategies for breaking the data up into pieces that are then converted into vectors, so the retriever can find fine-grained details in it.

Different data styles have patterns that correct chunking can take advantage of.

Chunking strategies include fixed-length chunks with overlap, syntax-based chunks such as sentences or paragraphs, and file-format-based chunking that follows the natural structure of the file (for example, keeping code functions or HTML elements intact).
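As a sketch of the first of these strategies, fixed-length chunking with overlap can be implemented in a few lines; the chunk size and overlap below are arbitrary illustrative values.

```python
def chunk_fixed(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks that overlap, so a fact
    split across a boundary still appears whole in at least one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_fixed("RAG retrieves relevant passages and feeds them to the model. " * 20)
```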

Knowledge graphs

Rather than using documents as a source to vectorize and retrieve from, knowledge graphs can be used. One can start with a set of documents, books, or other bodies of text, and convert them to a knowledge graph using one of many methods, including language models. Once the knowledge graph is created, subgraphs can be vectorized, stored in a vector database, and used for retrieval as in plain RAG. The advantage here is that graphs have a more recognizable structure than strings of text, and this structure can help retrieve more relevant facts for generation. Sometimes this approach is called GraphRAG.[citation needed]
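A rough sketch of this idea, reusing the toy embed function from the indexing example above: triples are grouped into per-subject subgraphs, rendered as short text passages, and indexed like ordinary documents. Real GraphRAG pipelines are considerably more elaborate; the triples and grouping rule here are illustrative assumptions.

```python
# Toy knowledge graph as (subject, relation, object) triples.
triples = [
    ("JWST", "launched_in", "2021"),
    ("JWST", "operated_by", "NASA"),
    ("Hubble", "launched_in", "1990"),
]

# Group triples into per-subject "subgraphs", render each as text, and index it.
subgraphs: dict[str, list[str]] = {}
for subj, rel, obj in triples:
    subgraphs.setdefault(subj, []).append(f"{subj} {rel.replace('_', ' ')} {obj}.")

graph_index = [(embed(" ".join(facts)), " ".join(facts)) for facts in subgraphs.values()]
```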

Sometimes vector database searches can miss key facts needed to answer a user's question. One way to mitigate this is to do a traditional text search, add those results to the text chunks linked to the retrieved vectors from the vector search, and feed the combined hybrid text into the language model for generation.[citation needed]
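A minimal sketch of this hybrid approach, continuing the toy example above: a crude keyword search runs alongside the vector search, and the two result lists are merged before being passed to the model. The scoring is deliberately simple and purely illustrative.

```python
def keyword_search(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Crude keyword search: rank documents by the number of shared query words."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def hybrid_retrieve(query: str, index, documents: list[str], k: int = 2) -> list[str]:
    vector_hits = retrieve(query, index, k)          # vector search from the earlier sketch
    text_hits = keyword_search(query, documents, k)  # traditional text search
    return list(dict.fromkeys(vector_hits + text_hits))  # merge, preserving order
```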

Challenges

If the external data source is large, retrieval can be slow. [16]

RAG is not a complete solution to the problem of hallucinations in LLMs. According to Ars Technica, "It is not a direct solution because the LLM can still hallucinate around the source material in its response." [5]

While RAG improves the accuracy of large language models (LLMs), it does not eliminate all challenges. One limitation is that while RAG reduces the need for frequent model retraining, it does not remove it entirely. Additionally, LLMs may struggle to recognize when they lack sufficient information to provide a reliable response. Without specific training, models may generate answers even when they should indicate uncertainty. According to IBM, this issue can arise when the model lacks the ability to assess its own knowledge limitations. [1]

RAG systems may retrieve factually correct but misleading sources, leading to errors in interpretation. In some cases, an LLM may extract statements from a source without considering its context, resulting in an incorrect conclusion. Additionally, when faced with conflicting information, RAG models may struggle to determine which source is accurate and may combine details from multiple sources, producing responses that merge outdated and updated information in a misleading way. According to the MIT Technology Review, these issues occur because RAG systems may misinterpret the data they retrieve. [2]

References

  1. "What is retrieval-augmented generation?". IBM. 22 August 2023. Retrieved 7 March 2025.
  2. "Why Google's AI Overviews gets things wrong". MIT Technology Review. 31 May 2024. Retrieved 7 March 2025.
  3. Gao, Yunfan; Xiong, Yun; Gao, Xinyu; Jia, Kangxiang; Pan, Jinliu; Bi, Yuxi; Dai, Yi; Sun, Jiawei; Wang, Meng; Wang, Haofen (2023). "Retrieval-Augmented Generation for Large Language Models: A Survey". arXiv:2312.10997 [cs.CL].
  4. "What is RAG? - Retrieval-Augmented Generation AI Explained - AWS". Amazon Web Services, Inc. Retrieved 16 July 2024.
  5. "Can a technology called RAG keep AI models from making stuff up?". Ars Technica. 6 June 2024. Retrieved 7 March 2025.
  6. "Mitigating LLM hallucinations in text summarisation". BBC. 20 June 2024. Retrieved 7 March 2025.
  7. Lewis, Patrick; Perez, Ethan; Piktus, Aleksandra; Petroni, Fabio; Karpukhin, Vladimir; Goyal, Naman; Küttler, Heinrich; Lewis, Mike; Yih, Wen-tau; Rocktäschel, Tim; Riedel, Sebastian; Kiela, Douwe (2020). "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 9459–9474. arXiv:2005.11401.
  8. Khattab, Omar; Zaharia, Matei (2020). "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT". doi:10.1145/3397271.3401075.
  9. Formal, Thibault; Lassance, Carlos; Piwowarski, Benjamin; Clinchant, Stéphane (2021). "SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval". arXiv:2109.10086 [cs.IR].
  10. Lee, Kenton; Chang, Ming-Wei; Toutanova, Kristina (2019). "Latent Retrieval for Weakly Supervised Open Domain Question Answering" (PDF).
  11. Lin, Sheng-Chieh; Asai, Akari (2023). "How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval" (PDF).
  12. Shi, Weijia; Min, Sewon (2024). "REPLUG: Retrieval-Augmented Black-Box Language Models". pp. 8371–8384. arXiv:2301.12652. doi:10.18653/v1/2024.naacl-long.463.
  13. Ram, Ori; Levine, Yoav; Dalmedigos, Itay; Muhlgay, Dor; Shashua, Amnon; Leyton-Brown, Kevin; Shoham, Yoav (2023). "In-Context Retrieval-Augmented Language Models". Transactions of the Association for Computational Linguistics. 11: 1316–1331. arXiv:2302.00083. doi:10.1162/tacl_a_00605.
  14. Borgeaud, Sebastian; Mensch, Arthur (2021). "Improving language models by retrieving from trillions of tokens" (PDF).
  15. Wang, Boxin; Ping, Wei (2023). "Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study" (PDF).
  16. Magesh, Varun; Surani, Faiz; Dahl, Matthew; Suzgun, Mirac; Manning, Christopher D.; Ho, Daniel E. (2024-05-30). "Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools". arXiv:2405.20362 [cs.CL].