Learning to rank [1] or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems. [2] Training data may, for example, consist of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The goal of constructing the ranking model is to rank new, unseen lists in a similar way to rankings in the training data.
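As a minimal illustration, such training data can be represented as per-query lists of documents with graded relevance judgments (the queries, document identifiers and grades below are made up):

```python
# Hypothetical training data for learning to rank: each query is associated with
# a list of candidate documents and a graded relevance judgment
# (e.g. 0 = not relevant, 1 = partially relevant, 2 = highly relevant).
training_data = {
    "cheap flights to tokyo": [
        ("doc_17", 2),   # highly relevant
        ("doc_42", 1),   # partially relevant
        ("doc_93", 0),   # not relevant
    ],
    "python sort list of tuples": [
        ("doc_05", 2),
        ("doc_88", 0),
    ],
}

# The labels induce a partial order within each list; the goal of the learned
# ranking model is to reproduce this ordering on unseen query-document lists.
```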
Ranking is a central part of many information retrieval problems, such as document retrieval, collaborative filtering, sentiment analysis, and online advertising.
A possible architecture of a machine-learned search engine is shown in the accompanying figure.
Training data consists of queries and documents matching them, together with the relevance degree of each match. It may be prepared manually by human assessors (or raters, as Google calls them), who check results for some queries and determine the relevance of each result. It is not feasible to check the relevance of all documents, so typically a technique called pooling is used: only the top few documents, retrieved by some existing ranking models, are checked. This technique may introduce selection bias. Alternatively, training data may be derived automatically by analyzing clickthrough logs (i.e. search results which got clicks from users), [3] query chains, [4] or features of search engines such as Google's (since-discontinued) SearchWiki. Clickthrough logs can be biased by the tendency of users to click on the top search results on the assumption that they are already well-ranked.
Training data is used by a learning algorithm to produce a ranking model which computes the relevance of documents for actual queries.
Typically, users expect a search query to complete in a short time (such as a few hundred milliseconds for web search), which makes it impossible to evaluate a complex ranking model on each document in the corpus, and so a two-phase scheme is used. [5] First, a small number of potentially relevant documents are identified using simpler retrieval models which permit fast query evaluation, such as the vector space model, Boolean model, weighted AND, [6] or BM25. This phase is called top-k document retrieval, and many heuristics have been proposed in the literature to accelerate it, such as using a document's static quality score and tiered indexes. [7] In the second phase, a more accurate but computationally expensive machine-learned model is used to re-rank these documents.
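A minimal sketch of this two-phase scheme; the bm25_score and reranker callables are hypothetical stand-ins for the fast first-phase model and the machine-learned second-phase model:

```python
def retrieve_and_rerank(query, inverted_index, bm25_score, reranker, k=1000, n=10):
    """Two-phase ranking: cheap candidate retrieval followed by ML re-ranking."""
    # Phase 1: score only documents containing at least one query term,
    # using a fast model such as BM25, and keep the top-k candidates.
    candidates = set()
    for term in query.split():
        candidates.update(inverted_index.get(term, []))
    top_k = sorted(candidates, key=lambda doc: bm25_score(query, doc), reverse=True)[:k]

    # Phase 2: apply the expensive machine-learned model only to the k candidates
    # and return the n best documents.
    return sorted(top_k, key=lambda doc: reranker(query, doc), reverse=True)[:n]
```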
Learning to rank algorithms have also been applied in areas other than information retrieval, such as machine translation, computational biology, and recommender systems.
For the convenience of MLR algorithms, query-document pairs are usually represented by numerical vectors, which are called feature vectors. Such an approach is sometimes called bag of features and is analogous to the bag of words model and vector space model used in information retrieval for representation of documents.
Components of such vectors are called features, factors or ranking signals. They may be divided into three groups (features from document retrieval are shown as examples): query-independent or static features, which depend only on the document and not on the query (for example, PageRank or document length) and can be precomputed at index time; query-dependent or dynamic features, which depend on both the document and the query (for example, TF-IDF or BM25 scores); and query-level features, which depend only on the query (for example, the number of words in the query).
Some examples of features used in the well-known LETOR dataset are the TF, TF-IDF, BM25 and language-model scores of a document's zones (title, body, anchor text, URL) for a given query, the lengths and IDF sums of those zones, and the document's PageRank, HITS ranks and their variants.
Selecting and designing good features is an important area in machine learning, which is called feature engineering.
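For illustration, the sketch below builds a small feature vector for one query-document pair; the particular features (term frequencies, a TF-IDF sum, document length, a PageRank-like static score, query length) are illustrative choices, not the LETOR definitions:

```python
import math

def query_document_features(query_terms, doc_terms, doc_static_score, idf):
    """Build an illustrative feature vector for one query-document pair."""
    tf = {t: doc_terms.count(t) for t in query_terms}           # query-dependent counts
    return [
        sum(tf.values()),                                        # total term frequency
        sum(tf[t] * idf.get(t, 0.0) for t in query_terms),       # TF-IDF sum
        len(doc_terms),                                          # document length (query-independent)
        math.log1p(doc_static_score),                            # static quality score, e.g. PageRank-like
        len(query_terms),                                        # query-level feature
    ]
```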
There are several measures (metrics) which are commonly used to judge how well an algorithm is doing on training data and to compare the performance of different MLR algorithms. Often a learning-to-rank problem is reformulated as an optimization problem with respect to one of these metrics.
Examples of ranking quality measures include mean average precision (MAP), discounted cumulative gain (DCG) and its normalized variant NDCG, precision@n and NDCG@n, mean reciprocal rank (MRR), Kendall's tau and Spearman's rho.
DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used. [11] Other metrics such as MAP, MRR and precision are defined only for binary judgments.
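A minimal sketch of DCG and NDCG over a list of graded relevance labels, using the common 2^rel - 1 gain and log2 position discount (other gain and discount choices are also in use):

```python
import math

def dcg(relevances, k=None):
    """Discounted cumulative gain of a ranked list of graded relevance labels."""
    rels = relevances[:k] if k else relevances
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(rels))

def ndcg(relevances, k=None):
    """DCG normalized by the ideal DCG (labels sorted from most to least relevant)."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Example: a ranking with labels [3, 2, 3, 0, 1] scored against its ideal ordering.
print(ndcg([3, 2, 3, 0, 1]))
```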
Several new evaluation metrics have recently been proposed which claim to model the user's satisfaction with search results better than the DCG metric, notably expected reciprocal rank (ERR) and Yandex's pfound.
Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document, than after a less relevant document.
Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning to rank problems in his book Learning to Rank for Information Retrieval. [1] He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approaches. In practice, listwise approaches often outperform pairwise and pointwise approaches. This statement was further supported by a large-scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets. [14]
In this section, without further notice, $x$ denotes an object to be evaluated (for example, a document or an image), $f(x)$ denotes a single-value hypothesis, $h(\cdot)$ denotes a bi-variate or multi-variate function, and $L(\cdot)$ denotes the loss function.
In this case, it is assumed that each query-document pair in the training data has a numerical or ordinal score. Then the learning-to-rank problem can be approximated by a regression problem — given a single query-document pair, predict its score. Formally speaking, the pointwise approach aims at learning a function $f(x)$ predicting the real-value or ordinal score of a document $x$ using the loss function $L(f; x_j, y_j)$.
A number of existing supervised machine learning algorithms can be readily used for this purpose. Ordinal regression and classification algorithms can also be used in the pointwise approach when they are used to predict the score of a single query-document pair and it takes a small, finite number of values.
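A minimal sketch of the pointwise approach using an off-the-shelf regressor (scikit-learn's GradientBoostingRegressor is used here purely as an example; the toy feature vectors and grades are hypothetical):

```python
from sklearn.ensemble import GradientBoostingRegressor

# X: feature vectors of query-document pairs, y: numerical relevance grades.
X = [[0.9, 1.2, 300, 0.4], [0.1, 0.3, 120, 0.2], [0.7, 0.8, 250, 0.9]]
y = [2, 0, 1]

model = GradientBoostingRegressor().fit(X, y)

# At query time, candidate documents are ranked by their predicted scores.
scores = model.predict([[0.8, 1.0, 280, 0.5], [0.2, 0.1, 90, 0.1]])
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```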
In this case, the learning-to-rank problem is approximated by a classification problem — learning a binary classifier that can tell which document is better in a given pair of documents. The classifier takes two documents as its input, and the goal is to minimize a loss function $L(h; x_u, x_v, y_{u,v})$. The loss function typically reflects the number and magnitude of inversions in the induced ranking.
In many cases, the binary classifier is implemented with a scoring function $f(x)$. As an example, RankNet [15] adopts a probability model and defines the estimated probability $P_{u,v}(f)$ that document $x_u$ has higher quality than $x_v$ as

$$P_{u,v}(f) = \operatorname{CDF}\bigl(f(x_u) - f(x_v)\bigr),$$

where $\operatorname{CDF}(\cdot)$ is a cumulative distribution function, for example, the standard logistic CDF, i.e.

$$\operatorname{CDF}(x) = \frac{1}{1 + \exp(-x)}.$$
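A minimal sketch of the resulting pairwise (RankNet-style) cross-entropy loss for a single document pair; the scoring function below is an arbitrary stand-in for the learned model:

```python
import math

def ranknet_pair_loss(f, x_u, x_v, p_target=1.0):
    """Cross-entropy loss for one document pair under a RankNet-style probability model.

    f        -- scoring function, a stand-in for the learned model
    x_u, x_v -- feature vectors of the two documents
    p_target -- 1.0 if x_u is known to be better, 0.0 if worse, 0.5 if tied
    """
    # Estimated probability that x_u beats x_v: logistic CDF of the score difference.
    p = 1.0 / (1.0 + math.exp(-(f(x_u) - f(x_v))))
    p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
    return -p_target * math.log(p) - (1.0 - p_target) * math.log(1.0 - p)

# Example with a trivial linear scoring function.
f = lambda x: 2.0 * x[0] + 0.5 * x[1]
print(ranknet_pair_loss(f, [1.0, 0.2], [0.3, 0.1]))
```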
These algorithms try to directly optimize the value of one of the above evaluation measures, averaged over all queries in the training data. This is often difficult in practice because most evaluation measures are not continuous functions with respect to the ranking model's parameters, so continuous approximations or bounds on the evaluation measures have to be used; the SoftRank algorithm is an example of this approach. [16] LambdaMART is a pairwise algorithm which has been empirically shown to approximate listwise objective functions. [17]
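As an illustration of how such a metric-driven weighting works, the sketch below computes the |ΔNDCG| swap weight that LambdaRank/LambdaMART-style methods multiply into the pairwise gradient; the gain and discount functions mirror the common NDCG definition, and the example labels are arbitrary:

```python
import math

def delta_ndcg(relevances, i, j):
    """|ΔNDCG| obtained by swapping the documents at ranked positions i and j.

    LambdaRank/LambdaMART use this quantity to weight the pairwise (RankNet-style)
    gradients, which empirically steers the model toward the listwise NDCG objective.
    """
    def gain(rel):
        return 2 ** rel - 1
    def discount(pos):
        return 1.0 / math.log2(pos + 2)
    ideal = sum(gain(r) * discount(p) for p, r in enumerate(sorted(relevances, reverse=True)))
    change = (gain(relevances[i]) - gain(relevances[j])) * (discount(i) - discount(j))
    return abs(change) / ideal if ideal > 0 else 0.0

# Example: the NDCG change from swapping positions 0 and 2 in a ranking with labels [0, 2, 3, 1].
print(delta_ndcg([0, 2, 3, 1], 0, 2))
```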
A partial list of published learning-to-rank algorithms is shown below with years of first publication of each method:
Year | Name | Type | Notes |
---|---|---|---|
1989 | OPRF [18] | pointwise | Polynomial regression (instead of machine learning, this work refers to pattern recognition, but the idea is the same). |
1992 | SLR [19] | pointwise | Staged logistic regression. |
1994 | NMOpt [20] | listwise | Non-Metric Optimization. |
1999 | MART (Multiple Additive Regression Trees) [21] | pairwise | |
2000 | Ranking SVM (RankSVM) | pairwise | A more recent exposition is given in [3], which describes an application to ranking using clickthrough logs. |
2001 | Pranking | pointwise | Ordinal regression. |
2003 | RankBoost | pairwise | |
2005 | RankNet | pairwise | |
2006 | IR-SVM [22] | pairwise | Ranking SVM with query-level normalization in the loss function. |
2006 | LambdaRank | pairwise/listwise | RankNet in which pairwise loss function is multiplied by the change in the IR metric caused by a swap. |
2007 | AdaRank [23] | listwise | |
2007 | FRank | pairwise | Based on RankNet, uses a different loss function - fidelity loss. |
2007 | GBRank | pairwise | |
2007 | ListNet | listwise | |
2007 | McRank | pointwise | |
2007 | QBRank | pairwise | |
2007 | RankCosine [24] | listwise | |
2007 | RankGP [25] | listwise | |
2007 | RankRLS | pairwise | Regularized least-squares based ranking. The work is extended in [26] to learning to rank from general preference graphs. |
2007 | SVMmap | listwise | |
2008 | LambdaSMART/LambdaMART | pairwise/listwise | Winning entry in the Yahoo Learning to Rank competition in 2010, using an ensemble of LambdaMART models. Based on MART (1999). [27] "LambdaSMART" stands for Lambda-submodel-MART; LambdaMART is the case with no submodel. |
2008 | ListMLE [28] | listwise | Based on ListNet. |
2008 | PermuRank [29] | listwise | |
2008 | SoftRank [30] | listwise | |
2008 | Ranking Refinement [31] | pairwise | A semi-supervised approach to learning to rank that uses Boosting. |
2008 | SSRankBoost [32] | pairwise | An extension of RankBoost to learn with partially labeled data (semi-supervised learning to rank). |
2008 | SortNet [33] | pairwise | SortNet, an adaptive ranking algorithm which orders objects using a neural network as a comparator. |
2009 | MPBoost [34] | pairwise | Magnitude-preserving variant of RankBoost. The idea is that the more unequal the labels of a pair of documents are, the harder the algorithm should try to rank them. |
2009 | BoltzRank | listwise | Unlike earlier methods, BoltzRank produces a ranking model that looks during query time not just at a single document, but also at pairs of documents. |
2009 | BayesRank | listwise | A method that combines the Plackett-Luce model and a neural network to minimize the expected Bayes risk, related to NDCG, from a decision-making perspective. |
2010 | NDCG Boost [35] | listwise | A boosting approach to optimize NDCG. |
2010 | GBlend | pairwise | Extends GBRank to the learning-to-blend problem of jointly solving multiple learning-to-rank problems with some shared features. |
2010 | IntervalRank | pairwise & listwise | |
2010 | CRR [36] | pointwise & pairwise | Combined Regression and Ranking. Uses stochastic gradient descent to optimize a linear combination of a pointwise quadratic loss and a pairwise hinge loss from Ranking SVM. |
2014 | LCR [37] | pairwise | Applied local low-rank assumption on collaborative ranking. Received best student paper award at WWW'14. |
2015 | FaceNet | pairwise | Ranks face images with the triplet metric via deep convolutional network. |
2016 | XGBoost | pairwise | Supports various ranking objectives and evaluation metrics (see the code sketch following this table). |
2017 | ES-Rank [38] | listwise | Evolutionary Strategy Learning to Rank technique with 7 fitness evaluation metrics. |
2018 | DLCM [39] | listwise | A multi-variate ranking function that encodes multiple items from an initial ranked list (local context) with a recurrent neural network and creates the result ranking accordingly. |
2018 | PolyRank [40] | pairwise | Learns simultaneously the ranking and the underlying generative model from pairwise comparisons. |
2018 | FATE-Net/FETA-Net [41] | listwise | End-to-end trainable architectures, which explicitly take all items into account to model context effects. |
2019 | FastAP [42] | listwise | Optimizes Average Precision to learn deep embeddings. |
2019 | Mulberry | listwise & hybrid | Learns ranking policies maximizing multiple metrics across the entire dataset. |
2019 | DirectRanker | pairwise | Generalisation of the RankNet architecture. |
2019 | GSF [43] | listwise | A permutation-invariant multi-variate ranking function that encodes and ranks items with groupwise scoring functions built with deep neural networks. |
2020 | RaMBO [44] | listwise | Optimizes rank-based metrics using blackbox backpropagation. [45] |
2020 | PRM [46] | pairwise | Transformer network encoding both the dependencies among items and the interactions between the user and items. |
2020 | SetRank [47] | listwise | A permutation-invariant multi-variate ranking function that encodes and ranks items with self-attention networks. |
2021 | PiRank [48] | listwise | Differentiable surrogates for ranking that can exactly recover the desired metrics and scale favourably to large list sizes, significantly improving internet-scale benchmarks. |
2022 | SAS-Rank | listwise | Combining Simulated Annealing with Evolutionary Strategy for implicit and explicit learning to rank from relevance labels. |
2022 | VNS-Rank | listwise | Variable neighbourhood search in two novel methodologies for learning to rank. |
2022 | VNA-Rank | listwise | Combining Simulated Annealing with Variable Neighbourhood Search for Learning to Rank. |
2023 | GVN-Rank | listwise | Combining Gradient Ascent with Variable Neighbourhood Search for Learning to Rank. |
Note: as most supervised learning-to-rank algorithms can be applied to the pointwise, pairwise and listwise cases, only those methods which are specifically designed with ranking in mind are shown above.
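As a concrete illustration of the XGBoost entry above, the sketch below trains a pairwise ranker through xgboost's scikit-learn interface; the feature matrix, labels and query grouping are hypothetical toy data:

```python
import numpy as np
from xgboost import XGBRanker

# Toy data: 6 query-document feature vectors, graded relevance labels, and
# group sizes stating that the first 3 rows belong to query 1 and the last 3 to query 2.
X = np.random.rand(6, 4)
y = np.array([2, 1, 0, 1, 0, 0])
group = [3, 3]

ranker = XGBRanker(objective="rank:pairwise", n_estimators=50)
ranker.fit(X, y, group=group)

# Documents of a new query are ranked by sorting on the predicted scores.
scores = ranker.predict(np.random.rand(3, 4))
order = np.argsort(-scores)
```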
Norbert Fuhr introduced the general idea of MLR in 1992, describing learning approaches in information retrieval as a generalization of parameter estimation; [49] a specific variant of this approach (using polynomial regression) had been published by him three years earlier. [18] Bill Cooper proposed logistic regression for the same purpose in 1992 [19] and used it with his Berkeley research group to train a successful ranking function for TREC. Manning et al. [50] suggest that these early works achieved limited results in their time due to little available training data and poor machine learning techniques.
Several conferences, such as NeurIPS, SIGIR and ICML, have had workshops devoted to the learning-to-rank problem since the mid-2000s.
Commercial web search engines began using machine-learned ranking systems in the 2000s. One of the first search engines to use it was AltaVista (its technology was later acquired by Overture, and then Yahoo), which launched a gradient-boosting-trained ranking function in April 2003. [51] [52]
Bing's search is said to be powered by the RankNet algorithm, [53] which was invented at Microsoft Research in 2005.
In November 2009 the Russian search engine Yandex announced [54] that it had significantly increased its search quality due to the deployment of a new proprietary MatrixNet algorithm, a variant of gradient boosting which uses oblivious decision trees. [55] Yandex has also sponsored a machine-learned ranking competition, "Internet Mathematics 2009", [56] based on its own search engine's production data. Yahoo announced a similar competition in 2010. [57]
As of 2008, Google's Peter Norvig denied that their search engine exclusively relies on machine-learned ranking. [58] Cuil's CEO, Tom Costello, suggested that they preferred hand-built models because these can outperform machine-learned models when measured against metrics like click-through rate or time on landing page, since machine-learned models "learn what people say they like, not what people actually like". [59]
In January 2017, the technology was included in the open source search engine Apache Solr. [60] It is also available in the open source OpenSearch and the source-available Elasticsearch. [61] [62] These implementations make learning to rank widely accessible for enterprise search.
Similar to recognition applications in computer vision, recent neural-network-based ranking algorithms have also been found to be susceptible to covert adversarial attacks, both on the candidates and on the queries. [63] With small perturbations imperceptible to human beings, the ranking order can be arbitrarily altered. In addition, model-agnostic transferable adversarial examples have been shown to be possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations. [63] [64]
Conversely, the robustness of such ranking systems can be improved via adversarial defenses such as the Madry defense. [65]
Information retrieval (IR) in computing and information science is the task of identifying and retrieving information system resources that are relevant to an information need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based on full-text or other content-based indexing. Information retrieval is the science of searching for information in a document, searching for documents themselves, and also searching for the metadata that describes data, and for databases of texts, images or sounds.
A recommender system, or a recommendation system, is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user. Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.
Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text. A matrix containing word counts per document is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.
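A minimal sketch of this pipeline with NumPy: build a term-document count matrix, keep the top singular vectors of its SVD, and compare documents by cosine similarity in the reduced space (the tiny count matrix is made up):

```python
import numpy as np

# Rows = terms, columns = documents (a toy 5-term, 3-document count matrix).
counts = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 3, 1],
    [0, 1, 2],
    [1, 0, 0],
], dtype=float)

# Truncated SVD: keep the k largest singular values and vectors.
k = 2
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the k-dimensional latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(doc_vectors[0], doc_vectors[1]))   # similarity of documents 0 and 1
```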
Automatic image annotation is the process by which a computer system automatically assigns metadata in the form of captioning or keywords to a digital image. This application of computer vision techniques is used in image retrieval systems to organize and locate images of interest from a database.
In computer science, an inverted index is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents. The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. The inverted file may be the database file itself, rather than its index. It is the most popular data structure used in document retrieval systems, used on a large scale for example in search engines. Additionally, several significant general-purpose mainframe-based database management systems have used inverted list architectures, including ADABAS, DATACOM/DB, and Model 204.
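A minimal in-memory sketch of an inverted index that maps each term to the identifiers of the documents containing it, and uses it to answer a conjunctive full-text query:

```python
from collections import defaultdict

def build_inverted_index(documents):
    """documents: dict mapping doc_id -> text. Returns term -> set of doc_ids."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {1: "learning to rank", 2: "rank documents by relevance", 3: "deep learning"}
index = build_inverted_index(docs)

# Conjunctive (AND) full-text query: documents containing all query terms.
query = "learning rank"
result = set.intersection(*(index[t] for t in query.lower().split()))
print(result)   # {1}
```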
Relevance feedback is a feature of some information retrieval systems. The idea behind relevance feedback is to take the results that are initially returned from a given query, to gather user feedback, and to use information about whether or not those results are relevant to perform a new query. We can usefully distinguish between three types of feedback: explicit feedback, implicit feedback, and blind or "pseudo" feedback.
Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.
Query expansion (QE) is the process of reformulating a given query to improve retrieval performance in information retrieval operations, particularly in the context of query understanding. In the context of search engines, query expansion involves evaluating a user's input and expanding the search query to match additional documents. Query expansion involves techniques such as finding synonyms and semantically related words, stemming words to match their morphological variants, fixing spelling errors, and re-weighting the terms in the original query.
Plagiarism detection or content similarity detection is the process of locating instances of plagiarism or copyright infringement within a work or document. The widespread use of computers and the advent of the Internet have made it easier to plagiarize the work of others.
In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
Discounted cumulative gain (DCG) is a measure of ranking quality in information retrieval. It is often normalized so that it is comparable across queries, giving Normalized DCG (nDCG or NDCG). NDCG is often used to measure effectiveness of search engine algorithms and related applications. Using a graded relevance scale of documents in a search-engine result set, DCG sums the usefulness, or gain, of the results discounted by their position in the result list. NDCG is DCG normalized by the maximum possible DCG of the result set when ranked from highest to lowest gain, thus adjusting for the different numbers of relevant results for different queries.
XML retrieval, or XML information retrieval, is the content-based retrieval of documents structured with XML. As such it is used for computing relevance of XML documents.
Collaborative search engines (CSE) are Web search engines and enterprise searches within company intranets that let users combine their efforts in information retrieval (IR) activities, share information resources collaboratively using knowledge tags, and allow experts to guide less experienced people through their searches. Collaboration partners do so by providing query terms, collective tagging, adding comments or opinions, rating search results, and links clicked of former (successful) IR activities to users having the same or a related information need.
The Generalized vector space model is a generalization of the vector space model used in information retrieval. Wong et al. presented an analysis of the problems that the pairwise orthogonality assumption of the vector space model (VSM) creates. From here they extended the VSM to the generalized vector space model (GVSM).
The query likelihood model is a language model used in information retrieval. A language model is constructed for each document in the collection. It is then possible to rank each document by the probability of specific documents given a query. This is interpreted as being the likelihood of a document being relevant given a query.
In natural language processing, entity linking, also referred to as named-entity disambiguation (NED), named-entity recognition and disambiguation (NERD) or named-entity normalization (NEN), is the task of assigning a unique identity to entities mentioned in text. For example, given the sentence "Paris is the capital of France", the idea is to first identify "Paris" and "France" as named entities, and then to determine that "Paris" refers to the city of Paris and not to Paris Hilton or any other entity that could be referred to as "Paris", and that "France" refers to the country France. The entity linking task is composed of three subtasks. First, named entity recognition extracts named entities from a text. Second, for each named entity, candidates are generated from a knowledge base; this step is called candidate generation, and its main challenge is ensuring that the correct entity appears in the candidate set. Lastly, the correct entity is chosen from the candidate set; this step is called disambiguation.
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine, or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms.
ChengXiang Zhai is a computer scientist. He is a Donald Biggar Willett Professor in Engineering in the Department of Computer Science at the University of Illinois at Urbana-Champaign.
BitFunnel is the search engine indexing algorithm and a set of components used in the Bing search engine, which were made open source in 2016. BitFunnel uses bit-sliced signatures instead of an inverted index in an attempt to reduce operations cost.
Learned sparse retrieval or sparse neural search is an approach to Information Retrieval which uses a sparse vector representation of queries and documents. It borrows techniques both from lexical bag-of-words and vector embedding algorithms, and is claimed to perform better than either alone. The best-known sparse neural search systems are SPLADE and its successor SPLADE v2. Others include DeepCT, uniCOIL, EPIC, DeepImpact, TILDE and TILDEv2, Sparta, SPLADE-max, and DistilSPLADE-max.