Pearl growing is a search strategy used in information literacy; the name is a metaphor for the way a grain of sand accumulates layers to become a pearl. It is also called "snowballing" [1] , alluding to the way a rolling snowball grows larger by accumulating snow. In this context, the term refers to the process of using one information item (such as a subject term or citation) to find content that yields further information items. This search strategy is most successfully employed at the beginning of the research process, as the searcher uncovers new pearls about his or her topic.
Citation pearl growing is the act of using one relevant source, or citation, to find more relevant sources on a topic. The searcher usually begins with a document that matches the topic or information need. From this document, the searcher can extract other keywords, descriptors, and themes to use in a subsequent search. [2] Citation pearl growing is a popular search and retrieval method among librarians. [3]
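The keyword-harvesting step described above can be sketched in a few lines of Python. This is a simplified illustration: the stopword list and the seed text are invented for the example, and real tools use richer term-extraction methods.

```python
import re
from collections import Counter

# A tiny, invented stopword list for illustration only.
STOPWORDS = {"the", "of", "a", "an", "in", "on", "to", "and", "is", "for", "by", "with"}

def harvest_keywords(document_text, top_n=5):
    """Extract the most frequent substantive terms from a seed document.

    These candidate terms can seed the next round of searching,
    as in citation pearl growing."""
    words = re.findall(r"[a-z]+", document_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [term for term, _ in counts.most_common(top_n)]

# An invented seed abstract standing in for the searcher's starting document.
seed = ("Pearl growing uses a relevant citation to locate further citations. "
        "Each citation yields descriptors, and each descriptor seeds a new search.")
print(harvest_keywords(seed))
```

In a real workflow, the harvested terms would be mapped to a database's own descriptors before the follow-up search.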
Subject pearl growing is a strategy used in an electronic database that has subject or keyword descriptors. By clicking on one subject, the searcher is able to find other related subjects and subdivisions that may or may not be useful to the search.
Searchers use the pearl growing technique when surfing the Internet. Using the theory that websites that link to each other are similar, a searcher can move from site to site, collecting information. Ramer (2005) suggests pearl growing by using the pearl as a search term in search engines or even in the URL.
In systematic literature reviews, pearl growing is a technique used to ensure all relevant articles are included. Pearl growing involves identifying a primary article that meets the inclusion criteria for the review. From this primary article, the researcher works backwards to find all the articles cited in the bibliography and checks them for eligibility for inclusion in the review. The researcher then works forwards to search for any articles that have cited the primary article. It is estimated that up to 51% of references in a systematic review are identified by pearl growing. [4] There is evidence that using pearl growing for systematic reviews is a more comprehensive approach and more likely to identify all relevant articles compared to online database searches. [5]
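The backward and forward passes described above can be sketched as a traversal over a citation graph. The `cites` mapping below is invented toy data, not a real bibliography:

```python
# Toy citation data: each paper maps to the list of papers it cites.
# All identifiers are invented for illustration.
cites = {
    "pearl2001": ["seed1990", "seed1995"],
    "review2010": ["pearl2001", "seed1995"],
    "study2015": ["pearl2001"],
    "seed1990": [],
    "seed1995": [],
}

def backward_pearl_growing(primary):
    """Backward pass: every article cited in the primary article's bibliography."""
    return set(cites.get(primary, []))

def forward_pearl_growing(primary):
    """Forward pass: every article that cites the primary article."""
    return {paper for paper, refs in cites.items() if primary in refs}

print(backward_pearl_growing("pearl2001"))  # {'seed1990', 'seed1995'}
print(forward_pearl_growing("pearl2001"))   # {'review2010', 'study2015'}
```

In practice the forward pass requires a citation index such as Scopus or Semantic Scholar, since a paper's own bibliography cannot reveal who later cited it.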
Pearl growing, when applied to scientific literature, may also be referred to as citation mining or snowballing.
Internet research is the practice of using Internet information, especially free information on the World Wide Web, or Internet-based resources in research.
PubMed is a free search engine accessing primarily the MEDLINE database of references and abstracts on life sciences and biomedical topics. The United States National Library of Medicine (NLM) at the National Institutes of Health maintains the database as part of the Entrez system of information retrieval.
Scopus is Elsevier's abstract and citation database, launched in 2004. Scopus covers 36,377 titles from approximately 11,678 publishers, of which 34,346 are peer-reviewed journals in top-level subject fields: life sciences, social sciences, physical sciences, and health sciences. It covers three types of sources: book series, journals, and trade journals. All journals covered in the Scopus database are reviewed each year for sufficiently high quality according to four numerical quality measures for each title: h-index, CiteScore, SJR, and SNIP. Scopus also allows searches of a dedicated patent database, LexisNexis, albeit with limited functionality.
PsycINFO is a database of abstracts of literature in the field of psychology. It is produced by the American Psychological Association and distributed on the association's APA PsycNET and through third-party vendors. It is the electronic version of the now-ceased Psychological Abstracts. In 2000, it absorbed PsycLIT which had been published on CD-ROM.
Medical Subject Headings (MeSH) is a comprehensive controlled vocabulary for the purpose of indexing journal articles and books in the life sciences. It serves as a thesaurus that facilitates searching. Created and updated by the United States National Library of Medicine (NLM), it is used by the MEDLINE/PubMed article database and by NLM's catalog of book holdings. MeSH is also used by the ClinicalTrials.gov registry to classify which diseases are studied by trials registered in ClinicalTrials.gov.
In text retrieval, full-text search refers to techniques for searching a single computer-stored document or a collection in a full-text database. Full-text search is distinguished from searches based on metadata or on parts of the original texts represented in databases.
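A minimal illustration of the full-text approach is an inverted index, which maps each word to the set of documents containing it; a query then intersects those sets. The documents below are invented for the example:

```python
from collections import defaultdict

# Invented toy document collection, keyed by document id.
docs = {
    1: "pearl growing is a search strategy",
    2: "full text search scans the whole document",
    3: "metadata search uses catalog records",
}

# Build an inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return ids of documents containing every word of the query."""
    sets = [index[word] for word in query.split()]
    return set.intersection(*sets) if sets else set()

print(search("search"))     # {1, 2, 3}
print(search("full text"))  # {2}
```

A metadata search, by contrast, would consult only fields such as title or author rather than the full body text.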
A sequence profiling tool in bioinformatics is a type of software that presents information related to a genetic sequence, gene name, or keyword input. Such tools generally take a query such as a DNA, RNA, or protein sequence or keyword and search one or more databases for information related to that sequence. Summaries and aggregate results are provided in a standardized format describing information that would otherwise have required visits to many smaller sites or direct literature searches to compile. Many sequence profiling tools are software portals or gateways that simplify the process of finding information about a query in the large and growing number of bioinformatics databases. Access to these kinds of tools is either web-based or through locally downloadable executables.
A search engine results page (SERP) is the page displayed by a search engine in response to a query by a user. The main component of the SERP is the listing of results returned by the search engine in response to a keyword query.
In information retrieval, an index term is a term that captures the essence of the topic of a document. Index terms make up a controlled vocabulary for use in bibliographic records. They are an integral part of bibliographic control, which is the function by which libraries collect, organize and disseminate documents. They are used as keywords to retrieve documents in an information system, for instance, a catalog or a search engine. A popular form of keywords on the web are tags, which are directly visible and can be assigned by non-experts. Index terms can consist of a word, phrase, or alphanumerical term. They are created by analyzing the document either manually with subject indexing or automatically with automatic indexing or more sophisticated methods of keyword extraction. Index terms can either come from a controlled vocabulary or be freely assigned.
Keyword research is a practice search engine optimization (SEO) professionals use to find and research the search terms that users enter into search engines when looking for products, services, or general information. Keywords are related to the queries users ask of search engines.
Subject indexing is the act of describing or classifying a document by index terms, keywords, or other symbols in order to indicate what different documents are about, to summarize their contents or to increase findability. In other words, it is about identifying and describing the subject of documents. Indexes are constructed, separately, on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents within a field of knowledge.
A review article is an article that summarizes the current state of understanding on a topic within a certain discipline. A review article is generally considered a secondary source since it may analyze and discuss the method and conclusions in previously published studies. It resembles a survey article or, in news publishing, overview article, which also surveys and summarizes previously published primary and secondary sources, instead of reporting new facts and results. Survey articles are however considered tertiary sources, since they do not provide additional analysis and synthesis of new conclusions. A review of such sources is often referred to as a tertiary review.
A concept search is an automated information retrieval method that is used to search electronically stored unstructured text for information that is conceptually similar to the information provided in a search query. In other words, the ideas expressed in the information retrieved in response to a concept search query are relevant to the ideas contained in the text of the query.
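A very simplified stand-in for conceptual matching, sketched here with invented documents, is to compare term-frequency vectors by cosine similarity, so that documents are ranked by overall term overlap rather than by exact keyword matches. Real concept search systems go further and use semantic models such as latent semantic analysis or embeddings:

```python
import math
from collections import Counter

def vector(text):
    """Represent a text as a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented example texts: doc_a shares vocabulary with the query, doc_b does not.
query = vector("finding scholarly articles")
doc_a = vector("locating scholarly articles in databases")
doc_b = vector("growing pearls in oysters")
print(cosine(query, doc_a) > cosine(query, doc_b))  # True
```

The graded similarity score is what lets such systems return relevant documents that share no exact phrase with the query.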
Projekt Dyabola is a software for creating and browsing bibliographic data and image collections, specifically targeted to the humanities community. The program is built and maintained by the Biering & Brinkmann company of Germany, and access to a web version is available through subscription. The service is available in six languages.
Reverse image search is a content-based image retrieval (CBIR) query technique in which the user provides the CBIR system with a sample image that the system then bases its search upon; in terms of information retrieval, the sample image itself formulates the query. In particular, reverse image search is characterized by a lack of search terms, which removes the need for a user to guess at keywords or terms that may or may not return a correct result. Reverse image search also allows users to discover content related to a specific sample image, gauge the popularity of an image, and discover manipulated versions and derivative works.
Automatic indexing is the computerized process of scanning large volumes of documents against a controlled vocabulary, taxonomy, thesaurus, or ontology and using those controlled terms to quickly and effectively index large electronic document repositories. The keywords are applied by training a system on the rules that determine which words to match; additional factors such as syntax, usage, proximity, and other algorithms are taken into account depending on the system and what is required for indexing, and Boolean statements are used to gather and capture the indexing information from the text. As the number of documents grows exponentially with the proliferation of the Internet, automatic indexing becomes essential to maintaining the ability to find relevant information in a sea of irrelevant information. Natural language systems train an indexer using seven different methods to cope with this volume: morphological, lexical, syntactic, numerical, phraseological, semantic, and pragmatic. Each of these looks at different parts of speech and terms to build a domain for the specific information being indexed.
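The core matching step of automatic indexing can be sketched as scanning document text against a small controlled vocabulary. The vocabulary, its variant forms, and the sample text below are all invented for illustration; production systems use far richer linguistic rules:

```python
import re

# An invented toy controlled vocabulary: preferred term -> variant forms.
CONTROLLED_VOCABULARY = {
    "information retrieval": ["information retrieval", "IR system"],
    "indexing": ["indexing", "index term", "indexed"],
    "thesaurus": ["thesaurus", "thesauri"],
}

def auto_index(text):
    """Assign every controlled term whose variant forms occur in the text."""
    lowered = text.lower()
    assigned = set()
    for preferred, variants in CONTROLLED_VOCABULARY.items():
        if any(re.search(re.escape(v.lower()), lowered) for v in variants):
            assigned.add(preferred)
    return assigned

sample = "The thesaurus supports indexing of documents for information retrieval."
print(auto_index(sample))
```

Mapping variant forms back to a single preferred term is what keeps the resulting index consistent across documents that use different wording.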
A citation graph, in information science and bibliometrics, is a directed graph that describes the citations within a collection of documents.
Semantic Scholar is a research tool for scientific literature powered by artificial intelligence. It is developed at the Allen Institute for AI and was publicly released in November 2015. Semantic Scholar uses modern techniques in natural language processing to support the research process, for example by providing automatically generated summaries of scholarly papers. The Semantic Scholar team is actively researching the use of artificial intelligence in natural language processing, machine learning, human–computer interaction, and information retrieval.
In software engineering, a tertiary review is a systematic review of systematic reviews; it is also referred to as a tertiary study in the software engineering literature. In medicine, the more commonly used term is umbrella review.