Real-time web

The real-time web is a set of technologies and practices that enable users to receive information as soon as it is published by its authors, rather than requiring that they or their software check a source periodically for updates.
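
A minimal sketch of the contrast, in TypeScript using standard browser APIs (fetch, EventSource); the endpoint URLs are hypothetical. Polling only discovers new items at the polling interval, while a pushed stream delivers each item as soon as the server publishes it:

    // 1. Polling: the client asks periodically, so updates arrive no
    //    faster than the polling interval.
    async function pollForUpdates(): Promise<void> {
      const res = await fetch("https://example.com/updates?since=latest");
      const items: string[] = await res.json();
      items.forEach((item) => console.log("polled:", item));
    }
    setInterval(pollForUpdates, 60_000); // check once a minute

    // 2. Push (Server-Sent Events): the server keeps the connection open
    //    and sends each item the moment it is published.
    const stream = new EventSource("https://example.com/updates/stream");
    stream.onmessage = (event) => console.log("pushed:", event.data);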

Difference from real-time computing

The real-time web is different from real-time computing in that there is no knowing when, or if, a response will be received. The information types transmitted this way are often short messages, status updates, news alerts, or links to longer documents. The content is often "soft" in that it is based on the social web—people's opinions, attitudes, thoughts, and interests—as opposed to hard news or facts.

History

Examples of the real-time web are Facebook's news feed and Twitter, implemented in social networking, search, and news sites. Benefits are said to include increased user engagement ("flow") and decreased server loads. In December 2009, real-time search facilities were added to Google Search.[1]

The first real-time web implementation is said to have been the WIMS true-real-time server and its web applications, deployed from 2001 to 2011 (WIMS stands for Web Interactive Management System) and based on the true-real-time web (WEB-r) model. The server side was built in WIMS++ (written in Java) and the client side in Adobe Flash (formerly Macromedia Flash). The true-real-time web model originated in 2000 at mc2labs.net, the work of an independent Italian researcher.

A problem posed by the rapid pace and huge volume of information created by real-time web technologies and practices is finding relevant information. One approach, known as real-time search, is the concept of searching for and finding information online as it is produced. Advancements in web search technology coupled with growing use of social media enable online activities to be queried as they occur. A traditional web search crawls and indexes web pages periodically, returning results based on relevance to the search query. Google Real-Time Search was available in Google Search until July 2011.[citation needed]
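
As an illustration of the difference, the sketch below (TypeScript; the message shape and index structure are assumptions made for this example) adds each incoming item to an in-memory inverted index at the moment it is produced, so a query can return it immediately rather than after the next scheduled crawl:

    interface Post { id: string; text: string; publishedAt: number }

    const posts = new Map<string, Post>();
    const invertedIndex = new Map<string, Set<string>>(); // term -> post ids

    // Called once per incoming item, as soon as it is published.
    function indexPost(post: Post): void {
      posts.set(post.id, post);
      for (const term of post.text.toLowerCase().split(/\W+/).filter(Boolean)) {
        if (!invertedIndex.has(term)) invertedIndex.set(term, new Set());
        invertedIndex.get(term)!.add(post.id);
      }
    }

    // Freshness matters in real-time search, so results are newest first.
    function search(term: string): Post[] {
      const ids = invertedIndex.get(term.toLowerCase()) ?? new Set<string>();
      return [...ids]
        .map((id) => posts.get(id)!)
        .sort((a, b) => b.publishedAt - a.publishedAt);
    }

    indexPost({ id: "1", text: "Earthquake reported downtown", publishedAt: Date.now() });
    console.log(search("earthquake")); // searchable immediately after indexing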

Related Research Articles

<span class="mw-page-title-main">Google Search</span> Search engine from Google

Google Search is a search engine provided by Google. Handling more than 3.5 billion searches per day, it has a 92% share of the global search engine market. It is also the most-visited website in the world.

A search engine is an information retrieval system designed to help find information stored on a computer system. The search results are usually presented in a list and are commonly called hits. Search engines help to minimize the time required to find information and the amount of information which must be consulted, akin to other techniques for managing information overload.

<span class="mw-page-title-main">World Wide Web</span> System of interlinked hypertext documents accessed over the Internet

The World Wide Web (WWW), commonly known as the Web, is an information system enabling documents and other web resources to be accessed over the Internet.

<span class="mw-page-title-main">Website</span> Set of related web pages served from a single web domain

A website is a collection of web pages and related content that is identified by a common domain name and published on at least one web server. Examples of notable websites are Google, Facebook, Amazon, and Wikipedia.

<span class="mw-page-title-main">Cross-site scripting</span> Computer security vulnerability

Cross-site scripting (XSS) is a type of security vulnerability that can be found in some web applications. XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy. Cross-site scripting carried out on websites accounted for roughly 84% of all security vulnerabilities documented by Symantec up until 2007. XSS effects vary in range from petty nuisance to significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner network.
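
A small illustrative sketch (TypeScript; the escaping function and the sample payload are hypothetical) of the standard mitigation: escaping user-supplied text before it is inserted into a page, so any embedded script markup is rendered as inert text instead of executing:

    // Escape the HTML metacharacters in untrusted input.
    function escapeHtml(untrusted: string): string {
      return untrusted
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
    }

    const comment = '<script>stealSessionCookie()</script>'; // attacker-controlled input
    // Unsafe:  element.innerHTML = comment;             // the script would run
    // Safer:   element.innerHTML = escapeHtml(comment); // shown as plain text
    console.log(escapeHtml(comment));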

Internet privacy involves the right or mandate of personal privacy concerning the storing, repurposing, provision to third parties, and displaying of information pertaining to oneself via the Internet. Internet privacy is a subset of data privacy. Privacy concerns have been articulated from the beginnings of large-scale computer sharing.

<span class="mw-page-title-main">Text Retrieval Conference</span> Meetings for information retrieval research

The Text REtrieval Conference (TREC) is an ongoing series of workshops focusing on a list of different information retrieval (IR) research areas, or tracks. It is co-sponsored by the National Institute of Standards and Technology (NIST) and the Intelligence Advanced Research Projects Activity, and began in 1992 as part of the TIPSTER Text program. Its purpose is to support and encourage research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies and to increase the speed of lab-to-product transfer of technology.

Push technology or server push is a style of Internet-based communication where the request for a given transaction is initiated by the publisher or central server. It is contrasted with pull/get, where the request for the transmission of information is initiated by the receiver or client.
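
A minimal sketch of server push using Server-Sent Events in TypeScript on Node.js (built-in http module only; the port and payloads are illustrative). Clients open one long-lived request, and every subsequent transmission is initiated by the server:

    import { createServer, ServerResponse } from "node:http";

    const subscribers = new Set<ServerResponse>();

    const server = createServer((req, res) => {
      // Each client request opens a long-lived event stream.
      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      });
      subscribers.add(res);
      req.on("close", () => subscribers.delete(res));
    });

    // Publisher side: a new message is pushed to every connected client
    // without any client-initiated request per message.
    function publish(message: string): void {
      for (const res of subscribers) res.write(`data: ${message}\n\n`);
    }

    server.listen(8080);
    setInterval(() => publish(`update at ${new Date().toISOString()}`), 5000);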

Federated search retrieves information from a variety of sources via a search application built on top of one or more search engines. A user makes a single query request which is distributed to the search engines, databases or other query engines participating in the federation. The federated search then aggregates the results that are received from the search engines for presentation to the user. Federated search can be used to integrate disparate information resources within a single large organization ("enterprise") or for the entire web.
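
A sketch of that fan-out and aggregation step in TypeScript (the engine interface and scoring are assumptions for illustration): the single user query is sent to all participating engines concurrently and the returned results are merged into one ranked list:

    interface Result { title: string; url: string; score: number }
    type Engine = (query: string) => Promise<Result[]>;

    async function federatedSearch(query: string, engines: Engine[]): Promise<Result[]> {
      // Distribute the query to every engine in the federation at once,
      // tolerating engines that fail or time out.
      const settled = await Promise.allSettled(engines.map((engine) => engine(query)));
      const merged = settled
        .filter((s): s is PromiseFulfilledResult<Result[]> => s.status === "fulfilled")
        .flatMap((s) => s.value);
      // Aggregate into a single list for presentation to the user.
      return merged.sort((a, b) => b.score - a.score);
    }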

<span class="mw-page-title-main">Search engine</span> Software system that is designed to search for information on the World Wide Web

A search engine is a software system designed to carry out web searches. They search the World Wide Web in a systematic way for particular information specified in a textual web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of links to web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories and social bookmarking sites, which are maintained by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Any internet-based content that can't be indexed and searched by a web search engine falls under the category of deep web.
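
The periodic crawl-and-index cycle can be sketched as follows (TypeScript on Node.js 18+ for the global fetch; the seed URL, depth limit, and crude link extraction are simplifications for illustration):

    const seen = new Set<string>();
    const index = new Map<string, Set<string>>(); // term -> URLs containing it

    async function crawl(url: string, depth: number): Promise<void> {
      if (depth === 0 || seen.has(url)) return;
      seen.add(url);
      const html = await (await fetch(url)).text();
      // Index the page text by term.
      for (const term of html.toLowerCase().split(/\W+/).filter(Boolean)) {
        if (!index.has(term)) index.set(term, new Set());
        index.get(term)!.add(url);
      }
      // Follow outgoing links (very rough href extraction for the sketch).
      const links = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map((m) => m[1]);
      for (const link of links) await crawl(link, depth - 1);
    }

    crawl("https://example.org", 2).catch(console.error);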

Search Engine Results Pages (SERP) are the pages displayed by search engines in response to a query by a user. The main component of the SERP is the listing of results that are returned by the search engine in response to a keyword query.

<span class="mw-page-title-main">HTTP cookie</span> Small pieces of data stored by a web browser while on a website

HTTP cookies are small blocks of data created by a web server while a user is browsing a website and placed on the user's computer or other device by the user's web browser. Cookies are placed on the device used to access a website, and more than one cookie may be placed on a user's device during a session.
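
A minimal sketch of that exchange in TypeScript on Node.js (built-in http module; the cookie name, value, and port are illustrative). The server places a cookie on the first visit via the Set-Cookie header, and the browser returns it automatically on later requests:

    import { createServer } from "node:http";

    createServer((req, res) => {
      const cookies = req.headers.cookie ?? ""; // e.g. "session=abc123"
      if (!cookies.includes("session=")) {
        // First visit: store a small block of data on the client's device.
        res.setHeader("Set-Cookie", "session=abc123; Path=/; HttpOnly");
        res.end("cookie set");
      } else {
        // Later visits: the browser attaches the cookie to the request.
        res.end(`cookie received: ${cookies}`);
      }
    }).listen(8081);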

AppleSearch was a client/server search engine from Apple Computer, first released for the classic Mac OS in 1994.

Social search is the practice of retrieving and searching on a social search engine that mainly indexes user-generated content such as news, videos, and images related to search queries on social media like Facebook, LinkedIn, Twitter, Instagram, and Flickr. It is an enhanced version of web search that combines traditional ranking algorithms with social signals. The idea behind social search is that instead of ranking search results purely on semantic relevance between a query and the results, a social search system also takes into account social relationships between the results and the searcher. These social relationships can take various forms. For example, in the LinkedIn people search engine, they include social connections between the searcher and each result, such as whether they are in the same industry, work for the same company, belong to the same social groups, or went to the same school.
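
A toy sketch of that idea in TypeScript (the scoring weight and the flat set of connections are assumptions for illustration): each candidate result keeps its query-relevance score but gets a boost when it is socially connected to the searcher:

    interface Candidate { id: string; relevance: number } // relevance in [0, 1]

    function socialRank(
      candidates: Candidate[],
      searcherConnections: Set<string>, // ids directly connected to the searcher
    ): Candidate[] {
      return candidates
        .map((c) => ({
          ...c,
          // Combine semantic relevance with a simple social signal.
          score: c.relevance + (searcherConnections.has(c.id) ? 0.5 : 0),
        }))
        .sort((a, b) => b.score - a.score);
    }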

<span class="mw-page-title-main">Web mapping</span> Process of using the maps delivered by geographic information systems (GIS) in World Wide Web

Web mapping, or online mapping, is the process of using maps, usually created with geographic information systems (GIS), on the Internet and more specifically on the World Wide Web (WWW). A web map or online map is both served and consumed, so web mapping is more than web cartography: it is a service by which consumers may choose what the map will show. Web GIS emphasizes geodata processing aspects, such as data acquisition, server software architecture, data storage, and algorithms, more than the end-user reports themselves.

TigerLogic Corporation was an American internet and software development company that designed, developed, sold and supported software infrastructure products. This software was categorized into the following product lines: Yolink search enhancement technology, XML Data Management Server (XDMS), Multidimensional Data Management System (MDMS) and Rapid Application Development (RAD) software tools. TigerLogic was dissolved in 2016, with its MultiValue database products sold to Rocket Software, and its Omnis products sold to UK-based OLS Holdings Ltd.

Collaborative search engines (CSE) are web search engines and enterprise searches within company intranets that let users combine their efforts in information retrieval (IR) activities, share information resources collaboratively using knowledge tags, and allow experts to guide less experienced people through their searches. Collaboration partners do so by providing query terms, collective tagging, adding comments or opinions, rating search results, and sharing the links clicked in former (successful) IR activities with users who have the same or a related information need.

Personalized search refers to web search experiences that are tailored specifically to an individual's interests by incorporating information about the individual beyond the specific query provided. There are two general approaches to personalizing search results, involving modifying the user's query and re-ranking search results.
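
The two approaches can be sketched as follows (TypeScript; the profile shape, topic overlap measure, and weights are assumptions for illustration):

    interface Profile { interests: string[] }
    interface Doc { url: string; topics: string[]; baseScore: number }

    // 1. Query modification: expand the query with terms from the user's profile.
    function personalizeQuery(query: string, profile: Profile): string {
      return [query, ...profile.interests].join(" ");
    }

    // 2. Re-ranking: boost results whose topics overlap the user's interests.
    function rerank(results: Doc[], profile: Profile): Doc[] {
      return results
        .map((doc) => {
          const overlap = doc.topics.filter((t) => profile.interests.includes(t)).length;
          return { ...doc, baseScore: doc.baseScore + 0.1 * overlap };
        })
        .sort((a, b) => b.baseScore - a.baseScore);
    }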

Yandex Search is a search engine owned by Yandex, a company based in Russia. In January 2015, Yandex Search generated 51.2% of all search traffic in Russia, according to LiveInternet.

Internet censorship circumvention is the use of various methods and tools to bypass internet censorship.

References

  1. "Relevance meets the real-time web".