Resources of a Resource

Resources of a Resource (ROR) is an XML format for describing the content of an internet resource or website in a generic fashion, so that this content can be better understood by search engines, spiders, web applications, etc. The ROR format provides several pre-defined terms for describing objects such as sitemaps, products, events, reviews, jobs, and classifieds; the format can also be extended with custom terms.

RORweb.com is the official website of ROR; the ROR format was created by AddMe.com as a way to help search engines better understand content and meaning. Similar concepts, like Google Sitemaps and Google Base, have also been developed since the introduction of the ROR format.

ROR objects are placed in an ROR feed called ror.xml. This file is typically located in the root directory of the resource or website it describes. When a search engine such as Google or Yahoo crawls the web to determine how to categorize content, the ROR feed allows the search engine's "spider" to quickly identify all the content and attributes of the website.

This has three main benefits:

  1. It allows the spider to correctly categorize the website's content within the search engine.
  2. It allows the spider to extract very detailed information about the objects on a website (sitemaps, products, events, reviews, jobs, classifieds, etc.).
  3. It allows website owners to optimize their sites so that their content is included in the search engines.
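
As an illustration, a minimal ror.xml feed describing a simple sitemap object might look something like the sketch below. The overall structure (an RSS 2.0 document extended with a ror: namespace) and the specific ror: element names are assumptions made for this example; the normative terms are defined by the ROR specification at RORweb.com.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Hypothetical ror.xml sketch; the ror: element names are illustrative,
       not quoted from the ROR specification. -->
  <rss version="2.0" xmlns:ror="http://rorweb.com/0.1/">
    <channel>
      <title>ROR feed for http://www.example.com/</title>
      <link>http://www.example.com/</link>
      <item>
        <title>Home page</title>
        <link>http://www.example.com/</link>
        <ror:type>sitemap</ror:type>
        <ror:updatePeriod>week</ror:updatePeriod>
        <ror:sortOrder>0</ror:sortOrder>
      </item>
    </channel>
  </rss>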


Related Research Articles

The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.

The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a standard used by websites to communicate with web crawlers and other web robots. The standard specifies how to inform the web robot about which areas of the website should not be processed or scanned. Robots are often used by search engines to categorize websites. Not all robots cooperate with the standard; email harvesters, spambots, malware and robots that scan for security vulnerabilities may even start with the portions of the website where they have been told to stay out. The standard can be used in conjunction with Sitemaps, a robot inclusion standard for websites.
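
For example, a robots.txt file placed at the root of a site might read as follows; the paths are placeholders, and the optional Sitemap line shows how the exclusion standard is commonly combined with the Sitemaps inclusion standard.

  User-agent: *
  Disallow: /private/
  Disallow: /tmp/
  Sitemap: http://www.example.com/sitemap.xml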

Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.

Web syndication is a form of syndication in which content is made available from one website to other sites. Most commonly, it provides other sites with either summaries or full renditions of a website's recently added content. The term may also describe other kinds of content licensing for reuse.

Google AdSense is a program run by Google through which website publishers in the Google Network of content sites serve text, images, video, or interactive media advertisements that are targeted to the site content and audience. These advertisements are administered, sorted, and maintained by Google. They can generate revenue on either a per-click or per-impression basis. Google beta-tested a cost-per-action service, but discontinued it in October 2008 in favor of a DoubleClick offering. In Q1 2014, Google earned US$3.4 billion, or 22% of total revenue, through Google AdSense. AdSense is a participant in the AdChoices program, so AdSense ads typically include the triangle-shaped AdChoices icon. The program also relies on HTTP cookies. In 2021, over 38.3 million websites used AdSense.

The Ror is a caste found mainly in northern India.

Google Alerts is a content change detection and notification service, offered by the search engine company Google. The service sends emails to the user when it finds new results—such as web pages, newspaper articles, blogs, or scientific research—that match the user's search term(s). In 2003, Google launched Google Alerts, which were the result of Naga Kataru's efforts. His name is on the three patents for Google Alerts.

OpenSearch is a collection of technologies that allow publishing of search results in a format suitable for syndication and aggregation. It is a way for websites and search engines to publish search results in a standard and accessible format.
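
For instance, a site can advertise its search interface with an OpenSearch description document along these lines; the ShortName, Description, and template URL below are placeholders.

  <?xml version="1.0" encoding="UTF-8"?>
  <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
    <ShortName>Example Search</ShortName>
    <Description>Search the content of example.com</Description>
    <!-- The template tells clients how to build a search request;
         {searchTerms} is replaced with the user's query. -->
    <Url type="application/rss+xml"
         template="http://www.example.com/search?q={searchTerms}"/>
  </OpenSearchDescription>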

In computing, a news aggregator, also termed a feed aggregator, feed reader, news reader, RSS reader or simply an aggregator, is client software or a web application that aggregates syndicated web content such as online newspapers, blogs, podcasts, and video blogs (vlogs) in one location for easy viewing. The updates distributed may include journal tables of contents, podcasts, videos, and news items.

A site map is a list of pages of a web site within a domain.

The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs of the site. This allows search engines to crawl the site more efficiently and to find URLs that may be isolated from the rest of the site's content. The Sitemaps protocol is a URL inclusion protocol and complements robots.txt, a URL exclusion protocol.
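
A minimal Sitemap file following the protocol looks like the example below; the URL and the optional per-URL values are placeholders.

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>http://www.example.com/</loc>
      <!-- Optional hints for crawlers: last modification date,
           expected change frequency, and relative priority. -->
      <lastmod>2024-01-15</lastmod>
      <changefreq>monthly</changefreq>
      <priority>0.8</priority>
    </url>
  </urlset>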

In blogging, a ping is an XML-RPC-based push mechanism by which a weblog notifies a server that its content has been updated. An XML-RPC signal is sent from the weblog to one or more "Ping Server(s)" (as specified by the weblog) to notify a list of their "Services" of new content on the weblog.
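
A typical ping is a small XML-RPC request such as the one below, sent by HTTP POST to a ping server. The method name weblogUpdates.ping is the one popularized by Weblogs.com-style ping servers (individual servers may differ), and the weblog name and URL are placeholders.

  <?xml version="1.0"?>
  <methodCall>
    <!-- weblogUpdates.ping(weblogName, weblogUrl) -->
    <methodName>weblogUpdates.ping</methodName>
    <params>
      <param><value><string>Example Weblog</string></value></param>
      <param><value><string>http://www.example.com/blog/</string></value></param>
    </params>
  </methodCall>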

Google Base was a database provided by Google into which any user could add almost any type of content, such as text, images, and structured information in formats such as XML, PDF, Excel, RTF, or WordPerfect. In September 2010, the product was downgraded to Google Merchant Center.

A product feed or product data feed is a file made up of a list of products and attributes of those products organized so that each product can be displayed, advertised or compared in a unique way. A product feed typically contains a product image, title, product identifier, marketing copy, and product attributes.
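
As a sketch, a small XML product feed might carry entries of the following general shape; the element names are illustrative and do not reflect any particular merchant platform's required schema.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Illustrative product feed; element names are hypothetical. -->
  <products>
    <product>
      <id>SKU-1001</id>
      <title>Example Widget</title>
      <description>Short marketing copy describing the widget.</description>
      <link>http://www.example.com/products/widget</link>
      <image_link>http://www.example.com/images/widget.jpg</image_link>
      <price>19.99 USD</price>
      <brand>ExampleCo</brand>
    </product>
  </products>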

A search engine is a software system that is designed to carry out web searches, which means to search the World Wide Web in a systematic way for particular information specified in a textual web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of links to web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web.

A search aggregator is a type of metasearch engine which gathers results from multiple search engines simultaneously, typically through RSS search results. It combines user-specified search feeds to give the user the same level of control over content as a general aggregator.

A Biositemap is a way for a biomedical research institution or organisation to show how biological information is distributed throughout its information technology systems and networks. This information may be shared with other organisations and researchers.

Metadata is "data that provides information about other data". In other words, it is "data about data". Many distinct types of metadata exist, including descriptive metadata, structural metadata, administrative metadata, reference metadata, statistical metadata, and legal metadata.

PowerMapper is a web crawler that automatically creates a site map of a website using thumbnails of each web page. A number of map styles are available, although the cheaper Standard edition has fewer styles than the Professional edition.

papaya CMS is a free, open-source content management system that complies with open standards, using XML as its data format, XSLT as its templating language, and PHP as its programming language.