Martijn Koster

Born: 1970 (age 51–52)
Nationality: Dutch
Occupation: Software engineer
Known for: ALIWEB, robots.txt

Martijn Koster (born c. 1970) is a Dutch software engineer noted for his pioneering work on Internet search.

Koster created ALIWEB, considered the first Web search engine, which he announced in November 1993[1] while working at Nexor and presented in May 1994[2] at the First International Conference on the World Wide Web. Koster also developed ArchiePlex,[3] a search engine for FTP sites (a file-transfer system that predates the Web), and CUSI,[4] a simple tool that let users query several search engines in quick succession, which was useful in the early days of search when different services returned widely varying results.

Koster also created the Robots Exclusion Standard.[5]

Related Research Articles

<span class="mw-page-title-main">Web crawler</span> Software which systematically browses the World Wide Web

A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing.
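
As a concrete illustration of that crawl-and-follow behaviour, here is a minimal sketch in Python using only the standard library; the seed URL, the page limit, and the act of merely printing each fetched page are assumptions made for the example, not how any particular engine works.

    # Minimal breadth-first crawler sketch (illustrative only).
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collect the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=10):
        """Fetch pages breadth-first, queueing every new link found."""
        seen, queue, fetched = {seed}, deque([seed]), 0
        while queue and fetched < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except OSError:
                continue  # skip unreachable pages
            fetched += 1
            print("fetched:", url)  # a real engine would index the page here
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)  # resolve relative links
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)

    crawl("https://example.com/")  # hypothetical seed URL

A production crawler would additionally honour robots.txt (see below), throttle requests per host, and persist what it fetches.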

Spamdexing is the deliberate manipulation of search engine indexes. It involves a number of methods, such as link building and repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed, in a manner inconsistent with the purpose of the indexing system.

Archie is a tool for indexing FTP archives, allowing users to more easily identify specific files. It is considered the first Internet search engine. The original implementation was written in 1990 by Alan Emtage, then a postgraduate student at McGill University in Montreal, Canada.

The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a standard used by websites to communicate with web crawlers and other web robots.
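
For illustration, Python's standard urllib.robotparser module can evaluate such rules; the rules, bot names, and URLs below are invented for the example and do not describe any real site.

    # Evaluating a hypothetical robots.txt with the standard library.
    from urllib.robotparser import RobotFileParser

    # Invented rules: "examplebot" may not enter /private/,
    # and no crawler may enter /tmp/.
    rules = [
        "User-agent: examplebot",
        "Disallow: /private/",
        "",
        "User-agent: *",
        "Disallow: /tmp/",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    print(parser.can_fetch("examplebot", "https://example.com/private/a.html"))  # False
    print(parser.can_fetch("examplebot", "https://example.com/index.html"))      # True
    print(parser.can_fetch("otherbot", "https://example.com/tmp/cache.html"))    # False

Note that the standard is advisory: compliant crawlers check these rules before fetching, but nothing in the protocol enforces them.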

<span class="mw-page-title-main">Search engine optimization</span> Practice of increasing online visibility in search engine results pages

Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.

In the context of the World Wide Web, deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website, rather than the website's home page. The URL contains all the information needed to point to a particular item. Deep linking is different from mobile deep linking, which refers to directly linking to in-app content using a non-HTTP URI.
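
For example, on a hypothetical site, https://example.com/ addresses only the home page, while https://example.com/articles/2023/early-search-engines is a deep link identifying one specific article; the path in the URL carries all the information needed to reach that item directly.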

The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search engines. This is in contrast to the "surface web", which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with coining the term in 2001 as a search-indexing term.

ALIWEB is considered the first Web search engine, as its predecessors were either built with different purposes or were only indexers.

<span class="mw-page-title-main">Alan Emtage</span> Bajan computer scientist

Alan Emtage is a Bajan-Canadian computer scientist who conceived and implemented the first version of Archie, a pre-Web Internet search engine for locating material in public FTP archives. It is widely considered the world's first Internet search engine.

The World Wide Web Worm (WWWW) was one of the earliest search engines for the World Wide Web (WWW). It is claimed by some to be the first search engine, though it was not released until March 1994, by which time a number of other search engines had been made publicly available. It was developed in September 1993 by Oliver McBryan at the University of Colorado as a research project.

<span class="mw-page-title-main">Search engine</span> Software system that is designed to search for information on the World Wide Web

A search engine is a software system designed to carry out web searches. It searches the World Wide Web in a systematic way for particular information specified in a textual web search query. The results are generally presented as a list, on what are often called search engine results pages (SERPs). The information may be a mix of links to web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories and social bookmarking sites, which are maintained by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Any Internet-based content that cannot be indexed and searched by a web search engine falls under the category of deep web.

<span class="mw-page-title-main">History of the World Wide Web</span> Information system running in the Internet

The World Wide Web is a global information medium which users can access via computers connected to the Internet. The term is often mistakenly used as a synonym for the Internet, but the Web is a service that operates over the Internet, just as email and Usenet do. The history of the Internet and the history of hypertext date back significantly farther than that of the World Wide Web.

A search engine is an information retrieval software program that discovers, crawls, transforms and stores information for retrieval and presentation in response to user queries.

Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.
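
As a sketch of the central data structure, the snippet below builds a tiny inverted index, a mapping from each term to the set of documents containing it, and answers a query by intersecting those sets; the three sample documents and the naive whitespace tokenization are assumptions made for the example.

    # Minimal inverted-index sketch: term -> set of document ids.
    from collections import defaultdict

    docs = {  # invented sample corpus
        1: "aliweb was an early web search engine",
        2: "archie indexed ftp archives before the web",
        3: "a web crawler feeds pages to the search index",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():  # naive whitespace tokenization
            index[term].add(doc_id)

    def search(query):
        """Return ids of documents containing every query term (AND)."""
        terms = query.split()
        if not terms:
            return []
        results = set(index[terms[0]])
        for term in terms[1:]:
            results &= index[term]  # set intersection implements AND
        return sorted(results)

    print(search("web search"))  # -> [1, 3]

Real indexes add much more (term positions for phrase queries, stemming, ranking signals), but the term-to-document mapping is what makes lookups fast.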

<span class="mw-page-title-main">JumpStation</span>

JumpStation was the first WWW search engine that behaved, and appeared to the user, the way current web search engines do. It started indexing on 12 December 1993 and was announced on the Mosaic "What's New" webpage on 21 December 1993. It was hosted at the University of Stirling in Scotland.

W3 Catalog was an early web search engine, first released on September 2, 1993 by developer Oscar Nierstrasz at the University of Geneva.

<span class="mw-page-title-main">First International Conference on the World-Wide Web</span>

The First International Conference on the World-Wide Web was the first-ever conference about the World Wide Web, and the first meeting of what became the International World Wide Web Conference. It was held from May 25 to 27, 1994, in Geneva, Switzerland. The conference had 380 participants, who were accepted out of 800 applicants. It has been referred to as the "Woodstock of the Web".

tkWWW

tkWWW is an early, now discontinued web browser and WYSIWYG HTML editor written by Joseph Wang at MIT as part of Project Athena and the Globewide Network Academy project. The browser was based on the Tcl language and the Tk toolkit extension, but did not achieve broad user acceptance or market share, although it was included in many Linux distributions by default. Joseph Wang wanted tkWWW to become a replacement for rn and to become a "Swiss Army knife" of networked computing.

<span class="mw-page-title-main">Nexor</span>

Nexor Limited is a privately held company based in Nottingham, providing products and services to safeguard government, defence, and critical national infrastructure computer systems. It was originally known as X-Tel Services Limited.

References

  1. Martijn Koster (30 November 1993). "ANNOUNCEMENT: ALIWEB (Archie-Like Indexing for the WEB)". comp.infosystems (plaintext version).
  2. "List of PostScript files for the WWW94 advance proceedings". First International Conference on the World-Wide Web. June 1994. Title: "Aliweb - Archie-Like Indexing in the Web." Author: Martijn Koster. Institute: NEXOR Ltd., UK. PostScript, Size: 213616, Printed: 10 pages
  3. "ArchiePlex" . Retrieved 4 January 2013.
  4. "CUSI" . Retrieved 4 January 2013.
  5. Martijn Koster. "Robots Exclusion". robotstxt.org.