| Owner | University of Toronto |
| Created by | Gunther Eysenbach |
| Current status | Online, read-only |
WebCite is an on-demand archiving service, designed to digitally preserve scientific and educationally important material on the web by taking snapshots of Internet content as it existed at the time a blogger or scholar cited or quoted it. The service enables verifiability of claims supported by cited sources even when the original web pages are revised, removed, or otherwise disappear, a phenomenon known as link rot.
As of July 2019, WebCite no longer accepts new archiving requests but continues to serve existing archives.
WebCite is a non-profit consortium supported by publishers and editors, and individuals can use it without charge. It was one of the first services to offer on-demand archiving of pages, a feature later adopted by many other archiving services. It does not crawl the web.
Conceived in 1997 by Gunther Eysenbach, WebCite was publicly described the following year, when an article on Internet quality control suggested that such a service could also measure the citation impact of web pages. The next year, a pilot service was set up at the address webcite.net. Although the need for WebCite seemed to decrease once Google Cache began offering Google's short-term copies of web pages and the Internet Archive expanded its crawling (which had started in 1996), WebCite remained the only service allowing "on-demand" archiving by users. WebCite also offered interfaces to scholarly journals and publishers to automate the archiving of cited links. By 2008, over 200 journals had begun routinely using WebCite.
WebCite was formerly a member of the International Internet Preservation Consortium. In a 2012 message on Twitter, Eysenbach commented that "WebCite has no funding, and IIPC charges €4000 per year in annual membership fees."
WebCite "feeds its content" to other digital preservation projects, including the Internet Archive. Lawrence Lessig, an American academic who writes extensively on copyright and technology, used WebCite in his amicus brief in the United States Supreme Court case MGM Studios, Inc. v. Grokster, Ltd.
Starting in January 2013, WebCite ran a fund-raising campaign on FundRazr with a target of $22,500, a sum its operators stated was needed to maintain and modernize the service beyond the end of 2013, including relocating it to Amazon EC2 cloud hosting and covering legal support. As of 2013, it remained undecided whether WebCite would continue as a non-profit or a for-profit entity.
WebCite allows on-demand prospective archiving. It is not crawler-based; pages are archived only if the citing author or publisher requests it. No cached copy will appear in a WebCite search unless the author or another person has specifically cached it beforehand.
To initiate the caching and archiving of a page, an author may use WebCite's "archive" menu option or a WebCite bookmarklet that lets web surfers cache pages by clicking a button in their bookmarks folder.
One can retrieve or cite archived pages through a transparent query format taking two parameters: URL, the URL that was archived, and DATE, the caching date. For example, an archived copy can be requested by a query giving these parameters, or by the alternate short form http://webcitation.org/5W56XTY5h, which retrieves an archived copy of the URL http://en.wikipedia.org/wiki/Main_Page that is closest to the date of March 4, 2008. The ID (5W56XTY5h) is the UNIX time in base 62.
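The base-62 ID scheme can be sketched in Python. The digit alphabet (0-9, then lowercase, then uppercase) and the sub-second resolution of the timestamp are assumptions here; the article only states that the ID is the UNIX time written in base 62.

```python
# Sketch of a WebCite-style base-62 ID codec. The alphabet ordering below is
# an assumption, not WebCite's documented scheme.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"


def encode_base62(n: int) -> str:
    """Write a non-negative integer in base 62 using ALPHABET as digits."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))


def decode_base62(s: str) -> int:
    """Inverse of encode_base62."""
    value = 0
    for ch in s:
        value = value * 62 + ALPHABET.index(ch)
    return value


# Round-trip a UNIX timestamp from early March 2008, in microseconds
# (microsecond resolution is assumed; a 9-digit base-62 ID implies a value
# far larger than a seconds-level timestamp).
ts_us = 1204588800 * 1_000_000
assert decode_base62(encode_base62(ts_us)) == ts_us
```

A nine-character ID can hold values up to 62^9, which comfortably covers microsecond-resolution timestamps while staying short enough for citation footnotes.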
WebCite does not archive pages that contain a no-cache tag, respecting the author's request not to have their web page cached.
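A minimal sketch of such an opt-out check, assuming the no-cache signal arrives as a robots meta tag in the page's HTML; the function names and the exact directive set are illustrative, not WebCite's actual implementation.

```python
# Sketch: detect "noarchive"/"nocache" robots meta directives before caching
# a page. Directive names and helper names are illustrative assumptions.
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content values of <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and attr.get("name", "").lower() == "robots":
            content = attr.get("content", "")
            self.directives |= {d.strip().lower() for d in content.split(",")}


def may_archive(html: str) -> bool:
    """Refuse to cache when the author has opted out via noarchive/nocache."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return not ({"noarchive", "nocache"} & parser.directives)
```

A complete archiver would also consult robots.txt (e.g. via `urllib.robotparser`) before fetching the page at all.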
One can archive a page by simply navigating in their browser to an archiving link with two parameters: urltoarchive, replaced with the full URL of the page to be archived, and youremail, replaced with their e-mail address. This is how the WebCite bookmarklet works.
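Building such a link can be sketched as follows. The "/archive" endpoint path and the exact parameter names in the query string are assumptions based on the description above, not a documented API.

```python
# Sketch: construct a WebCite-style archiving request URL. The endpoint path
# ("/archive") and parameter names ("url", "email") are assumptions.
from urllib.parse import urlencode


def webcite_archive_link(urltoarchive: str, youremail: str) -> str:
    """Return an archiving link for the given page and submitter e-mail.

    urlencode percent-encodes both values so the nested URL survives
    being embedded in a query string.
    """
    query = urlencode({"url": urltoarchive, "email": youremail})
    return "http://www.webcitation.org/archive?" + query


link = webcite_archive_link(
    "http://en.wikipedia.org/wiki/Main_Page", "user@example.org"
)
```

A bookmarklet would build this same link from `location.href` and navigate to it.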
Once a page is archived on WebCite, users can create an independent second-level backup of the original URL by submitting the resulting webcitation.org URL to web.archive.org and to archive.is. A browser add-on for archiving makes this more convenient.
The term "WebCite" is a registered trademark. WebCite does not charge individual users, journal editors, or publishers any fee to use its service. It earns revenue from publishers who want to "have their publications analyzed and cited webreferences archived", and it accepts donations. Early support came from the University of Toronto.
WebCite maintains the legal position that its archiving activities are allowed by the copyright doctrines of fair use and implied license. To support the fair-use argument, WebCite notes that its archived copies are transformative, socially valuable for academic research, and not harmful to the market value of any copyrighted work. WebCite argues that caching and archiving web pages is not copyright infringement when the archiver offers the copyright owner an opportunity to "opt out" of the archive system, thus creating an implied license. To that end, WebCite will not archive pages whose "do-not-cache" or "no-archive" metadata, or robot exclusion standards, forbid it; the absence of such signals creates an "implied license" for web archive services to preserve the content.
In a similar case involving Google's web caching activities, on January 19, 2006, the United States District Court for the District of Nevada agreed with that argument in Field v. Google (CV-S-04-0413-RCJ-LRL), holding that fair use and an "implied license" meant that Google's caching of web pages did not constitute copyright violation. The "implied license" referred to general Internet standards.
According to its policy, after receiving legitimate DMCA requests from copyright holders, WebCite removes saved pages from public access, as the archived pages remain under the safe harbor of being citations. The pages are moved to a "dark archive", and in cases of legal controversy or evidence requests, pay-per-view access to the copyrighted content is available at "$200 (up to 5 snapshots) plus $100 for each further 10 snapshots".
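The quoted pricing can be turned into a small worked example. The article does not say how a partial block of extra snapshots is billed, so rounding each started block of 10 up to a full $100 charge is an assumption made here.

```python
# Sketch: fee for pay-per-view access to "dark archive" snapshots, per the
# quoted pricing of $200 (up to 5 snapshots) plus $100 per further 10.
# Billing each *started* block of 10 in full is an assumption, not stated policy.
import math


def dark_archive_fee(snapshots: int) -> int:
    """Fee in USD for accessing the given number of removed snapshots."""
    if snapshots <= 0:
        return 0
    extra = max(0, snapshots - 5)          # snapshots beyond the first 5
    return 200 + 100 * math.ceil(extra / 10)
```

Under this reading, 5 snapshots cost $200, 15 snapshots cost $300, and 16 snapshots start a second extra block at $400.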
The World Wide Web (WWW), commonly known as the Web, is an information system where documents and other web resources are identified by Uniform Resource Locators, which may be interlinked by hypertext, and are accessible over the Internet. The resources of the Web are transferred via the Hypertext Transfer Protocol (HTTP) and may be accessed by users by a software application called a web browser and are published by a software application called a web server. The World Wide Web is not synonymous with the Internet, which pre-existed the Web in some form by over two decades and upon whose technologies the Web is built.
The HTTP 404, 404 Not Found, 404, 404 Error, Page Not Found, File Not Found, or Server Not Found error message is a Hypertext Transfer Protocol (HTTP) standard response code, in computer network communications, to indicate that the browser was able to communicate with a given server, but the server could not find what was requested. The error may also be used when a server does not wish to disclose whether it has the requested information.
In computer networking, a proxy server is a server application or appliance that acts as an intermediary for requests from clients seeking resources from servers that provide those resources. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.
In the context of the World Wide Web, deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website, rather than the website's home page. The URL contains all the information needed to point to a particular item.
Inline linking is the use of a linked object, often an image, on one site by a web page belonging to a second site. One site is said to have an inline link to the other site where the object is located.
Link rot is the phenomenon of hyperlinks tending over time to cease to point to their originally targeted file, web page, or server due to that resource being relocated or becoming permanently unavailable. A link that no longer points to its target, often called a broken or dead link, is a specific form of dangling pointer.
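Detecting link rot can be illustrated with a minimal checker. The fetch function is injected as a parameter so the sketch stays testable offline, and classifying any 4xx/5xx response or network failure as a dead link is a simplification (real checkers also follow redirects and retry transient errors).

```python
# Sketch: minimal dead-link detector. The injected fetch_status callable is
# an illustrative abstraction, not part of any real checking library.
from typing import Callable, Optional


def is_link_rotten(url: str, fetch_status: Callable[[str], Optional[int]]) -> bool:
    """Return True when the URL no longer resolves to its target.

    fetch_status performs an HTTP request (e.g. a HEAD request via urllib)
    and returns the status code, or None on a DNS/network failure. Treating
    every 4xx/5xx status as rot is a deliberate simplification.
    """
    status = fetch_status(url)
    return status is None or status >= 400
```

In practice `fetch_status` could wrap `urllib.request.urlopen`, mapping `HTTPError` to its code and `URLError` to `None`.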
A Web cache is an information technology for the temporary storage (caching) of Web documents, such as Web pages, images, and other types of Web multimedia, to reduce server lag. A Web cache system stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met. A Web cache system can refer either to an appliance or to a computer program.
A persistent uniform resource locator (PURL) is a uniform resource locator (URL) that is used to redirect to the location of the requested web resource. PURLs redirect HTTP clients using HTTP status codes.
The LOCKSS project, under the auspices of Stanford University, is a peer-to-peer network that develops and supports an open source system allowing libraries to collect, preserve and provide their readers with access to material published on the Web. Its main goal is digital preservation.
URL shortening is a technique on the World Wide Web in which a Uniform Resource Locator (URL) may be made substantially shorter and still direct to the required page. This is achieved by using a redirect which links to the web page that has a long URL. For example, the URL "https://example.com/assets/category_B/subcategory_C/Foo/" can be shortened to "https://example.com/Foo", and the URL "https://en.wikipedia.org/wiki/URL_shortening" can be shortened to "https://w.wiki/U". Often the redirect domain name is shorter than the original one. A friendly URL may be desired for messaging technologies that limit the number of characters in a message, for reducing the amount of typing required if the reader is copying a URL from a print source, for making it easier for a person to remember, or for the intention of a permalink. In November 2009, the shortened links of the URL shortening service Bitly were accessed 2.1 billion times.
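The redirect technique described above can be sketched as an in-memory shortener. The sequential base-62 codes, the class name, and the short domain are all illustrative assumptions; a real service would persist the mapping and answer lookups with an HTTP 301 redirect.

```python
# Sketch: in-memory URL shortener. Codes are sequential integers written in
# base 62; the short domain "https://sho.rt/" is a made-up placeholder.
import string


class UrlShortener:
    """Maps long URLs to short codes and back (the redirect table)."""

    ALPHABET = string.digits + string.ascii_letters  # 62 "digits"

    def __init__(self, domain: str = "https://sho.rt/"):
        self.domain = domain
        self.by_code: dict = {}   # short code -> long URL
        self.by_url: dict = {}    # long URL  -> short code

    def _encode(self, n: int) -> str:
        digits = ""
        while True:
            n, rem = divmod(n, 62)
            digits = self.ALPHABET[rem] + digits
            if n == 0:
                return digits

    def shorten(self, long_url: str) -> str:
        """Return a short URL, reusing the code for already-seen URLs."""
        if long_url not in self.by_url:
            code = self._encode(len(self.by_code))
            self.by_code[code] = long_url
            self.by_url[long_url] = code
        return self.domain + self.by_url[long_url]

    def expand(self, short_url: str) -> str:
        """Resolve a short URL back to its target (the 301 redirect step)."""
        return self.by_code[short_url[len(self.domain):]]
```

The same table lookup is what a redirect server performs on each request before emitting `Location:` with the long URL.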
The ISC license is a permissive free software license published by the Internet Software Consortium, nowadays called Internet Systems Consortium (ISC). It is functionally equivalent to the simplified BSD and MIT licenses, but without language deemed unnecessary following the Berne Convention.
Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture due to the massive size and amount of information on the Web. The largest web archiving organization based on a bulk crawling approach is the Wayback Machine, which strives to maintain an archive of the entire Web.
Internet censorship is the control or suppression of what can be accessed, published, or viewed on the Internet enacted by regulators, or on their own initiative. Individuals and organizations may engage in self-censorship for moral, religious, or business reasons, to conform to societal norms, due to intimidation, or out of fear of legal or other consequences.
Most Internet censorship in Thailand prior to the September 2006 military coup d'état was focused on blocking pornographic websites. The following years have seen a constant stream of sometimes violent protests, regional unrest, emergency decrees, a new cybercrimes law, and an updated Internal Security Act. Year by year Internet censorship has grown, with its focus shifting to lèse majesté, national security, and political issues. By 2010, estimates put the number of websites blocked at over 110,000. In December 2011, a dedicated government operation, the Cyber Security Operation Center, was opened. Between its opening and March 2014, the Center told ISPs to block 22,599 URLs.
Field v. Google, Inc., 412 F.Supp. 2d 1106 is a case where Google Inc. successfully defended a lawsuit for copyright infringement. Field argued that Google infringed his exclusive right to reproduce his copyrighted works when it "cached" his website and made a copy of it available on its search engine. Google raised multiple defenses: fair use, implied license, estoppel, and Digital Millennium Copyright Act safe harbor protection. The court granted Google's motion for summary judgment and denied Field's motion for summary judgment.
The Wayback Machine is a digital archive of the World Wide Web, founded by the Internet Archive, a nonprofit organization based in San Francisco. It allows the user to go “back in time” and see what websites looked like in the past. Its founders, Brewster Kahle and Bruce Gilliat, developed the Wayback Machine with the intention of providing "universal access to all knowledge" by preserving archived copies of defunct webpages.
Gerrit is a free, web-based team code collaboration tool. Software developers in a team can review each other's modifications on their source code using a Web browser and approve or reject those changes. It integrates closely with Git, a distributed version control system.
Search engine cache is a cache of web pages that shows the page as it was when it was indexed by a web crawler. Cached versions of web pages can be used to view the contents of a page when the live version cannot be reached, has been altered or taken down.