Wayback Machine

Owner: Internet Archive
Website: web.archive.org
Alexa rank: 253 (February 2019) [1]
Launched: October 24, 2001 [2] [3]
Current status: Active
Written in: Java, Python

The Wayback Machine is a digital archive of the World Wide Web and other information on the Internet. It was launched in 2001 by the Internet Archive, a nonprofit organization based in San Francisco, California, United States.

Internet Archive founders Brewster Kahle and Bruce Gilliat launched the Wayback Machine in 2001 to address the problem of web content vanishing when pages are changed or sites are shut down. [4] The service enables users to see archived versions of web pages across time, which the archive calls a "three dimensional index". [5] Kahle and Gilliat created the machine hoping to archive the entire Internet and provide "universal access to all knowledge." [6]

Bruce Gilliat is co-founder and former chief executive officer of Alexa Internet.

The name Wayback Machine was chosen as a reference to the "WABAC machine" (pronounced way-back), a time-traveling device used by the characters Mr. Peabody and Sherman in The Rocky and Bullwinkle Show, an animated cartoon. [7] [8] In one of the animated cartoon's component segments, Peabody's Improbable History, the characters routinely used the machine to witness, participate in, and, more often than not, alter famous events in history.

The Wayback Machine began archiving cached web pages in 1996, with the goal of making the service public five years later. [9] From 1996 to 2001, the information was kept on digital tape, with Kahle occasionally allowing researchers and scientists to tap into the clunky database. [10] When the archive reached its fifth anniversary in 2001, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. [11] By the time the Wayback Machine launched, it already contained over 10 billion archived pages. [12]

Today, the data is stored on the Internet Archive's large cluster of Linux nodes. [6] The Wayback Machine revisits and archives new versions of websites on occasion (see technical details below). [13] Sites can also be captured manually by entering a website's URL into the search box, provided that the website allows the Wayback Machine to "crawl" it and save the data. [9]
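
Archived versions can also be looked up programmatically. The following is a minimal Python sketch assuming the Wayback Machine's public "availability" API at archive.org/wayback/available; the endpoint, parameters, and response shape shown are assumptions for illustration, not taken from this article.

```python
import json

# Hedged sketch of querying the Wayback Machine availability API.
# Endpoint and JSON shape are assumptions, not guaranteed by this article.
API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build the query URL for the closest archived snapshot of `url`."""
    query = f"{API}?url={url}"
    if timestamp is not None:  # YYYYMMDDhhmmss; snapshot closest to this time
        query += f"&timestamp={timestamp}"
    return query

def closest_snapshot(response_text):
    """Extract the closest snapshot URL from an API response, if any."""
    data = json.loads(response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

# Abbreviated example of the JSON the API is assumed to return:
sample_response = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20150101000000/http://example.com/",
            "timestamp": "20150101000000",
            "status": "200",
        }
    }
})
```

Note that the snapshot URL embeds a fourteen-digit timestamp followed by the original URL, which is how the Wayback Machine addresses individual captures.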

Technical details

Software has been developed to "crawl" the web and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews (Usenet) bulletin board system, and downloadable software. [14] The information collected by these "crawlers" does not include all the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. To overcome inconsistencies in partially cached websites, Archive-It.org was developed in 2005 by the Internet Archive as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content, and create digital archives. [15]

Crawls are contributed from various sources, some imported from third parties and others generated internally by the Archive. [13] For example, crawls have been contributed by the Sloan Foundation and Alexa, crawls have been run by the Internet Archive on behalf of NARA and the Internet Memory Foundation, and mirrors of Common Crawl have been imported. [13] The "Worldwide Web Crawls" have been running since 2010 and capture the global Web. [16] [13]

The frequency of snapshot captures varies per website. [13] Websites in the "Worldwide Web Crawls" are included in a "crawl list", with the site archived once per crawl. [13] A crawl can take months or even years to complete depending on size. [13] For example, "Wide Crawl Number 13" started on January 9, 2015, and completed on July 11, 2016. [17] However, there may be multiple crawls ongoing at any one time, and a site might be included in more than one crawl list, so how often a site is crawled varies widely. [13]
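
Why capture frequency varies can be shown with a toy model: each crawl archives every site on its list once, so a site on several concurrent crawl lists is captured several times per cycle. The crawl-list names below are invented for illustration.

```python
# Toy model, not Internet Archive code: a site's capture count per crawl
# cycle equals the number of concurrent crawl lists that include it.
# List and site names here are invented.
crawl_lists = {
    "wide_crawl": ["example.com", "example.org"],
    "donated_crawl": ["example.com"],
    "mirrored_crawl": ["example.net", "example.com"],
}

def captures_per_cycle(site):
    """Snapshots `site` receives if every crawl completes one pass."""
    return sum(site in sites for sites in crawl_lists.values())
```

A site appearing on all three lists is captured three times per cycle, while a site on a single list is captured once, matching the wide variation in snapshot frequency described above.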

Storage capacity and growth

As technology has developed over the years, the storage capacity of the Wayback Machine has grown. In 2003, after only two years of public access, the Wayback Machine was growing at a rate of 12 terabytes/month. The data is stored on PetaBox rack systems custom designed by Internet Archive staff. The first 100TB rack became fully operational in June 2004, although it soon became clear that they would need much more storage than that. [18] [19]

The Internet Archive migrated its customized storage architecture to Sun Open Storage in 2009, and hosts a new data center in a Sun Modular Datacenter on Sun Microsystems' California campus. [20] As of 2009, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month. [21]

A new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing in 2011. [22] In March that year, it was said on the Wayback Machine forum that "the Beta of the new Wayback Machine has a more complete and up-to-date index of all crawled materials into 2010, and will continue to be updated regularly. The index driving the classic Wayback Machine only has a little bit of material past 2008, and no further index updates are planned, as it will be phased out this year." [23] Also in 2011, the Internet Archive installed their sixth pair of PetaBox racks which increased the Wayback Machine's storage capacity by 700 terabytes. [24]

In January 2013, the company announced a milestone of 240 billion archived URLs. [25] In October 2013, the company announced the "Save a Page" feature, [26] which allows any Internet user to archive the contents of a URL. The feature later became a target for abuse, with the service being used to host malicious binaries. [27] [28]
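
"Save a Page" works by requesting a capture of a single URL. As a hedged sketch: captures are commonly triggered by fetching web.archive.org/save/ followed by the target URL, though the endpoint's exact behavior (authentication, rate limits, response format) is an assumption here.

```python
# Hedged sketch of a "Save a Page" request URL; the endpoint's exact
# behavior is an assumption, not documented in this article.
SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_page_url(url):
    """URL that, when fetched, asks the Wayback Machine to capture `url`."""
    return SAVE_ENDPOINT + url
```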

As of December 2014, the Wayback Machine contained 435 billion web pages—almost nine petabytes of data, and was growing at about 20 terabytes a week. [29] [12] [30]

As of July 2016, the Wayback Machine reportedly contained around 15 petabytes of data. [31]

As of September 2018, the Wayback Machine contained more than 25 petabytes of data. [32] [33]


Between October 2013 and March 2015, the website's global Alexa rank changed from 163 [34] to 208. [35]

[Chart: Wayback Machine growth by year, in billions of pages archived. [36] [37] ]

Website exclusion policy

Historically, the Wayback Machine has respected the robots exclusion standard (robots.txt) in determining whether a website would be crawled and, if already crawled, whether its archives would be publicly viewable. Website owners could opt out of the Wayback Machine through the use of robots.txt, and the rules were applied retroactively: if a site blocked the Internet Archive, any previously archived pages from the domain were immediately rendered unavailable as well. In addition, the Internet Archive stated that "Sometimes a website owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests." [38] The website also says: "The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection." [39] [40]
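
The robots.txt mechanics can be sketched with Python's standard urllib.robotparser. The user-agent token "ia_archiver" is an assumption in this sketch (it is the token historically associated with Alexa/Internet Archive crawls).

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt that blocks the archive's crawler site-wide while only
# restricting other crawlers from /private/. "ia_archiver" is assumed to be
# the archive's user-agent token for this sketch.
robots_txt = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

archive_allowed = parser.can_fetch("ia_archiver", "http://example.com/page.html")
other_allowed = parser.can_fetch("SomeOtherBot", "http://example.com/page.html")
other_private = parser.can_fetch("SomeOtherBot", "http://example.com/private/x")
```

Under the historical policy described above, the site-wide Disallow rule for the archive's crawler would not only stop future crawls but also retroactively hide already-archived pages.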

Oakland Archive Policy

Wayback's retroactive exclusion policy is based in part upon Recommendations for Managing Removal Requests and Preserving Archival Integrity, published by the School of Information Management and Systems at the University of California, Berkeley, in 2002, which gives a website owner the right to block access to the site's archives. [41] Wayback has complied with this policy to help avoid expensive litigation. [42]

The Wayback retroactive exclusion policy began to relax in 2017, when it stopped honoring robots.txt on U.S. government and military web sites for both crawling and displaying web pages. As of April 2017, Wayback is ignoring robots.txt more broadly, not just for U.S. government websites. [43] [44] [45] [46]

Uses


From its public launch in 2001, the Wayback Machine has been studied by scholars both for the ways it stores and collects data and for the actual pages contained in its archive. As of 2013, scholars had written about 350 articles on the Wayback Machine, mostly from the information technology, library science, and social science fields. Social science scholars have used the Wayback Machine to analyze how the development of websites from the mid-1990s to the present has affected companies' growth. [12]

When the Wayback Machine archives a page, it usually includes most of its hyperlinks, keeping those links active even though they could easily have been broken by the Internet's instability. Researchers in India studied the Wayback Machine's effectiveness at preserving hyperlinks in online scholarly publications and found that it saved slightly more than half of them. [47]

Journalists use the Wayback Machine to view dead websites, dated news reports, and changes to website contents. Its content has been used to hold politicians accountable and expose battlefield lies. [48] In 2014, an archived social media page of Igor Girkin, a separatist rebel leader in Ukraine, showed him boasting about his troops having shot down a suspected Ukrainian military airplane before it became known that the plane actually was a civilian Malaysia Airlines jet; he then deleted the post and blamed Ukraine's military for downing the plane. [48] [49] In 2017, the March for Science originated from a discussion on Reddit, in which a user noted that a visit to Archive.org showed that all references to climate change had been deleted from the White House website. In response, a user commented, "There needs to be a Scientists' March on Washington". [50] [51] [52]

Furthermore, the site is used heavily for verification by Wikipedia editors, providing access to references and supporting content creation.[ citation needed ]

Limitations


Despite its capabilities, the Wayback Machine has some limitations. In 2014, there was a six-month lag between when a website was crawled and when the capture became available for viewing in the Wayback Machine. [53] Currently, the lag time is 3 to 10 hours. [54] The Wayback Machine is not a "historical Google": users must know the URL of the website they want to see. [55] It does, however, have a "Site Search" feature that allows users to find a site based on words describing it, rather than on words found in the web pages themselves.

The Wayback Machine does not include every web page ever made, owing to the limitations of its web crawler. It cannot completely archive web pages that contain interactive features such as Flash content and forms written in JavaScript, because those features require interaction with the host website. The crawler has difficulty extracting anything not coded in HTML (or one of its variants), which often results in broken hyperlinks and missing images. Furthermore, the crawler cannot archive "orphan pages" that are not linked to by other pages. [56] [55] Because the crawler follows a predetermined number of hyperlinks, based on a preset depth limit, it cannot archive every hyperlink on every page. [16]
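
The depth-limit and orphan-page limitations can be illustrated with a toy breadth-first crawler over an in-memory link graph; the page names are invented.

```python
from collections import deque

# Toy link graph; nothing links to "orphan", so no crawl can discover it.
links = {
    "home": ["a", "b"],
    "a": ["c"],
    "c": ["d"],
    "b": [],
    "orphan": [],
}

def crawl(seed, max_depth):
    """Breadth-first crawl from `seed`, following links up to `max_depth`."""
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        page, depth = queue.popleft()
        if depth == max_depth:
            continue  # preset depth limit: do not follow links any further
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))
    return seen
```

With a depth limit of 2, page "d" is never reached even though it is linked, and "orphan" is unreachable at any depth, mirroring the two limitations described above.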

Some owners place a robots.txt file on their website, which prevents the Wayback Machine from discovering and archiving it. Furthermore, website owners can contact the Internet Archive directly and request that their pages be excluded from the archive. [56]

Civil litigation

Netbula LLC v. Chordiant Software Inc.

In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website that was causing the Wayback Machine to retroactively remove access to previous versions of pages it had archived from Netbula's site, pages that Chordiant believed would support its case. [57]

Netbula objected to the motion on the ground that the defendants were asking to alter Netbula's website and that they should have subpoenaed Internet Archive for the pages directly. [58] However, an employee of Internet Archive filed a sworn statement supporting Chordiant's motion, stating that the Archive could not produce the web pages by any other means "without considerable burden, expense and disruption to its operations." [57]

Magistrate Judge Howard Lloyd in the Northern District of California, San Jose Division, rejected Netbula's arguments and ordered them to disable the robots.txt blockage temporarily in order to allow Chordiant to retrieve the archived pages that they sought. [57]

Telewizja Polska

In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. R. Evid. Serv. 673 (N.D. Ill. October 15, 2004), a litigant attempted to use the Wayback Machine archives as a source of admissible evidence, perhaps for the first time. Telewizja Polska is the provider of TVP Polonia, and EchoStar operates the Dish Network. Prior to the trial proceedings, EchoStar indicated that it intended to offer Wayback Machine snapshots as proof of the past content of Telewizja Polska's website. Telewizja Polska brought a motion in limine to suppress the snapshots on the grounds of hearsay and unauthenticated source, but Magistrate Judge Arlander Keys rejected Telewizja Polska's assertion of hearsay and denied TVP's motion in limine to exclude the evidence at trial. [59] [60] At the trial, however, District Court Judge Ronald Guzman, the trial judge, overruled Magistrate Keys's findings,[ citation needed ] and held that neither the affidavit of the Internet Archive employee nor the underlying pages (i.e., the Telewizja Polska website) were admissible as evidence. Judge Guzman reasoned that the employee's affidavit contained both hearsay and inconclusive supporting statements, and that the purported web page printouts were not self-authenticating.[ citation needed ]

Patent law

Provided some additional requirements are met (e.g., providing an authoritative statement of the archivist), the United States Patent and Trademark Office and the European Patent Office will accept date stamps from the Internet Archive as evidence of when a given web page was accessible to the public. These dates are used to determine whether a web page is available as prior art, for instance in examining a patent application. [61]

Limitations of utility

There are technical limitations to archiving a website, and as a consequence, it is possible for opposing parties in litigation to misuse the results provided by website archives. This problem can be exacerbated by the practice of submitting screenshots of web pages in complaints, answers, or expert witness reports when the underlying links are not exposed; such screenshots can therefore contain errors. For example, archives such as the Wayback Machine do not fill out forms, and therefore do not include the contents of non-RESTful e-commerce databases in their archives. [62]

In Europe, the Wayback Machine could be interpreted as violating copyright laws. Only the content creator can decide where their content is published or duplicated, so the Archive would have to delete pages from its system upon request of the creator. [63] The exclusion policies for the Wayback Machine may be found in the FAQ section of the site. [64]

A number of cases have been brought against the Internet Archive specifically for its Wayback Machine archiving efforts.

Scientology


In late 2002, the Internet Archive removed various sites that were critical of Scientology from the Wayback Machine. [65] An error message stated that this was in response to a "request by the site owner". [66] Later, it was clarified that lawyers from the Church of Scientology had demanded the removal and that the site owners did not want their material removed. [67]

Healthcare Advocates, Inc.

In 2003, Harding Earley Follmer & Frailey defended a client in a trademark dispute using the Archive's Wayback Machine. The attorneys were able to demonstrate that the claims made by the plaintiff were invalid, based on the content of the plaintiff's website from several years prior. The plaintiff, Healthcare Advocates, then amended its complaint to include the Internet Archive, accusing the organization of copyright infringement as well as violations of the DMCA and the Computer Fraud and Abuse Act. Healthcare Advocates claimed that, since it had installed a robots.txt file on its website, even though only after the initial lawsuit was filed, the Archive should have removed all previous copies of the plaintiff's website from the Wayback Machine; however, some material continued to be publicly visible on Wayback. [68] The lawsuit was settled out of court after Wayback fixed the problem. [69]

Suzanne Shell

Activist Suzanne Shell filed suit in December 2005, demanding Internet Archive pay her US$100,000 for archiving her website profane-justice.org between 1999 and 2004. [70] [71] Internet Archive filed a declaratory judgment action in the United States District Court for the Northern District of California on January 20, 2006, seeking a judicial determination that Internet Archive did not violate Shell's copyright. Shell responded and brought a countersuit against Internet Archive for archiving her site, which she alleged violated her terms of service. [72] On February 13, 2007, a judge for the United States District Court for the District of Colorado dismissed all counterclaims except breach of contract. [71] The Internet Archive did not move to dismiss the copyright infringement claims that Shell asserted arising out of its copying activities, which also went forward. [73]

On April 25, 2007, Internet Archive and Suzanne Shell jointly announced the settlement of their lawsuit. [70] The Internet Archive said it "...has no interest in including materials in the Wayback Machine of persons who do not wish to have their Web content archived. We recognize that Ms Shell has a valid and enforceable copyright in her Web site and we regret that the inclusion of her Web site in the Wayback Machine resulted in this litigation." Shell said, "I respect the historical value of Internet Archive's goal. I never intended to interfere with that goal nor cause it any harm." [74]

Daniel Davydiuk

Between 2013 and 2016, a pornographic actor tried to remove archived images of himself from the Wayback Machine's archive, first by sending multiple DMCA requests to the archive, and then by appealing to the Federal Court of Canada. [75] [76] [77]

Censorship and other threats

Archive.org is currently blocked in China. [78] [79] After the site enabled the encrypted HTTPS protocol, the Internet Archive was blocked in its entirety in Russia in 2015. [80] [81] [48] [ needs update? ]

Alison Macrina, director of the Library Freedom Project, notes that "while librarians deeply value individual privacy, we also strongly oppose censorship". [48]

In rare known cases, the website has disabled online access to content that put people in danger "for nothing". [48]

Other threats include natural disasters, [82] destruction (remote or physical),[ citation needed ] manipulation of the archive's contents (see also: cyberattack, backup), problematic copyright laws [83] and surveillance of the site's users. [84]

Kevin Vaughan suspects that, on a timescale of multiple generations, "next to nothing" will survive in a useful way, although "if we have continuity in our technological civilization", then "a lot of the bare data will remain findable and searchable". [85]

Some[ who? ] observe that the Internet Archive, which describes itself as built for the long term, [86] is working furiously to capture data before it disappears, without any long-term infrastructure to speak of. [87]

References

  1. "Archive.org Traffic, Demographics and Competitors – Alexa". www.alexa.com. Retrieved February 4, 2019.
  2. "WayBackMachine.org WHOIS, DNS, & Domain Info – DomainTools". WHOIS. Retrieved March 13, 2016.
  3. "InternetArchive.org WHOIS, DNS, & Domain Info – DomainTools". WHOIS. Retrieved March 13, 2016.
  4. Notess, Greg R. (March–April 2002). "The Wayback Machine: The Web's Archive". Online. 26: 59–61 – via EBSCOhost.
  5. "The Wayback Machine", Frequently Asked Questions, archived from the original on September 18, 2018, retrieved September 18, 2018.
  6. "20,000 Hard Drives on a Mission | Internet Archive Blogs". blog.archive.org. Archived from the original on October 20, 2018. Retrieved October 15, 2018.
  7. Green, Heather (February 28, 2002). "A Library as Big as the World". BusinessWeek. Archived from the original on December 20, 2011.
  8. Tong, Judy (September 8, 2002). "Responsible Party – Brewster Kahle; A Library Of the Web, On the Web". The New York Times. Archived from the original on February 20, 2011. Retrieved August 15, 2011.
  9. "Internet Archive: Wayback Machine". archive.org. Archived from the original on January 3, 2014. Retrieved October 15, 2018.
  10. Cook, John (November 1, 2001). "Web site takes you way back in Internet history". Seattle Post-Intelligencer. Archived from the original on August 12, 2014. Retrieved August 15, 2011.
  11. "Wayback Goes Way Back on Web". Wired. October 28, 2001. Archived from the original on October 16, 2017. Retrieved October 16, 2017.
  12. Arora, Sanjay K.; Li, Yin; Youtie, Jan; Shapira, Philip (May 5, 2015). "Using the wayback machine to mine websites in the social sciences: A methodological resource". Journal of the Association for Information Science and Technology. 67 (8): 1904–1915. doi:10.1002/asi.23503. ISSN 2330-1635.
  13. Leetaru, Kalev (January 28, 2016). "The Internet Archive Turns 20: A Behind the Scenes Look at Archiving the Web". Forbes. Archived from the original on October 16, 2017. Retrieved October 16, 2017.
  14. Kahle, Brewster. "Archiving the Internet". Scientific American, March 1997 issue. Archived from the original on April 3, 2012. Retrieved August 19, 2011.
  15. Jeff Kaplan (October 27, 2014). "Archive-It: Crawling the Web Together". Internet Archive Blogs. Archived from the original on October 12, 2017. Retrieved October 16, 2017.
  16. 1 2 "Worldwide Web Crawls". Internet Archive. Archived from the original on October 19, 2017. Retrieved October 16, 2017.
  17. "Wide Crawl Number 13". Internet Archive. Archived from the original on October 19, 2017. Retrieved October 16, 2017.
  18. "Internet Archive: Petabox". archive.org. Retrieved October 25, 2018.
  19. Kanellos, Michael (July 29, 2005). "Big storage on the cheap". CNET News.com. Archived from the original on April 3, 2007. Retrieved July 29, 2007.
  20. "Internet Archive and Sun Microsystems Create Living History of the Internet". Sun Microsystems. March 25, 2009. Archived from the original on March 26, 2009. Retrieved March 27, 2009.
  21. Mearian, Lucas (March 19, 2009). "Internet Archive to unveil massive Wayback Machine data center". Computerworld.com. Archived from the original on March 23, 2009. Retrieved March 22, 2009.
  22. "Updated Wayback Machine in Beta Testing". Archive.org. Archived from the original on August 23, 2011. Retrieved August 19, 2011.
  23. "Beta Wayback Machine, in forum". Archive.org. Archived from the original on April 17, 2014. Retrieved April 16, 2014.
  24. "Internet Archive Forums: 6th pair of racks go into service: over 2PB of data space used". archive.org. Archived from the original on October 24, 2016. Retrieved October 25, 2018.
  25. "Wayback Machine: Now with 240,000,000,000 URLs | Internet Archive Blogs". Blog.archive.org. January 9, 2013. Archived from the original on April 14, 2014. Retrieved April 16, 2014.
  26. Rossi, Alexis (October 25, 2013). "Fixing Broken Links on the Internet". archive.org. San Francisco, CA, US: Collections Team, the Internet Archive. Archived from the original on November 7, 2014. Retrieved March 25, 2015. We have added the ability to archive a page instantly and get back a permanent URL for that page in the Wayback Machine. This service allows anyone – wikipedia editors, scholars, legal professionals, students, or home cooks like me – to create a stable URL to cite, share or bookmark any information they want to still have access to in the future.
  27. The VirusTotal Team (March 25, 2015). " IP address information". virustotal.com. Dublin 2, Ireland: VirusTotal. Archived from the original on July 14, 2014. Retrieved March 25, 2015. 2015-03-25: Latest URLs hosted in this IP address detected by at least one URL scanner or malicious URL dataset. ... 2/62 2015-03-25 16:14:12 [complete URL redacted]/Renegotiating_TLS.pdf ... 1/62 2015-03-25 04:46:34 [complete URL redacted]/CBLightSetup.exe
  28. Advisory provided by Google (March 25, 2015). "Safe Browsing Diagnostic page for archive.org". google.com/safebrowsing. Mountain View, CA, US: Google. Archived from the original on April 6, 2015. Retrieved March 25, 2015. 2015-03-25: Part of this site was listed for suspicious activity 138 time(s) over the past 90 days. ... What happened when Google visited this site? ... Of the 42410 pages we tested on the site over the past 90 days, 450 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2015-03-25, and the last time suspicious content was found on this site was on 2015-03-25. ... Malicious software includes 169 trojan(s), 126 virus, 43 backdoor(s).
  29. "Internet Archive Frequently Asked Questions". Archived from the original on October 21, 2009. Retrieved January 17, 2015.
  30. "Internet Archive Frequently Asked Questions". December 18, 2014. Archived from the original on December 18, 2014. Retrieved December 13, 2018.
  31. "Can the manipulation of big data change the way the world thinks?". The National. Archived from the original on January 12, 2017. Retrieved May 14, 2017.
  32. Crockett, Zachary (September 28, 2018). "Inside Wayback Machine, the internet's time capsule". The Hustle. Archived from the original on October 2, 2018. Retrieved October 26, 2018.
  33. Heffernan, Virginia (September 18, 2018). "Things Break and Decay on the Internet—That's a Good Thing". WIRED. Archived from the original on September 25, 2018. Retrieved October 26, 2018.
  34. "Archive.org Site Info". Alexa Internet. Archived from the original on October 28, 2013. Retrieved October 29, 2013.
  35. "Archive.org Site Overview". Alexa Internet. Archived from the original on April 9, 2015. Retrieved April 9, 2015.
  36. michelle (May 9, 2014). "Wayback Machine Hits 400,000,000,000!". Internet Archive. Archived from the original on August 26, 2014. Retrieved March 25, 2015.
  37. "Internet Archive Wayback Machine". Internet Archive. Archived from the original on February 13, 2015. Retrieved March 25, 2015.
  38. Some sites are not available because of Robots.txt or other exclusions Archived April 15, 2011, at the Wayback Machine
  39. How can I remove my site's pages from the Wayback Machine? Archived April 17, 2014, at the Wayback Machine
  40. Cox, Joseph (May 22, 2018). "The Wayback Machine Is Deleting Evidence of Malware Sold to Stalkers". Archived from the original on May 23, 2018. Retrieved May 23, 2018.
  41. "Recommendations for Managing Removal Requests And Preserving Archival Integrity". University of California. December 14, 2002. Archived from the original on September 18, 2017. Retrieved September 14, 2017.
  42. "Retroactive robots.txt removal of past crawls AKA Oakland Archive Policy". Internet Archive. July 7, 2014. Archived from the original on October 10, 2017. Retrieved September 14, 2017.
  43. Mark Graham (April 17, 2017). "Robots.txt meant for search engines don't work well for web archives". Internet Archive Blogs. Archived from the original on April 17, 2017. Retrieved April 16, 2017.
  44. "Archivierung des Internets: Internet Archive ignoriert künftig robots.txt" (in German). heise online. Archived from the original on April 27, 2017. Retrieved May 14, 2017.
  45. "Suchmaschinen: Internet Archive will künftig Robots.txt-Einträge ignorieren – Golem.de" (in German). Archived from the original on June 19, 2017. Retrieved May 14, 2017.
  46. "Internet Archive will ignore robots.txt files to keep historical record accurate". Digital Trends. April 24, 2017. Archived from the original on May 16, 2017. Retrieved May 14, 2017.
  47. Sampath Kumar, B.T.; Prithviraj, K.R. (October 21, 2014). "Bringing life to dead: Role of Wayback Machine in retrieving vanished URLs". Journal of Information Science. 41 (1): 71–81. doi:10.1177/0165551514552752. ISSN   0165-5515.
  48. 1 2 3 4 5 "Wayback Machine Won't Censor Archive for Taste, Director Says After Olympics Article Scrubbed". Archived from the original on January 6, 2017. Retrieved May 14, 2017.
  49. "What the Web Said Yesterday". The New Yorker. Archived from the original on January 25, 2015. Retrieved May 14, 2017.
  50. "The March for Science began with this person's 'throwaway line' on Reddit". Washington Post. Archived from the original on April 23, 2017. Retrieved April 23, 2017.
  51. "Are scientists going to march on Washington?". The Washington Post. Archived from the original on January 31, 2017. Retrieved January 31, 2017.
  52. Foley, Katherine Ellen. "The global March for Science started with a single Reddit thread". Quartz. Archived from the original on April 24, 2017. Retrieved April 23, 2017.
  53. "Internet Archive Frequently Asked Questions". Internet Archive. April 2, 2014. Archived from the original on 2014-04-02. Retrieved November 23, 2018.
  54. "Internet Archive Frequently Asked Questions". archive.org. Retrieved November 23, 2018.
  55. 1 2 Bates, Mary Ellen (2002). "The Wayback Machine". Online. 26: 80 via EBSCOhost.
  56. 1 2 "Internet Archive Frequently Asked Questions". archive.org. Archived from the original on April 20, 2013. Retrieved October 18, 2018.
  57. 1 2 3 Lloyd, Howard (October 2009). "Order to Disable Robots.txt" (PDF). Retrieved October 15, 2009.
  58. Cortes, Antonio (October 2009). "Motion Opposing Removal of Robots.txt". Archived from the original on October 27, 2010. Retrieved October 15, 2009.
  59. Gelman, Lauren (November 17, 2004). "Internet Archive's Web Page Snapshots Held Admissible as Evidence". Packets. 2 (3). Archived from the original on April 30, 2011. Retrieved January 4, 2007.
  60. Howell, Beryl A. (February 2006). "Proving Web History: How to use the Internet Archive" (PDF). Journal of Internet Law: 3–9. Archived from the original (PDF) on July 5, 2010. Retrieved August 6, 2008.
  61. Wynn W. Coggins (Fall 2002). "Prior Art in the Field of Business Method Patents – When is an Electronic Document a Printed Publication for Prior Art Purposes?". USPTO. Archived from the original on September 21, 2012.
  62. "Debunking the Wayback Machine". Archived from the original on June 29, 2010.
  63. Bahr, Martin (2002). "The Wayback Machine und Google Cache - eine Verletzung deutschen Urheberrechts?". JurPC (in German). doi:10.7328/jurpcb/20021719. Archived from the original on August 23, 2009.
  64. "Internet Archive FAQ". Archive.org. Archived from the original on April 17, 2014. Retrieved April 16, 2014.
  65. Bowman, Lisa M (September 24, 2002). "Net archive silences Scientology critic". CNET News.com. Archived from the original on May 15, 2012. Retrieved January 4, 2007.
  66. Jeff (September 23, 2002). "exclusions from the Wayback Machine" (Blog). Wayback Machine Forum. Internet Archive. Archived from the original on February 11, 2007. Retrieved January 4, 2007.Author and Date indicate initiation of forum thread.
  67. Miller, Ernest. "Sherman, Set the Wayback Machine for Scientology". LawMeme. Yale Law School. Archived from the original (Blog) on November 16, 2012. Retrieved January 4, 2007.
  68. Dye, Jessica (2005). "Website Sued for Controversial Trip into Internet Past". EContent. 28. (11): 8–9.
  69. Bangeman, Eric (August 31, 2006). "Internet Archive Settles Suit Over Wayback Machine". Ars technica. Archived from the original on November 5, 2007. Retrieved November 29, 2007.
  70. 1 2 Internet Archive v. Shell, 505 F.Supp.2d 755at justia.com , 1:2006cv01726( Colorado District Court August 31, 2006)("'April 25, 2007 Settlement agreement announced.' Filing 65, 2007-04-30: '...therefore ORDERED that this matter shall be DISMISSED WITH PREJUDICE...'").
  71. 1 2 Babcock, Lewis T., Chief Judge (February 13, 2007). "Internet Archive v. Shell Civil Action No. 06cv01726LTBCBS" (PDF). Archived (PDF) from the original on January 25, 2014. Retrieved March 25, 2015. 1) Internet Archive's motion to dismiss Shell's counterclaim for conversion and civil theft (Second Cause of Action) is GRANTED, 2) Internet Archive's motion to dismiss Shell's counterclaim for breach of contract (Third Cause of Action) is DENIED; 3) Internet Archive's motion to dismiss Shell's counterclaim for Racketeering under RICO and COCCA (Fourth Cause of Action) is GRANTED.
  72. Claburn, Thomas (March 16, 2007). "Colorado Woman Sues To Hold Web Crawlers To Contracts". New York, NY, US: InformationWeek , UBM Tech, UBM LLC. Archived from the original on September 4, 2014. Retrieved March 25, 2015. Computers can enter into contracts on behalf of people. The Uniform Electronic Transactions Act (UETA) says that a 'contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms and agreements.'
  73. Samson, Martin H., Phillips Nizer LLP (2007). "Internet Archive v. Suzanne Shell". internetlibrary.com. Internet Library of Law and Court Decisions. Archived from the original on August 3, 2014. Retrieved March 25, 2015. More importantly, held the court, Internet Archive's mere copying of Shell's site, and display thereof in its database, did not constitute the requisite exercise of dominion and control over defendant's property. Importantly, noted the court, the defendant at all times owned and operated her own site. Said the Court: 'Shell has failed to allege facts showing that Internet Archive exercised dominion or control over her website, since Shell's complaint states explicitly that she continued to own and operate the website while it was archived on the Wayback machine. Shell identifies no authority supporting the notion that copying documents is by itself enough of a deprivation of use to support conversion. Conversely, numerous circuits have determined that it is not.'
  74. brewster (April 25, 2007). "Internet Archive and Suzanne Shell Settle Lawsuit". archive.org. Denver, CO, USA: Internet Archive. Archived from the original on December 5, 2010. Retrieved March 25, 2015. Both parties sincerely regret any turmoil that the lawsuit may have caused for the other. Neither Internet Archive nor Ms Shell condones any conduct which may have caused harm to either party arising out of the public attention to this lawsuit. The parties have not engaged in such conduct and request that the public response to the amicable resolution of this litigation be consistent with their wishes that no further harm or turmoil be caused to either party.
  75. Stobbe, Richard (5 December 2014). "Copyright Implications Of A "Right To Be Forgotten"? Or How To Take-Down The Internet Archive". Mondaq. Retrieved 8 March 2019.
  76. McVeigh, Glennys (16 October 2014). Philpott, James; Weissman, Adam; Bucholz, Ren; Kettles, Brent; Pearl, Aaron, eds. "Davydiuk v. Internet Archive Canada, 2014 FC 944". CanLII . Federation of Law Societies of Canada . Retrieved 8 March 2019.
  77. Southcott, Richard F. (30 November 2016). Philpott, John; Alton, Alex; Bucholz, Ren, eds. "Davydiuk v. Internet Archive Canada and Internet Archive, 2016 FC 1313 (CanLII)". CanLII . Ottawa, Ontario: Federation of Law Societies of Canada . Retrieved 8 March 2019.
  78. Conger, Kate. "Backing up the history of the internet in Canada to save it from Trump". TechCrunch. Archived from the original on December 27, 2016. Retrieved May 14, 2017.
  79. "Where to find what's disappeared online, and a whole lot more: the Internet Archive". Public Radio International. Archived from the original on March 28, 2017. Retrieved May 14, 2017.
  80. Chirgwin, Richard. "There's no Wayback in Russia: Putin blocks Archive.org". Archived from the original on October 7, 2016. Retrieved May 14, 2017.
  81. "Russia won't go Wayback, blocks the Internet Archive". Digital Trends. June 26, 2015. Archived from the original on April 17, 2016. Retrieved May 14, 2017.
  82. "Help Us Keep the Archive Free, Accessible, and Reader Private | Internet Archive Blogs". Archived from the original on May 21, 2017. Retrieved May 14, 2017.
  83. "Internet Archive: Proposed Changes To DMCA Would Make Us "Censor The Web"". Consumerist. June 7, 2016. Archived from the original on November 11, 2016. Retrieved May 14, 2017.
  84. Herb, Ulrich. "Die Trump-Angst grassiert" (in German). heise online. Archived from the original on December 7, 2016. Retrieved May 14, 2017.
  85. LaFrance, Adrienne. "The Internet's Dark Ages". The Atlantic. Archived from the original on May 7, 2017. Retrieved May 14, 2017.
  86. "The Entire Internet Will Be Archived In Canada to Protect It From Trump". Motherboard. Archived from the original on May 16, 2017. Retrieved May 14, 2017.
  87. LaFrance, Adrienne. "The Human Fear of Total Knowledge". The Atlantic. Archived from the original on December 2, 2016. Retrieved May 14, 2017.