Launched: October 24, 2001
Written in: Java, Python
The Wayback Machine is a digital archive of the World Wide Web and other information on the Internet. It was launched in 2001 by the Internet Archive, a nonprofit organization based in San Francisco, California, United States.
An archive is an accumulation of historical records or the physical place they are located. Archives contain primary source documents that have accumulated over the course of an individual or organization's lifetime, and are kept to show the function of that person or organization. Professional archivists and historians generally understand archives to be records that have been naturally and necessarily generated as a product of regular legal, commercial, administrative, or social activities. They have been metaphorically defined as "the secretions of an organism", and are distinguished from documents that have been consciously written or created to communicate a particular message to posterity.
The World Wide Web (WWW), commonly known as the Web, is an information system where documents and other web resources are identified by Uniform Resource Locators, which may be interlinked by hypertext, and are accessible over the Internet. The resources of the WWW may be accessed by users by a software application called a web browser.
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
Internet Archive founders Brewster Kahle and Bruce Gilliat launched the Wayback Machine in 2001 to address the problem of website content vanishing whenever it is changed or shut down. The service enables users to see archived versions of web pages across time, which the archive calls a "three dimensional index". Kahle and Gilliat created the machine hoping to archive the entire Internet and provide "universal access to all knowledge."
The Internet Archive is a San Francisco-based nonprofit digital library with the stated mission of "universal access to all knowledge." It provides free public access to collections of digitized materials, including websites, software applications/games, music, movies/videos, moving images, and millions of public-domain books. In addition to its archiving function, the Archive is an activist organization, advocating for a free and open Internet.
Brewster Kahle is an American computer engineer, Internet entrepreneur, internet activist, advocate of universal access to all knowledge, and digital librarian. Kahle founded the Internet Archive and Alexa. In 2012 he was inducted into the Internet Hall of Fame.
Bruce Gilliat is co-founder and former chief executive officer of Alexa Internet.
The name Wayback Machine was chosen as a reference to the "WABAC machine" (pronounced way-back), a time-traveling device used by the characters Mr. Peabody and Sherman in The Rocky and Bullwinkle Show, an animated cartoon. In one of the animated cartoon's component segments, Peabody's Improbable History, the characters routinely used the machine to witness, participate in, and, more often than not, alter famous events in history.
The WABAC Machine or Wayback Machine is a fictional time machine from the segment "Peabody's Improbable History", a recurring feature of the 1960s cartoon series The Rocky and Bullwinkle Show. The WABAC Machine is a plot device used to transport the characters Mr. Peabody and Sherman back in time to visit important events in human history.
The Wayback Machine began archiving cached web pages in 1996, with the goal of making the service public five years later. From 1996 to 2001, the information was kept on digital tape, with Kahle occasionally allowing researchers and scientists to tap into the clunky database. When the archive reached its fifth anniversary in 2001, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. By the time the Wayback Machine launched, it already contained over 10 billion archived pages.
In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
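The hit/miss behavior described above can be sketched in a few lines of Python; the function and variable names are illustrative only and not drawn from any particular system.

```python
# Minimal illustration of cache hits and misses: results of an expensive
# computation are stored in a dictionary so repeated requests are served
# without recomputing.
import time

cache = {}

def slow_square(n):
    time.sleep(0.1)          # stand-in for an expensive computation
    return n * n

def cached_square(n):
    if n in cache:           # cache hit: serve the stored result
        return cache[n]
    result = slow_square(n)  # cache miss: compute, then store
    cache[n] = result
    return result

print(cached_square(12))     # miss: takes about 0.1 s
print(cached_square(12))     # hit: returns immediately
```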
The University of California, Berkeley is a public research university in Berkeley, California. It was founded in 1868 and serves as the flagship campus of the ten campuses of the University of California. Berkeley has since grown to instruct over 40,000 students in approximately 350 undergraduate and graduate degree programs covering numerous disciplines.
Today, the data is stored on the Internet Archive's large cluster of Linux nodes. The service revisits and archives new versions of websites on occasion (see technical details below). Sites can also be captured manually by entering a website's URL into the search box, provided that the website allows the Wayback Machine to "crawl" it and save the data.
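Existing captures can also be looked up programmatically. The sketch below, in Python, assumes the Wayback Machine's availability endpoint and its JSON fields keep their publicly documented form (archive.org/wayback/available, with an archived_snapshots.closest object); treat those details as assumptions rather than guarantees.

```python
# Sketch: query the Wayback Machine availability API for the snapshot
# closest to a given timestamp (YYYYMMDD). Endpoint and field names are
# assumed from public documentation.
import json
import urllib.request

def closest_snapshot(url, timestamp="20010101"):
    query = ("https://archive.org/wayback/available?url=" + url +
             "&timestamp=" + timestamp)
    with urllib.request.urlopen(query) as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

print(closest_snapshot("example.com"))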
Linux is a family of open source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution.
A Uniform Resource Locator (URL), colloquially termed a web address, is a reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI), although many people use the two terms interchangeably. Thus http://www.example.com is a URL, while www.example.com is not. URLs occur most commonly to reference web pages (http), but are also used for file transfer (ftp), email (mailto), database access (JDBC), and many other applications.
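The components such a URL carries can be inspected with Python's standard library; the example address below is purely illustrative.

```python
# Break a URL into the parts mentioned above (scheme, host, path, query, fragment).
from urllib.parse import urlparse

parts = urlparse("http://www.example.com/docs/page.html?q=archive#top")
print(parts.scheme)    # 'http'
print(parts.netloc)    # 'www.example.com'
print(parts.path)      # '/docs/page.html'
print(parts.query)     # 'q=archive'
print(parts.fragment)  # 'top'
```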
Software has been developed to "crawl" the web and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews (Usenet) bulletin board system, and downloadable software. The information collected by these "crawlers" does not include all the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. To overcome inconsistencies in partially cached websites, Archive-It.org was developed in 2005 by the Internet Archive as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content, and create digital archives.
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing.
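As a rough illustration of the idea, the following Python sketch crawls a handful of pages breadth-first while honoring robots.txt. It omits the politeness delays, per-host deduplication, retries, and robust parsing a production crawler needs, and the seed URL is purely illustrative.

```python
# Deliberately tiny crawler sketch: fetch pages breadth-first, collect links,
# and skip anything robots.txt disallows.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=5):
    robots = RobotFileParser()
    robots.set_url(urljoin(seed, "/robots.txt"))
    robots.read()
    queue, seen = deque([seed]), set()
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen or not robots.can_fetch("*", url):
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue                      # skip pages that fail to download
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            queue.append(urljoin(url, link))
    return seen

print(crawl("https://example.com/"))
```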
The Gopher protocol is a TCP/IP application layer protocol designed for distributing, searching, and retrieving documents over the Internet. The Gopher protocol was strongly oriented towards a menu-document design and presented an alternative to the World Wide Web in its early stages, but ultimately Hypertext Transfer Protocol (HTTP) became the dominant protocol. The Gopher ecosystem is often regarded as the effective predecessor of the World Wide Web.
Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose Unix-to-Unix Copy (UUCP) dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to Internet forums that are widely used today. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. The name comes from the term "users network".
Crawls are contributed from various sources, some imported from third parties and others generated internally by the Archive. For example, crawls are contributed by the Sloan Foundation and Alexa, crawls are run by the Internet Archive on behalf of NARA and the Internet Memory Foundation, and mirrors of Common Crawl are imported. The "Worldwide Web Crawls" have been running since 2010 and capture the global Web.
The frequency of snapshot captures varies per website. Websites in the "Worldwide Web Crawls" are included in a "crawl list", with the site archived once per crawl. A crawl can take months or even years to complete depending on size. For example, "Wide Crawl Number 13" started on January 9, 2015, and completed on July 11, 2016. However, there may be multiple crawls ongoing at any one time, and a site might be included in more than one crawl list, so how often a site is crawled varies widely.
As technology has developed over the years, the storage capacity of the Wayback Machine has grown. In 2003, after only two years of public access, the Wayback Machine was growing at a rate of 12 terabytes per month. The data is stored on PetaBox rack systems custom designed by Internet Archive staff. The first 100 TB rack became fully operational in June 2004, although it soon became clear that they would need much more storage than that.
The Internet Archive migrated its customized storage architecture to Sun Open Storage in 2009, and hosts a new data center in a Sun Modular Datacenter on Sun Microsystems' California campus. As of 2009, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month.
A new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing in 2011. In March that year, it was said on the Wayback Machine forum that "the Beta of the new Wayback Machine has a more complete and up-to-date index of all crawled materials into 2010, and will continue to be updated regularly. The index driving the classic Wayback Machine only has a little bit of material past 2008, and no further index updates are planned, as it will be phased out this year." Also in 2011, the Internet Archive installed their sixth pair of PetaBox racks, which increased the Wayback Machine's storage capacity by 700 terabytes.
In January 2013, the company announced a ground-breaking milestone of 240 billion URLs. In October 2013, the company announced the "Save a Page" feature, which allows any Internet user to archive the contents of a URL. The feature was subsequently abused by some users to host malicious binaries on the service.
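A capture of this kind can be triggered over plain HTTP. The sketch below assumes the publicly known /save/ endpoint path and the Content-Location response header; both are assumptions about current behavior rather than guarantees.

```python
# Sketch: ask the Wayback Machine to capture a URL via its public /save/
# endpoint, returning the location of the new snapshot when reported.
import urllib.request

def save_page(url):
    req = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "example-archiver/0.1"},  # illustrative UA string
    )
    with urllib.request.urlopen(req) as resp:
        # When present, Content-Location points at the freshly archived copy.
        return resp.headers.get("Content-Location", resp.geturl())

print(save_page("https://example.com/"))
```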
As of December 2014, the Wayback Machine contained 435 billion web pages (almost nine petabytes of data) and was growing at about 20 terabytes a week.
As of July 2016, the Wayback Machine reportedly contained around 15 petabytes of data.
As of September 2018, the Wayback Machine contained more than 25 petabytes of data.
Between October 2013 and March 2015, the website's global Alexa rank changed from 163 to 208. In March 2019, the rank was 244.
Historically, the Wayback Machine has respected the robots exclusion standard (robots.txt) in determining whether a website would be crawled, or, if already crawled, whether its archives would be publicly viewable. Website owners had the option to opt out of the Wayback Machine through the use of robots.txt. It applied robots.txt rules retroactively; if a site blocked the Internet Archive, any previously archived pages from the domain were immediately rendered unavailable as well. The Internet Archive also stated that "Sometimes a website owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests." The website adds: "The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection."
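The mechanics of such an exclusion can be illustrated with Python's standard robots.txt parser. The user-agent string "ia_archiver", commonly associated with Archive/Alexa crawling, is used here as an assumed example; the policy described above, not this sketch, governs how the service actually behaves today.

```python
# Parse an illustrative robots.txt rule aimed at one crawler and check which
# user agents it blocks.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: ia_archiver
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())
print(parser.can_fetch("ia_archiver", "https://example.com/page"))   # False
print(parser.can_fetch("othercrawler", "https://example.com/page"))  # True
```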
On April 17, 2017, reports surfaced that sites which had gone defunct and become parked domains were using robots.txt to exclude themselves from search engines, resulting in their being inadvertently excluded from the Wayback Machine as well.[citation needed] The Internet Archive changed its policy to require an explicit exclusion request before a site is removed from the Wayback Machine.
Wayback's retroactive exclusion policy is based in part upon Recommendations for Managing Removal Requests and Preserving Archival Integrity, published by the School of Information Management and Systems at the University of California, Berkeley in 2002, which gives a website owner the right to block access to the site's archives. Wayback has complied with this policy to help avoid expensive litigation.
The Wayback retroactive exclusion policy began to relax in 2017, when it stopped honoring robots.txt on U.S. government and military web sites for both crawling and displaying web pages. As of April 2017, Wayback is ignoring robots.txt more broadly, not just for U.S. government websites.
From its public launch in 2001, the Wayback Machine has been studied by scholars both for the ways it stores and collects data and for the actual pages contained in its archive. As of 2013, scholars had written about 350 articles on the Wayback Machine, mostly from the fields of information technology, library science, and the social sciences. Social science scholars have used the Wayback Machine to analyze how the development of websites from the mid-1990s to the present has affected companies' growth.
When the Wayback Machine archives a page, it usually includes most of the hyperlinks, keeping those links active when they just as easily could have been broken by the Internet's instability. Researchers in India studied the effectiveness of the Wayback Machine's ability to save hyperlinks in online scholarly publications and found that it saved slightly more than half of them.
Journalists use the Wayback Machine to view dead websites, dated news reports, and changes to website contents. Its content has been used to hold politicians accountable and expose battlefield lies. In 2014, an archived social media page of Igor Girkin, a separatist rebel leader in Ukraine, showed him boasting about his troops having shot down a suspected Ukrainian military airplane before it became known that the plane was actually a civilian Malaysia Airlines jet; he then deleted the post and blamed Ukraine's military for downing the plane. In 2017, the March for Science originated from a discussion on Reddit that indicated someone had visited Archive.org and discovered that all references to climate change had been deleted from the White House website. In response, a user commented, "There needs to be a Scientists' March on Washington".
Furthermore, the site is used heavily by Wikipedia editors for verification, providing access to references, and for content creation.[citation needed]
In 2014 there was a six-month lag time between when a website was crawled and when it became available for viewing in the Wayback Machine. Currently, the lag time is 3 to 10 hours. The Wayback Machine offers only limited search facilities. Its "Site Search" feature allows users to find a site based on words describing the site, rather than words found on the web pages themselves.
In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website that was causing the Wayback Machine to retroactively remove access to previous versions of pages it had archived from Netbula's site, pages that Chordiant believed would support its case.
Netbula objected to the motion on the ground that the defendants were asking to alter Netbula's website and that they should have subpoenaed Internet Archive for the pages directly. However, an employee of Internet Archive filed a sworn statement supporting Chordiant's motion, stating that it could not produce the web pages by any other means "without considerable burden, expense and disruption to its operations."
Magistrate Judge Howard Lloyd in the Northern District of California, San Jose Division, rejected Netbula's arguments and ordered them to disable the robots.txt blockage temporarily in order to allow Chordiant to retrieve the archived pages that they sought.
In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. R. Evid. Serv. 673 (N.D. Ill. October 15, 2004), a litigant attempted to use the Wayback Machine archives as a source of admissible evidence, perhaps for the first time. Telewizja Polska is the provider of TVP Polonia and EchoStar operates the Dish Network. Prior to the trial proceedings, EchoStar indicated that it intended to offer Wayback Machine snapshots as proof of the past content of Telewizja Polska's website. Telewizja Polska brought a motion in limine to suppress the snapshots on the grounds of hearsay and unauthenticated source, but Magistrate Judge Arlander Keys rejected Telewizja Polska's assertion of hearsay and denied TVP's motion in limine to exclude the evidence at trial.[citation needed] At the trial, however, District Court Judge Ronald Guzman, the trial judge, overruled Magistrate Keys' findings and held that neither the affidavit of the Internet Archive employee nor the underlying pages (i.e., the Telewizja Polska website) were admissible as evidence. Judge Guzman reasoned that the employee's affidavit contained both hearsay and inconclusive supporting statements, and that the purported web page printouts were not self-authenticating.[citation needed]
Provided some additional requirements are met (e.g., providing an authoritative statement of the archivist), the United States patent office and the European Patent Office will accept date stamps from the Internet Archive as evidence of when a given Web page was accessible to the public. These dates are used to determine if a Web page is available as prior art for instance in examining a patent application.
There are technical limitations to archiving a website, and as a consequence, it is possible for opposing parties in litigation to misuse the results provided by website archives. This problem can be exacerbated by the practice of submitting screenshots of web pages in complaints, answers, or expert witness reports, when the underlying links are not exposed and can therefore contain errors. For example, archives such as the Wayback Machine do not fill out forms and therefore do not include the contents of non-RESTful e-commerce databases in their archives.
In Europe, the Wayback Machine could be interpreted as violating copyright laws. Only the content creator can decide where their content is published or duplicated, so the Archive would have to delete pages from its system upon request of the creator. The exclusion policies for the Wayback Machine may be found in the FAQ section of the site.
A number of cases have been brought against the Internet Archive specifically for its Wayback Machine archiving efforts.
In late 2002, the Internet Archive removed various sites that were critical of Scientology from the Wayback Machine. An error message stated that this was in response to a "request by the site owner". Later, it was clarified that lawyers from the Church of Scientology had demanded the removal and that the site owners did not want their material removed.
In 2003, Harding Earley Follmer & Frailey defended a client from a trademark dispute using the Archive's Wayback Machine. The attorneys were able to demonstrate that the claims made by the plaintiff were invalid, based on the content of their website from several years prior. The plaintiff, Healthcare Advocates, then amended their complaint to include the Internet Archive, accusing the organization of copyright infringement as well as violations of the DMCA and the Computer Fraud and Abuse Act. Healthcare Advocates claimed that, since they had installed a robots.txt file on their website, even if it was installed after the initial lawsuit was filed, the Archive should have removed all previous copies of the plaintiff's website from the Wayback Machine; however, some material continued to be publicly visible on Wayback. The lawsuit was settled out of court after Wayback fixed the problem.
Activist Suzanne Shell filed suit in December 2005, demanding that Internet Archive pay her US$100,000 for archiving her website profane-justice.org between 1999 and 2004. Internet Archive filed a declaratory judgment action in the United States District Court for the Northern District of California on January 20, 2006, seeking a judicial determination that Internet Archive did not violate Shell's copyright. Shell responded and brought a countersuit against Internet Archive for archiving her site, which she alleged was in violation of her terms of service. On February 13, 2007, a judge for the United States District Court for the District of Colorado dismissed all counterclaims except breach of contract. The Internet Archive did not move to dismiss the copyright infringement claims Shell asserted arising out of its copying activities, which also went forward.
On April 25, 2007, Internet Archive and Suzanne Shell jointly announced the settlement of their lawsuit. The Internet Archive said it "...has no interest in including materials in the Wayback Machine of persons who do not wish to have their Web content archived. We recognize that Ms Shell has a valid and enforceable copyright in her Web site and we regret that the inclusion of her Web site in the Wayback Machine resulted in this litigation." Shell said, "I respect the historical value of Internet Archive's goal. I never intended to interfere with that goal nor cause it any harm."
Between 2013 and 2016, a pornographic actor named Daniel Davydiuk tried to remove archived images of himself from the Wayback Machine's archive, first by sending multiple DMCA requests to the archive, and then by appealing to the Federal Court of Canada.
Archive.org is currently blocked in China.[needs update?] After the site enabled the encrypted HTTPS protocol, the Internet Archive was blocked in its entirety in Russia in 2015.
Alison Macrina, director of the Library Freedom Project, notes that "while librarians deeply value individual privacy, we also strongly oppose censorship".
There are rare known cases in which the website has disabled online access to archived content that put people in danger.
Other threats include natural disasters,[citation needed] destruction (remote or physical), manipulation of the archive's contents (see also: cyberattack, backup), problematic copyright laws, and surveillance of the site's users.
Kevin Vaughan suspects that, over the long term of multiple generations, "next to nothing" will survive in a useful way, though "if we have continuity in our technological civilization" then "a lot of the bare data will remain findable and searchable".
The Atlantic has reported that the Internet Archive, which describes itself as being built for the long term, is working furiously to capture data before it disappears, without any long-term infrastructure to speak of.
The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a standard used by websites to communicate with web crawlers and other web robots. The standard specifies how to inform the web robot about which areas of the website should not be processed or scanned. Robots are often used by search engines to categorize websites. Not all robots cooperate with the standard; email harvesters, spambots, malware and robots that scan for security vulnerabilities may even start with the portions of the website where they have been told to stay out. The standard can be used in conjunction with Sitemaps, a robot inclusion standard for websites.
Search engine optimization (SEO) is the process of increasing the quality and quantity of website traffic by increasing the visibility of a website or a web page to users of a web search engine.
In the context of the World Wide Web, deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website, rather than the website's home page. The URL of such a link contains all the information needed to point to a particular item, in this case the "Example" section of the English Wikipedia article entitled "Deep linking", as opposed to only the information needed to point to the highest-level home page of Wikipedia.
goatse.cx, often referred to simply as "Goatse", was originally an Internet shock site. Its front page featured a picture, entitled hello.jpg, showing a naked man widely stretching his anus with both hands.
Alexa Internet, Inc. is an American web traffic analysis company based in San Francisco. It is a wholly owned subsidiary of Amazon.
Apache Nutch is a highly extensible and scalable open source web crawler software project.
The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search engines. The opposite term to the deep web is the surface web, which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with coining the term deep web in 2001 as a search indexing term.
The noindex value of an HTML robots meta tag requests that automated Internet bots avoid indexing a web page. Reasons why one might want to use this meta tag include advising robots not to index a very large database, web pages that are very transitory, web pages that are under development, web pages that one wishes to keep slightly more private, or the printer and mobile-friendly versions of pages. Since the burden of honoring a website's noindex tag lies with the author of the search robot, sometimes these tags are ignored. Also the interpretation of the noindex tag is sometimes slightly different from one search engine company to the next.
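As a sketch of how a well-behaved indexing bot might honor the tag, the following Python example scans a page's meta tags and skips indexing when a "noindex" directive is present; the sample page is illustrative.

```python
# Detect a <meta name="robots" content="noindex"> directive with the
# standard-library HTML parser and act on it.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            if "noindex" in (attrs.get("content") or "").lower():
                self.noindex = True

page = '<html><head><meta name="robots" content="noindex"></head></html>'
parser = RobotsMetaParser()
parser.feed(page)
print("skip indexing" if parser.noindex else "index page")
```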
The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs in the site. This allows search engines to crawl the site more efficiently and to find URLs that may be isolated from the rest of the site's content. The Sitemaps protocol is a URL inclusion protocol and complements robots.txt, a URL exclusion protocol.
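A minimal sitemap of this kind can be produced with standard tooling. The Python sketch below emits a single-URL sitemap using the sitemaps.org namespace; the URL and metadata values are illustrative.

```python
# Generate a tiny Sitemaps XML document with the standard library.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)

urlset = ET.Element("{%s}urlset" % NS)
url = ET.SubElement(urlset, "{%s}url" % NS)
ET.SubElement(url, "{%s}loc" % NS).text = "https://example.com/page.html"
ET.SubElement(url, "{%s}lastmod" % NS).text = "2019-01-01"
ET.SubElement(url, "{%s}changefreq" % NS).text = "monthly"
ET.SubElement(url, "{%s}priority" % NS).text = "0.5"

print(ET.tostring(urlset, encoding="unicode"))
```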
A web search engine or Internet search engine is a software system that is designed to carry out web search, which means to search the World Wide Web in a systematic way for particular information specified in a textual web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of links to web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web.
Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture due to the massive size and amount of information on the Web. The largest web archiving organization based on a bulk crawling approach is the Internet Archive, which strives to maintain an archive of the entire Web.
WebCite is an on-demand archiving service, designed to digitally preserve scientific and educationally important material on the web by making snapshots of Internet contents as they existed at the time when a blogger, or a scholar or a Wikipedia editor cited or quoted from it. The preservation service enables verifiability of claims supported by the cited sources even when the original web pages are being revised, removed, or disappear for other reasons, an effect known as link rot.
BotSeer was a Web-based information system and search tool used for research on Web robots and trends in Robot Exclusion Protocol deployment and adherence. It was created and designed by Yang Sun, Isaac G. Councill, Ziming Zhuang and C. Lee Giles. BotSeer is now inactive; the original URL was https://web.archive.org/web/20100208214818/http://botseer.ist.psu.edu/
Deletionpedia is an online archive wiki containing articles deleted from the English Wikipedia. Its version of each article includes a header with more information about the deletion such as whether a speedy deletion occurred, where the deletion discussion about the article can be found and which editor deleted the article. The original Deletionpedia operated from February to September 2008. The site was restarted under new management in December 2013.
Blekko, trademarked as blekko (lowercase), was a company that provided a web search engine with the stated goal of providing better search results than those offered by Google Search, with results gathered from a set of 3 billion trusted webpages and excluding such sites as content farms. The company's site, launched to the public on November 1, 2010, used slashtags to provide results for common searches. Blekko also offered a downloadable search bar. It was acquired by IBM in March 2015, and the service was discontinued.
We have added the ability to archive a page instantly and get back a permanent URL for that page in the Wayback Machine. This service allows anyone – wikipedia editors, scholars, legal professionals, students, or home cooks like me – to create a stable URL to cite, share or bookmark any information they want to still have access to in the future.
2015-03-25: Latest URLs hosted in this IP address detected by at least one URL scanner or malicious URL dataset. ... 2/62 2015-03-25 16:14:12 [complete URL redacted]/Renegotiating_TLS.pdf ... 1/62 2015-03-25 04:46:34 [complete URL redacted]/CBLightSetup.exe
2015-03-25: Part of this site was listed for suspicious activity 138 time(s) over the past 90 days. ... What happened when Google visited this site? ... Of the 42410 pages we tested on the site over the past 90 days, 450 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2015-03-25, and the last time suspicious content was found on this site was on 2015-03-25. ... Malicious software includes 169 trojan(s), 126 virus, 43 backdoor(s).
1) Internet Archive's motion to dismiss Shell's counterclaim for conversion and civil theft (Second Cause of Action) is GRANTED, 2) Internet Archive's motion to dismiss Shell's counterclaim for breach of contract (Third Cause of Action) is DENIED; 3) Internet Archive's motion to dismiss Shell's counterclaim for Racketeering under RICO and COCCA (Fourth Cause of Action) is GRANTED.
Computers can enter into contracts on behalf of people. The Uniform Electronic Transactions Act (UETA) says that a 'contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms and agreements.'
More importantly, held the court, Internet Archive's mere copying of Shell's site, and display thereof in its database, did not constitute the requisite exercise of dominion and control over defendant's property. Importantly, noted the court, the defendant at all times owned and operated her own site. Said the Court: 'Shell has failed to allege facts showing that Internet Archive exercised dominion or control over her website, since Shell's complaint states explicitly that she continued to own and operate the website while it was archived on the Wayback machine. Shell identifies no authority supporting the notion that copying documents is by itself enough of a deprivation of use to support conversion. Conversely, numerous circuits have determined that it is not.'
Both parties sincerely regret any turmoil that the lawsuit may have caused for the other. Neither Internet Archive nor Ms Shell condones any conduct which may have caused harm to either party arising out of the public attention to this lawsuit. The parties have not engaged in such conduct and request that the public response to the amicable resolution of this litigation be consistent with their wishes that no further harm or turmoil be caused to either party.