Web archiving

Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture because of the massive size and amount of information on the Web. The largest web archiving effort based on a bulk crawling approach is the Internet Archive's Wayback Machine, which strives to maintain an archive of the entire Web.

The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving. [1] National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.

Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.

History and development

While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996. [2] The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001. [2] As of 2018, the Internet Archive was home to 40 petabytes of data. [3] The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries. [2] Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora and Tasmanian web archives, and Sweden's Kulturarw3. [4] [5]

From 2001 to 2010, the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas. [6] [7] The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open source tools for the creation of web archives. [8]

The now-defunct Internet Memory Foundation was founded in 2004 by the European Commission in order to archive the web in Europe. [2] The project developed and released many open source tools for tasks such as "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection." [2] The foundation's data is now housed by the Internet Archive, but it is not currently publicly accessible. [9]

Although there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President's tweets as official statements. [10]

Methods of collection

Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
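The metadata fields listed above can be captured alongside each resource at harvest time. A minimal sketch in Python, assuming the response headers and body are already in hand; the helper and field names are illustrative, not a formal record format such as WARC:

```python
import hashlib
from datetime import datetime, timezone

def make_metadata_record(url, headers, body):
    """Build an archival metadata record for one harvested resource.

    `headers` is a dict of HTTP response headers; `body` is the raw bytes.
    The field names here are illustrative, not a formal standard.
    """
    return {
        "url": url,
        # Access time, as mentioned above, supports provenance.
        "access_time": datetime.now(timezone.utc).isoformat(),
        # MIME type comes from the Content-Type header, minus any charset suffix.
        "mime_type": headers.get("Content-Type", "application/octet-stream")
                            .split(";")[0].strip(),
        "content_length": len(body),
        # A digest supports later authenticity checks on the stored copy.
        "sha256": hashlib.sha256(body).hexdigest(),
    }

record = make_metadata_record(
    "http://example.com/index.html",
    {"Content-Type": "text/html; charset=utf-8"},
    b"<html><body>Hello</body></html>",
)
print(record["mime_type"], record["content_length"])  # → text/html 31
```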

Remote harvesting

The most common web archiving technique uses web crawlers to automate the process of collecting web pages. [5] Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content. Examples of web crawlers used for web archiving include Heritrix, the open source crawler developed by the Internet Archive in conjunction with the Nordic national libraries.

Various free services can be used to archive web resources "on demand" using web crawling techniques, including the Wayback Machine and WebCite.
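The crawling step described above amounts to fetching a page, extracting its links, and adding in-scope links to the crawl frontier. A minimal, offline sketch of the link-extraction step using only Python's standard library (the sample page and seed URL are hypothetical; production crawlers such as Heritrix add politeness delays, robots.txt handling, and deduplication):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect absolute same-host links from one page, as one frontier-expansion step."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                absolute = urljoin(self.base_url, value)
                # Stay within the seed's host, as scoped archival crawls usually do.
                if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
                    self.links.append(absolute)

page = '<a href="/about">About</a> <a href="http://other.example/x">Out of scope</a>'
extractor = LinkExtractor("http://example.com/")
extractor.feed(page)
print(extractor.links)  # → ['http://example.com/about']
```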

Database archiving

Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc [11] and Xinq [12] tools developed by the Bibliothèque Nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
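The mapping described above, from relational rows to a standard XML representation, can be sketched as follows. This is a simplified illustration of the general approach, not DeepArc's actual schema mapping; the table and helper names are invented:

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_table_to_xml(conn, table):
    """Export one relational table into a flat XML document.

    Simplified sketch of the database-to-XML extraction step;
    real tools support richer schema mappings.
    """
    cursor = conn.execute(f"SELECT * FROM {table}")  # table name assumed trusted
    columns = [desc[0] for desc in cursor.description]
    root = ET.Element(table)
    for row in cursor:
        record = ET.SubElement(root, "record")
        # Each column becomes a child element named after the column.
        for col, value in zip(columns, row):
            field = ET.SubElement(record, col)
            field.text = "" if value is None else str(value)
    return ET.tostring(root, encoding="unicode")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'Web archiving')")
xml_doc = export_table_to_xml(conn, "articles")
print(xml_doc)
```

Once in this neutral XML form, the content of many different databases can be indexed and queried through one access system, which is the point of the Xinq delivery step.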

Transactional archiving

Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information. [13]

A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
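The intercept, filter, and store pipeline above can be sketched as a content-addressed store: each distinct response bitstream is kept once under its digest, while every transaction is logged against that digest. This is an illustrative sketch of the deduplication step, not the design of any particular transactional archiving product:

```python
import hashlib

class TransactionalArchive:
    """Store each distinct response bitstream once, keyed by digest,
    and log which URL served which bitstream for each transaction."""
    def __init__(self):
        self.bitstreams = {}   # sha256 digest -> raw response bytes
        self.log = []          # (url, digest) per intercepted transaction

    def record(self, url, response_body):
        digest = hashlib.sha256(response_body).hexdigest()
        # Store a payload only the first time it is seen; later identical
        # responses just add a log entry pointing at the stored copy.
        if digest not in self.bitstreams:
            self.bitstreams[digest] = response_body
        self.log.append((url, digest))
        return digest

archive = TransactionalArchive()
archive.record("http://example.com/", b"<html>v1</html>")
archive.record("http://example.com/", b"<html>v1</html>")  # duplicate content
archive.record("http://example.com/", b"<html>v2</html>")  # page changed
print(len(archive.bitstreams), len(archive.log))  # → 2 3
```

The log preserves the evidentiary record (what was served, to whom, and when) while the digest-keyed store keeps storage costs proportional to distinct content rather than to traffic.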

Difficulties and limitations

Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are subject to the general difficulties of web crawling.

Note, however, that a native-format web archive, i.e., a fully browsable web archive with working links, media, and so on, is only really possible using crawler technology.

The Web is so large that crawling a significant portion of it requires substantial technical resources. The Web also changes so quickly that portions of a website may be modified before a crawler has even finished crawling it.

General limitations

Some web servers are configured to return different pages to web archiver requests than to regular browser requests. This is typically done to fool search engines into directing more user traffic to a website, but it is also done to avoid accountability, or to provide enhanced content only to those browsers that can display it.

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman [14] states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in some countries [15] have a legal right to copy portions of the web under an extension of legal deposit.

Some publicly accessible, private non-profit web archives, such as WebCite, the Internet Archive, and the Internet Memory Foundation, allow content owners to hide or remove archived content that they do not want the public to access. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a lawsuit against Google's caching, which Google won. [16]

Laws

In 2017 the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses conducting digital communications are required to keep a record of them. This includes website data, social media posts, and messages. [17] Some copyright laws may inhibit web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works, including those without an open access license, and thereby contributes to the archival of scientific research that may otherwise be lost. [18] [19]

Related Research Articles

Web crawler

A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing.

robots.txt

robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.
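Python's standard library includes a parser for this protocol; a short sketch using hypothetical rules and URLs:

```python
from urllib.robotparser import RobotFileParser

# An in-memory robots.txt; the rules and agent name are illustrative.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = RobotFileParser()
rp.parse(rules)

# A well-behaved crawler consults these rules before fetching each URL.
print(rp.can_fetch("MyArchiveBot", "http://example.com/private/report.html"))  # → False
print(rp.can_fetch("MyArchiveBot", "http://example.com/public/page.html"))     # → True
```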

Distributed web crawling is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. Such systems may allow for users to voluntarily offer their own computing and bandwidth resources towards crawling web pages. By spreading the load of these tasks across many computers, costs that would otherwise be spent on maintaining large computing clusters are avoided.

Googlebot

Googlebot is the web crawler software used by Google that collects documents from the web to build a searchable index for the Google Search engine. This name is actually used to refer to two different types of web crawlers: a desktop crawler and a mobile crawler.

Link rot

Link rot is the phenomenon of hyperlinks tending over time to cease to point to their originally targeted file, web page, or server due to that resource being relocated to a new address or becoming permanently unavailable. A link that no longer points to its target, often called a broken, dead, or orphaned link, is a specific form of dangling pointer.

Internet research is the practice of using Internet information, especially free information on the World Wide Web, or Internet-based resources in research.

The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search-engine programs. This is in contrast to the "surface web", which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with inventing the term in 2001 as a search-indexing term.

In web archiving, an archive site is a website that stores information on webpages from the past for anyone to view.

Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may directly access the World Wide Web using the Hypertext Transfer Protocol or a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis.

Sitemaps is a protocol in XML format meant for a webmaster to inform search engines about URLs on a website that are available for web crawling. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs of the site. This allows search engines to crawl the site more efficiently and to find URLs that may be isolated from the rest of the site's content. The Sitemaps protocol is a URL inclusion protocol and complements robots.txt, a URL exclusion protocol.
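A minimal sitemap document with the fields described above, parsed with Python's standard library (the URL and values are placeholders):

```python
import xml.etree.ElementTree as ET

# A minimal Sitemaps-protocol document; loc, lastmod, changefreq, and
# priority are the per-URL fields described above.
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap)
urls = [(u.find("sm:loc", ns).text, u.find("sm:lastmod", ns).text)
        for u in root.findall("sm:url", ns)]
print(urls)  # → [('http://example.com/', '2024-01-15')]
```

A crawler can use `lastmod` to skip URLs unchanged since its previous visit.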

Search engine

A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user inputs a query within a web browser or a mobile app, and the search results are often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news.

Heritrix

Heritrix is a web crawler designed for web archiving. It was written by the Internet Archive. It is available under a free software license and written in Java. The main interface is accessible using a web browser, and there is a command-line tool that can optionally be used to initiate crawls.

WebCite is an intermittently available archive site, originally designed to digitally preserve scientific and educationally important material on the web by taking snapshots of Internet content as it existed at the time a blogger or scholar cited or quoted it. The preservation service enables verification of claims supported by the cited sources even if the original web pages are revised, removed, or otherwise disappear, an effect known as link rot.

A single-page application (SPA) is a web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server, instead of the default method of a web browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app.

Digital library

A digital library is an online database of digital objects that can include text, still images, audio, video, digital documents, and other digital media formats, or, more broadly, a library accessible through the internet. Objects can consist of digitized content such as print or photographs, as well as born-digital content such as word processor files or social media posts. In addition to storing content, digital libraries provide means for organizing, searching, and retrieving the content contained in the collection. Digital libraries can vary immensely in size and scope, and can be maintained by individuals or organizations. The digital content may be stored locally, or accessed remotely via computer networks. Through interoperability, these information retrieval systems can exchange information with each other.

DeepPeep was a search engine that aimed to crawl and index every database on the public Web. Unlike traditional search engines, which crawl existing webpages and their hyperlinks, DeepPeep aimed to allow access to the so-called deep web: World Wide Web content available only through, for instance, typed queries into databases. The project started at the University of Utah and was overseen by Juliana Freire, an associate professor at the university's School of Computing WebDB group. The goal was to make 90% of all WWW content accessible, according to Freire. The project ran a beta search engine and was sponsored by the University of Utah and a $243,000 grant from the National Science Foundation. It generated worldwide interest.

Wayback Machine

The Wayback Machine is a digital archive of the World Wide Web founded by the Internet Archive, an American nonprofit organization based in San Francisco, California. Created in 1996 and launched to the public in 2001, it allows the user to go "back in time" to see how websites looked in the past. Its founders, Brewster Kahle and Bruce Gilliat, developed the Wayback Machine to provide "universal access to all knowledge" by preserving archived copies of defunct web pages.

Webarchiv is a digital archive of important Czech web resources, which are collected with the aim of their long-term preservation.

International Internet Preservation Consortium

The International Internet Preservation Consortium is an international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future. It was founded in July 2003 by 12 participating institutions, and had grown to 35 members by January 2010. As of January 2022, there are 52 members.

archive.today is a web archiving site, founded in 2012, that saves snapshots on demand and supports JavaScript-heavy sites such as Google Maps and Twitter. archive.today records two snapshots: one replicates the original webpage including any functional live links; the other is a screenshot of the page.

References

Citations

  1. Truman, Gail (2016). "Web Archiving Environmental Scan". Harvard Library.
  2. Toyoda, M.; Kitsuregawa, M. (May 2012). "The History of Web Archiving". Proceedings of the IEEE. 100 (Special Centennial Issue): 1441–1443. doi:10.1109/JPROC.2012.2189920. ISSN 0018-9219.
  3. "Inside Wayback Machine, the internet's time capsule". The Hustle. September 28, 2018. sec. Wayyyy back. Retrieved July 21, 2020.
  4. Costa, Miguel; Gomes, Daniel; Silva, Mário J. (September 2017). "The evolution of web archiving". International Journal on Digital Libraries. 18 (3): 191–205. doi:10.1007/s00799-016-0171-9. S2CID   24303455.
  5. Consalvo, Mia; Ess, Charles, eds. (April 2011). "Web Archiving – Between Past, Present, and Future". The Handbook of Internet Studies (1st ed.). Wiley. pp. 24–42. doi:10.1002/9781444314861. ISBN 978-1-4051-8588-2.
  6. "IWAW 2010: The 10th Intl Web Archiving Workshop". www.wikicfp.com. Retrieved August 19, 2019.
  7. "IWAW - International Web Archiving Workshops". bibnum.bnf.fr. Archived from the original on November 20, 2012. Retrieved August 19, 2019.
  8. "About the IIPC". IIPC. Retrieved April 17, 2022.
  9. "Internet Memory Foundation : Free Web: Free Download, Borrow and Streaming". archive.org. Internet Archive. Retrieved July 21, 2020.
  10. Regis, Camille (June 4, 2019). "Web Archiving: Think the Web is Permanent? Think Again". History Associates. Retrieved July 14, 2019.
  11. "DeepArc". deeparc.sourceforge.net. Archived from the original on March 3, 2024.
  12. "Xinq [Xml INQuiry]". National Library of Australia. Archived from the original on February 27, 2011.
  13. Brown, Adrian (January 10, 2016). Archiving websites: a practical guide for information management professionals. Facet. ISBN   978-1-78330-053-2. OCLC   1064574312.
  14. Lyman (2002)
  15. "Legal Deposit | IIPC". netpreserve.org. Archived from the original on March 16, 2017. Retrieved January 31, 2017.
  16. "WebCite FAQ". Webcitation.org. Retrieved September 20, 2018.
  17. "Social Media and Digital Communications" (PDF). finra.org. FINRA.
  18. Claburn, Thomas (September 10, 2020). "Open access journals are vanishing from the web, Internet Archive stands ready to fill in the gaps". The Register.
  19. Laakso, Mikael; Matthias, Lisa; Jahn, Najko (2021). "Open is not forever: A study of vanished open access journals". Journal of the Association for Information Science and Technology. 72 (9): 1099–1112. arXiv:2008.11933. doi:10.1002/ASI.24460. S2CID 221340749.

General bibliography