CoDeeN

Developer(s): Princeton University
Initial release: 2003
Operating system: Cross-platform (web-based application)
Type: P2P web cache
Website: codeen.cs.princeton.edu

CoDeeN is a proxy server system created at Princeton University in 2003 and deployed for general use on PlanetLab. It operates as follows:

  1. Users point their browser's proxy settings at a nearby high-bandwidth proxy that participates in the system.
  2. Requests to that proxy are then forwarded to the member of the system that is in charge of the file (i.e., the one expected to be caching it) and that has sent recent updates showing it is still alive. That member returns the file to the proxy, which passes it on to the client. A sketch of this routing step follows the list.
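
A minimal Python sketch of that routing step, assuming a hypothetical list of live proxies and a simple hash-based assignment (CoDeeN's actual peer-selection policy also weighs load, liveness, and proximity, and the node names here are made up):

    import hashlib

    # Hypothetical proxies that have sent recent heartbeats (placeholder names).
    LIVE_PROXIES = ["proxy-a.example.edu", "proxy-b.example.edu", "proxy-c.example.edu"]

    def responsible_proxy(url, live_proxies):
        """Pick the proxy 'in charge of' a URL by hashing the URL onto the live set."""
        digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
        return live_proxies[int(digest, 16) % len(live_proxies)]

    def handle_request(url):
        """The local proxy forwards the request to the responsible peer, which
        serves the file from its cache or fetches it from the origin server."""
        peer = responsible_proxy(url, LIVE_PROXIES)
        # In the real system this would be an HTTP fetch from `peer`;
        # here we only report the routing decision.
        return f"forward GET {url} -> {peer}"

    if __name__ == "__main__":
        print(handle_request("http://example.org/big-file.iso"))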

For ordinary users, this means that if the origin server is slow but the content is already cached in the system, requests for that file (after the first fetch) will be fast. It also means the request is not served by the origin server, which amounts to free bandwidth for that server.

For rarely requested files the system can be slightly slower than downloading the file directly, and its overall speed is limited by the number of participating proxies.

For large files requested by many clients, CoDeeN uses a kind of multicast stream from one node to the others, which then distribute the data to their respective proxies.
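
A purely schematic sketch of that fan-out (node names and chunk size are invented for the illustration): instead of every node fetching the file from the origin, one node fetches it once and relays each chunk to the other nodes.

    def fan_out(chunks, source, relays):
        """One node streams the chunks it fetched to the other relay nodes,
        which then pass the data on to their local proxies (schematic only)."""
        deliveries = []
        for index, chunk in enumerate(chunks):
            for relay in relays:
                deliveries.append((source, relay, index, len(chunk)))
        return deliveries

    if __name__ == "__main__":
        data = b"x" * 10_000              # stand-in for a large file
        chunk_size = 4096                 # arbitrary chunk size for the example
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        for src, dst, idx, size in fan_out(chunks, "node-1", ["node-2", "node-3"]):
            print(f"{src} -> {dst}: chunk {idx} ({size} bytes)")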

CoBlitz, a CDN technology firm (2006–2009), was an offshoot of CoDeeN: files are not stored in the web cache of a single member of the proxy system but are instead stored piece-wise across several members and 'gathered up' when they are requested. This allows proxies to share disk space and gives higher fault tolerance. To access this system, URLs were prefixed with http://coblitz.codeen.org/. [1] Verivue Inc. acquired CoBlitz in October 2010.
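
A toy sketch of that piece-wise idea, assuming hypothetical node names and an arbitrary chunk size (the real CoBlitz performs chunk placement, fetching, and reassembly over HTTP inside the proxy network):

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical proxy members
    CHUNK_SIZE = 64 * 1024                             # illustrative chunk size

    def place_chunks(url, size):
        """Assign each chunk of a file to a node by hashing (URL, chunk index)."""
        placement = {}
        for index in range((size + CHUNK_SIZE - 1) // CHUNK_SIZE):
            key = f"{url}#{index}".encode("utf-8")
            digest = int(hashlib.sha1(key).hexdigest(), 16)
            placement[index] = NODES[digest % len(NODES)]
        return placement

    def gather(url, size):
        """'Gather up' the chunks from their owners in order, so the requesting
        proxy can stream the reassembled file to the client."""
        placement = place_chunks(url, size)
        return [(index, placement[index]) for index in sorted(placement)]

    if __name__ == "__main__":
        for index, owner in gather("http://coblitz.codeen.org/example.org/image.iso", 300 * 1024):
            print(f"chunk {index} from {owner}")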

Related Research Articles

Cache (computing) – Additional storage that enables faster access to main storage

In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
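
A minimal illustration of the hit/miss distinction, using a toy in-memory cache in front of a deliberately slow lookup (all names and delays are invented for the example):

    import time

    backing_store = {"greeting": "hello, world"}   # stands in for a slow data source
    cache = {}

    def slow_lookup(key):
        time.sleep(0.5)                # simulate an expensive computation or remote fetch
        return backing_store[key]

    def get(key):
        if key in cache:               # cache hit: served quickly from the cache
            return cache[key]
        value = slow_lookup(key)       # cache miss: fall back to the slow store
        cache[key] = value             # keep a copy so later requests hit
        return value

    if __name__ == "__main__":
        for _ in range(3):
            start = time.perf_counter()
            get("greeting")
            print(f"took {time.perf_counter() - start:.3f}s")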

Freenet – Peer-to-peer Internet platform for censorship-resistant communication

Freenet is a peer-to-peer platform for censorship-resistant, anonymous communication. It uses a decentralized distributed data store to keep and deliver information, and has a suite of free software for publishing and communicating on the Web without fear of censorship. Both Freenet and some of its associated tools were originally designed by Ian Clarke, who defined Freenet's goal as providing freedom of speech on the Internet with strong anonymity protection.

Gnutella is a peer-to-peer network protocol. Introduced in 2000, it was the first decentralized peer-to-peer network of its kind, and later networks adopted its model.

Peer-to-peer – Type of decentralized and distributed network architecture

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network. This forms a peer-to-peer network of nodes.

Web server – Computer software that distributes web pages

A web server is computer software and underlying hardware that accepts requests via HTTP or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.
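
For instance, a minimal HTTP server built on Python's standard library (the port, path, and page body are arbitrary; this sketch only handles GET requests):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":
                body = b"<html><body><h1>It works</h1></body></html>"
                self.send_response(200)                        # the requested resource
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404, "Not Found")              # an error message

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()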

Uploading refers to transmitting data from one computer system to another over a network. Common methods of uploading include uploading via web browsers, FTP clients, and terminals (SCP/SFTP). Uploading can be used in the context of clients that send files to a central server. While uploading can also refer to sending files between distributed clients, such as with a peer-to-peer (P2P) file-sharing protocol like BitTorrent, the term file sharing is more often used in that case. Moving files within a computer system, as opposed to over a network, is called file copying.
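
As one concrete case, an upload to an FTP server using Python's standard ftplib (host, credentials, and file names below are placeholders; the transfer is plain and unencrypted):

    from ftplib import FTP

    def upload(host, user, password, local_path, remote_name):
        """Send a local file to an FTP server with the STOR command."""
        with FTP(host) as ftp:
            ftp.login(user, password)
            with open(local_path, "rb") as fh:
                ftp.storbinary(f"STOR {remote_name}", fh)   # client -> server transfer

    if __name__ == "__main__":
        # Placeholder values; replace with a real server and credentials.
        upload("ftp.example.org", "demo", "secret", "report.pdf", "report.pdf")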

Proxy server – Computer server that makes and receives requests on behalf of a user

In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It can improve privacy, security, and performance in the process.
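
A bare-bones sketch of the forwarding role, using only the Python standard library (plain HTTP GET only, no caching, access control, or error handling; the listening port is arbitrary):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A client configured to use this proxy sends the absolute URL in
            # the request line; fetch it on the client's behalf and relay it.
            with urlopen(self.path) as upstream:
                body = upstream.read()
                status = upstream.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8888), ProxyHandler).serve_forever()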

Squid (software) – Caching and forwarding HTTP web proxy

Squid is a caching and forwarding HTTP proxy. It has a wide variety of uses, including speeding up a web server by caching repeated requests, caching World Wide Web (WWW), Domain Name System (DNS), and other lookups for a group of people sharing network resources, and aiding security by filtering traffic. Although mainly used for HTTP and File Transfer Protocol (FTP), Squid includes limited support for several other protocols, including Internet Gopher, Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Hypertext Transfer Protocol Secure (HTTPS). Squid does not support the SOCKS protocol, unlike Privoxy, with which Squid can be used to provide SOCKS support.
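
For example, a standard-library HTTP client can be pointed at a local Squid instance (assuming Squid is listening on its default port 3128; the target URL is arbitrary):

    from urllib.request import ProxyHandler, build_opener

    # Route plain-HTTP requests through a local Squid cache (default port 3128).
    opener = build_opener(ProxyHandler({"http": "http://127.0.0.1:3128"}))

    with opener.open("http://example.org/") as response:
        print(response.status, len(response.read()), "bytes fetched via the proxy")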

Open proxy – Proxy server accessible to any Internet user

An open proxy is a type of proxy server that is accessible by any Internet user.

In computer science, and networking in particular, a session is a time-delimited two-way link that enables interactive information exchange between two or more communicating devices or ends, be they computers, automated systems, or live users. A session is established at a certain point in time and then 'torn down' (brought to an end) at some later point. An established communication session may involve more than one message in each direction. A session is typically stateful, meaning that at least one of the communicating parties needs to hold current state information and save information about the session history in order to communicate, as opposed to stateless communication, where the communication consists of independent requests and responses.
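
A schematic of that stateful behaviour, using an in-memory session table (the ID generation and stored fields are illustrative only, not tied to any particular protocol):

    import secrets

    sessions = {}   # server-side state, keyed by session ID

    def establish_session(user):
        """Set up a session: the server starts tracking state for this client."""
        session_id = secrets.token_hex(16)
        sessions[session_id] = {"user": user, "messages": 0}
        return session_id

    def handle_message(session_id, text):
        """A stateful exchange: each message updates the stored session history."""
        state = sessions[session_id]
        state["messages"] += 1
        return f"{state['user']} has sent {state['messages']} message(s); last: {text!r}"

    def tear_down(session_id):
        """End the session and discard its state."""
        del sessions[session_id]

    if __name__ == "__main__":
        sid = establish_session("alice")
        print(handle_message(sid, "hello"))
        print(handle_message(sid, "still here"))
        tear_down(sid)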

Content delivery network – Layer in the internet ecosystem addressing bottlenecks

A content delivery network, or content distribution network (CDN), is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means of alleviating the performance bottlenecks of the Internet as it was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of Internet content today, including web objects, downloadable objects, applications, live streaming media, on-demand streaming media, and social media sites.

Network transparency, in its most general sense, refers to the ability of a protocol to transmit data over the network in a manner that is not visible to those using the applications that rely on the protocol. In this way, users of a particular application may access remote resources in the same manner in which they would access their own local resources. An example of this is cloud storage, where remote files are presented as being locally accessible, and cloud computing, where the resource in question is processing.

WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated at $1 billion and was forecast by Gartner, a technology research firm, to grow to $4.4 billion by 2014. In 2015, Gartner estimated the WAN optimization market at $1.1 billion.

In computing, Microsoft's Windows Vista and Windows Server 2008 introduced a new networking stack, the Next Generation TCP/IP stack, in 2007/2008 to improve on the previous stack in several ways. The stack includes a native implementation of IPv6 as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host model and features an infrastructure that enables more modular components to be dynamically inserted and removed.

libtorrent

libtorrent is an open-source implementation of the BitTorrent protocol. It is written in C++, which is also the language of its main library interface. Its most notable features are support for Mainline DHT, IPv6, HTTP seeds, and μTorrent's peer exchange. libtorrent uses Boost, specifically Boost.Asio, for its platform independence. It is known to build on Windows and most Unix-like operating systems.
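
libtorrent also ships Python bindings alongside the C++ library; a minimal download loop in the style of its documented examples might look like the sketch below. The .torrent filename is a placeholder, and the exact binding API differs between libtorrent versions, so treat this as an approximation.

    import time
    import libtorrent as lt   # the libtorrent Python bindings

    ses = lt.session()
    info = lt.torrent_info("example.torrent")               # placeholder .torrent file
    handle = ses.add_torrent({"ti": info, "save_path": "."})

    while handle.status().progress < 1.0:
        status = handle.status()
        print(f"{status.progress * 100:5.1f}%  "
              f"down {status.download_rate / 1000:.1f} kB/s  "
              f"peers {status.num_peers}")
        time.sleep(1)
    print("download complete")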

Web performance refers to the speed with which web pages are downloaded and displayed in the user's web browser. Web performance optimization (WPO), or website optimization, is the field of knowledge concerned with increasing web performance.

Internet censorship circumvention, also referred to as going over the wall or scientific browsing in China, is the use of various methods and tools to bypass internet censorship.

Elliptics is a distributed key–value data store with open source code. By default it is a classic distributed hash table (DHT) with multiple replicas placed in different groups. Elliptics was created to meet the requirements of multi-datacenter and physically distributed storage when storing huge amounts of medium and large files.

Charles Web Debugging Proxy is a cross-platform HTTP debugging proxy server application written in Java. It enables the user to view HTTP, HTTPS, HTTP/2, and TCP traffic on enabled ports accessed from, to, or via the local computer, including requests and responses with their HTTP headers and metadata, with functionality aimed at helping developers analyze connections and messaging.

References