Dynamic site acceleration

Dynamic site acceleration (DSA) is a group of technologies that make the delivery of dynamic websites more efficient. [1] Manufacturers of application delivery controllers (ADCs) and content delivery networks (CDNs) use a host of techniques to accelerate dynamic sites, described below.

Techniques

TCP multiplexing

An edge device capable of TCP multiplexing, either an ADC or a CDN, can be placed between web servers and clients to offload origin servers and accelerate content delivery.

Ordinarily, each connection between client and server requires a dedicated process that lives on the origin for the duration of the connection. When a client has a slow connection, this ties up part of the origin server, because the process has to stay alive while the server waits for the complete request. With TCP multiplexing, the situation is different: the edge device buffers the client's request and forwards it to the origin only once it has fully arrived and been validated. This offloads application and database servers, which are slower and more expensive to operate than ADCs or CDNs. [2]
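A minimal sketch of the request-buffering aspect of this technique, in Python: the edge accepts the (possibly slow) client connection, buffers until the complete HTTP request has arrived, and only then consumes an origin connection. The origin address is an illustrative assumption, and a real multiplexer would additionally reuse pooled keep-alive origin connections across many clients.

```python
import socket
import threading

ORIGIN = ("127.0.0.1", 8080)   # hypothetical origin server address

def read_full_request(conn):
    """Buffer until the complete HTTP request (headers + body) has arrived."""
    data = b""
    while b"\r\n\r\n" not in data:
        chunk = conn.recv(4096)
        if not chunk:
            return None
        data += chunk
    headers, _, body = data.partition(b"\r\n\r\n")
    # Honor Content-Length so the origin never has to wait on a slow client.
    length = 0
    for line in headers.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            length = int(value)
    while len(body) < length:
        body += conn.recv(4096)
    return headers + b"\r\n\r\n" + body

def handle_client(conn):
    try:
        request = read_full_request(conn)   # slow clients occupy only this edge thread
        if request is None:
            return
        # Only now is an origin connection consumed, and only briefly.
        with socket.create_connection(ORIGIN) as origin:
            origin.sendall(request)
            while chunk := origin.recv(4096):
                conn.sendall(chunk)
    finally:
        conn.close()

def serve(port=9000):
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```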

Dynamic cache control

HTTP has a built-in system for cache control, using headers such as ETag, Expires and Last-Modified. Many CDNs and ADCs that claim to offer DSA replace this with a system of their own, called dynamic caching or dynamic cache control, which gives them more options to invalidate and bypass the cache than standard HTTP cache control provides. [3]
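For reference, the standard HTTP mechanism that these systems extend works roughly as follows (a minimal sketch using Python's built-in http.server; the body and header values are illustrative): the server tags a response with an ETag, and a revalidating client replays it in If-None-Match, earning a bodyless 304 when nothing has changed.

```python
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"<html><body>Hello</body></html>"          # illustrative payload
ETAG = '"%s"' % hashlib.sha1(BODY).hexdigest()     # validator derived from content

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A revalidating client replays the ETag it was given earlier.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)                # Not Modified: body omitted
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", ETAG)
        self.send_header("Cache-Control", "max-age=60")  # fresh for 60 seconds
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```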

The purpose of dynamic cache control is to increase the cache-hit ratio of a website, that is, the proportion of requests served from the cache rather than by the origin server. [4]
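Expressed as a trivial sketch:

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Fraction of requests answered from cache rather than the origin."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. 900 cache hits out of 1,000 requests -> 0.9 (a 90% hit ratio)
assert cache_hit_ratio(900, 100) == 0.9
```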

Due to the dynamic nature of web 2.0 websites, it is difficult to use static web caching, because dynamic sites, by definition, serve personalized content to different users and regions. For example, mobile users may see different content from desktop users, and registered users may need to see different content from anonymous users. Even among registered users, content may vary widely, a common example being social media websites.

Statically caching dynamic, user-specific pages risks serving irrelevant content, or one user's content, to the wrong users: unless the identifier that lets the caching system differentiate content, typically the URL of the GET request, is correctly varied by appending user-specific tokens or keys, personalized variants cannot be told apart.

Dynamic cache control offers more options to configure caching, such as cookie-based cache control, which serves content from the cache based on the presence or absence of specific cookies. A cookie stores the unique identifier of a logged-in user on their device, and is typically already set when the user is authenticated at the start of a session. In a dynamic caching system, cached entries are keyed by the URL as well as by cookie values, making it simple to serve default caches to anonymous users and personalized caches to logged-in users, without modifying application code to append user identifiers to the URL, as a static caching system would require.
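A minimal sketch of cookie-aware cache keys, under assumed names (SESSION_COOKIE, fetch_from_origin) that are illustrative rather than taken from any specific product:

```python
from http.cookies import SimpleCookie

SESSION_COOKIE = "session_id"       # assumed name of the login cookie
cache: dict[tuple, bytes] = {}

def cache_key(url: str, cookie_header: str) -> tuple:
    cookies = SimpleCookie(cookie_header)
    if SESSION_COOKIE in cookies:
        # Logged-in users: the key includes their session, so caches are personal.
        return (url, cookies[SESSION_COOKIE].value)
    return (url, None)              # anonymous users share one default cache entry

def serve(url: str, cookie_header: str, fetch_from_origin) -> bytes:
    key = cache_key(url, cookie_header)
    if key not in cache:
        cache[key] = fetch_from_origin(url, cookie_header)
    return cache[key]
```

Note that the application code never has to vary the URL itself; the edge derives the extra key material from the cookie header it already receives.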

Prefetching

If personalized content cannot be cached, it might instead be queued on an edge device: the system stores a list of possible responses that might be needed in the future, so they can be served immediately. This differs from caching in that each prefetched response is served only once, which makes prefetching especially useful for accelerating responses from third-party APIs, such as advertisements. [5]
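A minimal sketch of this serve-once behaviour: responses are fetched ahead of demand, and each one is consumed rather than reused. The third-party URL and fetch() helper are illustrative assumptions.

```python
import queue
import threading
import urllib.request

AD_API = "http://example.com/ad"        # hypothetical third-party API
prefetched: queue.Queue[bytes] = queue.Queue(maxsize=10)

def fetch() -> bytes:
    with urllib.request.urlopen(AD_API) as resp:
        return resp.read()

def prefetcher() -> None:
    while True:
        prefetched.put(fetch())         # blocks once the queue is full

threading.Thread(target=prefetcher, daemon=True).start()

def serve_ad() -> bytes:
    try:
        return prefetched.get_nowait()  # unlike a cache hit: served once, then gone
    except queue.Empty:
        return fetch()                  # queue drained: fall back to the origin
```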

Route optimization

Route optimization, also known as "latency-based routing", optimizes the route of traffic between clients and the different origin servers in order to minimize latency. Route optimization can be done by a DNS provider [6] or by a CDN. [7]

Route optimization comes down to measuring multiple paths between the client and origin server, and then recording the fastest path between them. This path is then used to serve content when a client in a specific geographical zone makes a request. [8]
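A minimal sketch of the measurement step, using the time to complete a TCP handshake as a rough round-trip estimate; the candidate hostnames are illustrative:

```python
import socket
import time

ORIGINS = ["us.example.com", "eu.example.com", "ap.example.com"]

def rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Rough round-trip estimate: time to complete a TCP handshake."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")             # unreachable: never selected

def fastest_origin() -> str:
    # In practice the measurement is repeated per client region, and the
    # winning path is recorded to serve later requests from that zone.
    return min(ORIGINS, key=rtt)
```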

Relationship with front-end optimization

Although front-end optimization (FEO) and DSA both describe groups of techniques for improving online content delivery, they address different aspects of it. There are overlaps, such as on-the-fly data compression and improved cache control; the key difference is that FEO works on the content itself, for example by reducing the number and size of the objects that make up a page, whereas DSA concentrates on the delivery of that content between origin and client.


References

  1. "How Dynamic Site Acceleration Works? - GlobalDots". www.globaldots.com. Archived from the original on 2013-01-21.
  2. "3 Really good reasons you should use TCP multiplexing | F5 DevCentral". Archived from the original on 2014-02-26. Retrieved 2014-05-01.
  3. "IBM Knowledge Center". www.ibm.com. Retrieved 2018-11-14.
  4. "What is Dynamic Caching | section.io". www.section.io. Retrieved 2018-11-14.
  5. "Does Cloudflare Do Prefetching?". Cloudflare Support. Retrieved 2018-11-14.
  6. "Amazon Route 53 Adds Latency Based Routing".
  7. "EdgeSuite SureRoute" (PDF). Akamai. http://www.akamai.com/dl/feature_sheets/fs_edgesuite_sureroute.pdf
  8. "Choosing a Routing Policy - Amazon Route 53". docs.aws.amazon.com. Retrieved 2018-11-14.