Server hog

Server (or resource) hogging occurs when a user, program or system places excessive load on a server, either significantly degrading the performance experienced by other clients or loading the server and its resources so heavily that it fails to perform routine functions.

History

In the early years of time-sharing computer systems in the 1960s, it was common for a single institutional mainframe to control many interactive terminals, an environment in which server lag was acutely perceived. Furthermore, in many operating environments, scarce server resources such as CPU-seconds were metered and charged against the account of the user running the program, so an unintentional server hog could prove extremely costly in financial terms. Such programs were often called run-away programs or endless loops.

Resource contention

Server performance has many dimensions. Any subsystem that becomes excessively loaded can compromise the performance of other clients contending for that subsystem. Common forms of hardware contention include CPU cycles, interrupt latency, I/O bandwidth, available system memory, or aggregate system memory bandwidth. At the software level, contention can arise for buffers, queues, spools, or page tables.
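
Where such contention is suspected, the loaded subsystems can often be observed directly. A minimal monitoring sketch in Python, assuming the third-party psutil library is installed; the 80% thresholds are illustrative rather than standard values:

```python
# Sketch: sample common hardware contention points and flag a
# suspected hog. Assumes the third-party psutil library
# (pip install psutil); the 80% thresholds are illustrative.
import psutil

def check_contention():
    cpu = psutil.cpu_percent(interval=1)      # CPU cycles
    mem = psutil.virtual_memory().percent     # available system memory
    if cpu > 80:
        print(f"possible CPU hog: {cpu:.0f}% utilization")
    if mem > 80:
        print(f"possible memory hog: {mem:.0f}% in use")
    # Per-process view: which processes consume the most CPU?
    # (The first sample per process may read 0.0.)
    procs = sorted(psutil.process_iter(["name", "cpu_percent"]),
                   key=lambda p: p.info["cpu_percent"] or 0, reverse=True)
    for p in procs[:3]:
        print(p.info["name"], p.info["cpu_percent"])

check_contention()
```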

Known hogs

In accepted practice, system administrators size servers appropriately for the expected workload (or mixture of workloads) and monitor server performance closely to establish performance baselines. The expected load may include well-known server hogs, such as system backup. These tasks are generally scheduled for periods of light demand, such as the very early hours of a Sunday morning, with an administrative policy that discourages or prohibits other demands on the server during those periods, as in the sketch below.
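
A minimal sketch of such a policy in Python; run_backup is a hypothetical stand-in for the real task, and the Sunday 02:00–05:00 window is an illustrative choice:

```python
# Sketch: only run a known server hog (here, a backup) inside an
# agreed low-demand window. run_backup() is a hypothetical task;
# the early-Sunday window is an illustrative administrative policy.
from datetime import datetime

def in_maintenance_window(now=None):
    now = now or datetime.now()
    # Sunday is weekday 6; allow 02:00 up to (but not including) 05:00.
    return now.weekday() == 6 and 2 <= now.hour < 5

def run_backup():
    print("backing up...")  # placeholder for the real heavy task

if in_maintenance_window():
    run_backup()
else:
    print("outside the maintenance window; deferring backup")
```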

Unexpected hogs

More often, the term server hog is used to designate an unusual load condition where the server performance falls short of the culturally accepted baseline. A common scenario in the early years of computing was an overload condition known as thrashing where the aggregate server performance becomes severely degraded, such as when two departments of a large company attempt to run a heavy report concurrently on the same mainframe. In such a situation, the designation of the server hog becomes a political matter of pointing fingers, as the termination of either long-running report would restore the server to normal performance.

Internet era

In the internet era the nature of server loads changed greatly, as clients became increasingly dispersed geographically and often increasingly anonymous; for example, any member of the public with internet access can ask a web server anywhere in the world to deliver a web page. In this context, the term most commonly designates a malicious server hog: a program written expressly to overload a remote server with excessive requests or excessively difficult requests (such as complex searches). Use of a deliberate server hog is known as a denial-of-service attack, a behaviour exhibited by many viruses, worms and trojan horses. It is also possible for a petulant or vindictive computer user to manually overload a remote server by unleashing a crap flood.
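
On the receiving side, servers commonly defend against such request floods by throttling each client. A minimal token-bucket sketch in Python; the rate and capacity values are illustrative, not taken from any particular server:

```python
# Sketch: per-client token-bucket throttling, a common defence
# against request floods. Rate and capacity values are illustrative.
import time

class TokenBucket:
    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Replenish tokens for the elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # client is hogging; reject or delay the request

buckets = {}  # one bucket per client address
def handle_request(client_addr):
    bucket = buckets.setdefault(client_addr, TokenBucket())
    return bucket.allow()
```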

Bots

A special case is the run-away bot: a program designed to be helpful by automating a tedious task, but which, through poor programming or poorly understood circumstances, goes out of control and hammers a server unceasingly at a high rate. A common case is a web spider that accesses too many pages on a web server too quickly, at the expense of the server's intended audience. A better-behaved alternative is sketched below.
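
A minimal sketch of such a well-behaved spider in Python, using the standard urllib modules; example.com and the one-second fallback delay are illustrative:

```python
# Sketch: a "polite" spider that honours robots.txt and paces its
# requests instead of hammering the server. The URLs and the
# one-second fallback delay are illustrative.
import time
import urllib.request
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

delay = rp.crawl_delay("*") or 1.0  # fall back to one request per second

for url in ["https://example.com/", "https://example.com/about"]:
    if not rp.can_fetch("*", url):
        continue                     # the site asked bots to stay out
    with urllib.request.urlopen(url) as resp:
        resp.read()
    time.sleep(delay)                # pacing: don't hog the server
```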

Related Research Articles

Client–server model: Distributed application structure in computing

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
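
A minimal sketch of the model using Python's standard socket module; the port number is arbitrary, and the server here handles a single request for brevity:

```python
# Sketch: a minimal client–server exchange over a local TCP socket.
# The server awaits an incoming request; the client initiates the
# session. Port 50007 is arbitrary.
import socket
import threading
import time

def server():
    with socket.socket() as s:
        s.bind(("localhost", 50007))
        s.listen()
        conn, _ = s.accept()           # await an incoming request
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to " + request)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                        # give the server time to bind

with socket.socket() as c:             # the client initiates contact
    c.connect(("localhost", 50007))
    c.sendall(b"hello")
    print(c.recv(1024))                # b'response to hello'
```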

Cache (computing): Additional storage that enables faster access to main storage

In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
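
A minimal illustration of cache hits and misses using Python's standard functools.lru_cache:

```python
# Sketch: cache hits vs. misses with the standard library's lru_cache.
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive(n):
    print(f"computing {n}...")   # only printed on a cache miss
    return n * n

expensive(4)   # miss: computed and stored
expensive(4)   # hit: served from the cache, no recomputation
print(expensive.cache_info())   # hits=1, misses=1
```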

Mainframe computer: Large computer

A mainframe computer, informally called a mainframe or big iron, is a computer used primarily by large organizations for critical applications like bulk data processing for tasks such as censuses, industry and consumer statistics, enterprise resource planning, and large-scale transaction processing. A mainframe computer is large but not as large as a supercomputer and has more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers. Most large-scale computer-system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers.

Thin client: Non-powerful computer optimized for remote server access

In computer networking, a thin client, sometimes called slim client or lean client, is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. They are sometimes known as network computers, or in their simplest form as zero clients. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a rich client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

Web server: Computer software that distributes web pages

A web server is computer software and underlying hardware that accepts requests via HTTP or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.
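
A minimal sketch using Python's standard http.server module; the hard-coded page and port 8000 are illustrative:

```python
# Sketch: a minimal web server using the standard http.server module.
# It serves one hard-coded page and returns 404 for anything else.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = b"<html><body>hello</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)   # error message for unknown resources

HTTPServer(("localhost", 8000), Handler).serve_forever()
```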

Computerized batch processing is a method of running software programs called jobs in batches automatically. While users are required to submit the jobs, no other interaction by the user is required to process the batch. Batches may automatically be run at scheduled times as well as being run contingent on the availability of computer resources.

In software quality assurance, performance testing is a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

Load balancing (computing): Set of techniques to improve the distribution of workloads across multiple computing resources

In computing, load balancing is the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient. Load balancing can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.
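
A minimal round-robin sketch in Python; the node and task names are hypothetical:

```python
# Sketch: round-robin load balancing, the simplest static strategy.
# Tasks are dealt out in turn so no node sits idle while another
# accumulates the whole queue. Node names are hypothetical.
from itertools import cycle

nodes = cycle(["node-a", "node-b", "node-c"])

def dispatch(task):
    node = next(nodes)
    print(f"task {task!r} -> {node}")

for task in ["report-1", "report-2", "query-3", "batch-4"]:
    dispatch(task)
# report-1 -> node-a, report-2 -> node-b,
# query-3  -> node-c, batch-4  -> node-a
```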

Proxy server: Computer server that makes and receives requests on behalf of a user

In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It can improve privacy, security, and performance in the process.

Linux Terminal Server Project (LTSP) is a free and open-source terminal server for Linux that allows many people to simultaneously use the same computer. Applications run on the server with a terminal known as a thin client handling input and output. Generally, terminals are low-powered, lack a hard disk and are quieter and more reliable than desktop computers because they do not have any moving parts.

Diskless node: Computer workstation operated without disk drives

A diskless node is a workstation or personal computer without disk drives, which employs network booting to load its operating system from a server.

Benchmark (computing): Comparing the relative performance of computers by running the same program on all of them

In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.
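
A minimal sketch using Python's standard timeit module, benchmarking two ways of building the same list; the operations compared are illustrative:

```python
# Sketch: benchmarking two implementations of the same operation
# with the standard timeit module.
import timeit

append_loop = timeit.timeit(
    "l = []\nfor i in range(1000): l.append(i)", number=1000)
list_comp = timeit.timeit(
    "l = [i for i in range(1000)]", number=1000)

print(f"append loop:        {append_loop:.3f}s")
print(f"list comprehension: {list_comp:.3f}s")
```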

IBM Z: Family of mainframe computers

IBM Z is a family name used by IBM for all of its z/Architecture mainframe computers. In July 2017, with another generation of products, the official family was changed to IBM Z from IBM z Systems; the IBM Z family now includes the newest model, the IBM z16, as well as the z15, the z14, and the z13, the IBM zEnterprise models, the IBM System z10 models, the IBM System z9 models and IBM eServer zSeries models.

nmon: System monitor tool for the AIX and Linux operating systems

nmon is a computer performance system monitor tool for the AIX and Linux operating systems. The nmon tool has two modes: it either displays the performance statistics on-screen in a condensed format, or saves the same statistics to a comma-separated values (CSV) data file for later graphing and analysis, to aid the understanding of computer resource use, tuning options and bottlenecks.

Noop scheduler: Simple I/O scheduler for the Linux kernel

The NOOP scheduler is the simplest I/O scheduler for the Linux kernel. This scheduler was developed by Jens Axboe.

In IBM mainframes, Workload Manager (WLM) is a base component of the MVS/ESA mainframe operating system and its successors up to and including z/OS. It controls access to system resources for the work executing on z/OS, based on administrator-defined goals. Workload Manager components also exist for other operating systems; for example, an IBM Workload Manager is also a software product for the AIX operating system.

Web server benchmarking is the process of estimating a web server's performance in order to determine whether it can serve a sufficiently high workload.

HiperDispatch is a workload dispatching feature found in recent IBM mainframe models running recent releases of z/OS. HiperDispatch was introduced in February 2008. Support was added to z/VM in its V6R3 release on July 26, 2013.

Classes of computers

Computers can be classified, or typed, in many ways. Some common classifications of computers are given below.

Computation offloading is the transfer of resource-intensive computational tasks to a separate processor, such as a hardware accelerator, or an external platform, such as a cluster, grid, or cloud. Offloading to a coprocessor can be used to accelerate applications such as image rendering and mathematical calculations. Offloading computing to an external platform over a network can provide computing power and overcome the hardware limitations of a device, such as limited computational power, storage, and energy.
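
A minimal sketch of offloading within a single machine, using Python's standard concurrent.futures to hand heavy calculations to separate worker processes; offloading to a cluster or cloud follows the same submit-and-collect pattern over a network:

```python
# Sketch: offloading resource-intensive calculations to separate
# worker processes with the standard concurrent.futures module.
# A cluster or cloud offload follows the same submit/collect pattern.
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    return sum(i * i for i in range(n))   # stand-in for real work

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(heavy, [10**6, 2 * 10**6, 3 * 10**6]))
    print(results)
```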
