Server hog

A server hog is a user, program, or system that places excessive load on a server, either degrading the performance the server delivers to other clients or loading the server so heavily that it fails to perform the routine housekeeping needed for its own maintenance.

History

In the early years of time-sharing computer systems in the 1960s, it was common for a single institutional mainframe to serve many interactive terminals, an environment in which server lag was acutely perceived. Furthermore, in many operating environments, scarce server resources such as CPU-seconds were metered and charged against the account of the user running the program, so an unintentional server hog could prove extremely costly in financial terms. Such programs were often called runaway programs or endless loops.

Resource contention

Server performance has many dimensions, and any subsystem that becomes excessively loaded can compromise the performance of other clients contending for it. Common points of hardware contention include CPU cycles, interrupt latency, I/O bandwidth, available system memory, and aggregate memory bandwidth. At the software level, contention can arise over buffers, queues, spools, or page tables.
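
The effect of CPU-cycle contention is easy to demonstrate. The following minimal Python sketch (illustrative only; the workload size and one-second ramp-up are arbitrary choices) times a small fixed task, saturates every core with busy-loop hog processes, then times the same task again:

```python
import multiprocessing as mp
import time

def busy_loop():
    # A deliberate CPU hog: consume cycles without ever yielding.
    while True:
        pass

def timed_unit_of_work(n=2_000_000):
    # A small fixed task whose wall-clock time reflects CPU contention.
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"baseline: {timed_unit_of_work():.3f}s")
    # Saturate every core with hog processes (daemons die with the main process).
    hogs = [mp.Process(target=busy_loop, daemon=True) for _ in range(mp.cpu_count())]
    for h in hogs:
        h.start()
    time.sleep(1)  # let the hogs ramp up
    print(f"under contention: {timed_unit_of_work():.3f}s")
```

The second measurement is typically several times slower, which is exactly what other clients of a hogged server experience.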

Known hogs

Accepted practice is for system administrators to size servers appropriately for the expected workload (or mixture of workloads) and to monitor performance closely so as to establish baselines. The server load may include well-known server hogs, such as system backup. Such tasks are generally scheduled for periods of light demand, such as the very early hours of a Sunday morning, with an administrative policy that discourages or prohibits other demands on the server during those periods.
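
A minimal Python sketch of such a policy (the Sunday 02:00-05:00 window and the ten-minute polling interval are assumptions, and run_backup is a hypothetical placeholder for the real job):

```python
import datetime
import time

def in_quiet_window(now=None):
    """True during the assumed low-demand window: Sunday, 02:00-05:00 local time."""
    now = now or datetime.datetime.now()
    return now.weekday() == 6 and 2 <= now.hour < 5  # weekday() 6 == Sunday

def run_backup():
    print("running backup ...")  # hypothetical placeholder for the real backup job

if __name__ == "__main__":
    while not in_quiet_window():
        time.sleep(600)  # re-check every ten minutes
    run_backup()
```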

Unexpected hogs

More often, the term server hog designates an unusual load condition in which server performance falls short of the culturally accepted baseline. A common scenario in the early years of computing was thrashing, an overload condition in which the server spends more time on overhead (such as paging) than on useful work, so that aggregate performance degrades severely; for example, two departments of a large company might attempt to run heavy reports concurrently on the same mainframe. In such a situation, designating the server hog becomes a political matter of pointing fingers, since terminating either long-running report would restore the server to normal performance.
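
Whether performance has actually fallen short of the baseline can be read from the operating system's load figures. A minimal Unix-only Python sketch (the 1.5 threshold is an arbitrary assumption; real baselines come from monitoring):

```python
import os

def looks_overloaded(threshold=1.5):
    """Flag contention when the 1-minute load average far exceeds the core count."""
    one_minute, _, _ = os.getloadavg()  # Unix only
    return one_minute / os.cpu_count() > threshold
```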

Internet era

In the internet era the nature of server loads changed greatly, as clients became increasingly dispersed geographically and often increasingly anonymous: any member of the public with internet access can ask a web server anywhere in the world to deliver a web page. In this context, a server hog most commonly means a malicious one, a program written expressly to overload a remote server with excessive requests or with excessively difficult requests (such as a complex search). Deliberate use of a server hog is known as a denial-of-service attack, a behaviour exhibited by many viruses, worms and trojan horses. A petulant or vindictive computer user can also overload a remote server manually by unleashing a crap flood.
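
From the server's side, identifying which client is the hog usually comes down to counting requests per source over a sliding window. A minimal Python sketch (the ten-second window and 100-request limit are illustrative assumptions, not standard values):

```python
import time
from collections import defaultdict, deque

WINDOW = 10.0  # seconds of history to keep per client (assumed)
LIMIT = 100    # requests allowed per window before a client is flagged (assumed)

_recent = defaultdict(deque)  # client address -> timestamps of its recent requests

def note_request(client):
    """Record one request; return True if the client now looks like a hog."""
    now = time.monotonic()
    stamps = _recent[client]
    stamps.append(now)
    while stamps and now - stamps[0] > WINDOW:
        stamps.popleft()  # discard requests older than the window
    return len(stamps) > LIMIT
```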

Bots

A special case is the runaway bot: a program designed to be helpful by automating a tedious task which, through poor programming or poorly understood circumstances, goes out of control and hammers a server unceasingly at a high rate. A common example is a web spider that requests too many pages from a web server too quickly, at the expense of the server's intended audience.
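
The usual remedy is for the spider to throttle itself and honour the site's robots.txt rules. A minimal Python sketch (the site URL, five-second delay, and user-agent string are illustrative assumptions):

```python
import time
import urllib.request
import urllib.robotparser

SITE = "https://example.com"  # hypothetical site being crawled
DELAY = 5.0                   # assumed polite pause between fetches

robots = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
robots.read()  # fetch and parse the site's crawling rules

def polite_fetch(path):
    """Fetch one page, respecting robots.txt and pacing requests."""
    url = SITE + path
    if not robots.can_fetch("examplebot", url):
        return None  # the site has asked crawlers to stay away from this path
    page = urllib.request.urlopen(url, timeout=10).read()
    time.sleep(DELAY)  # throttle so the spider never hammers the server
    return page
```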
