A server farm or server cluster is a collection of computer servers, usually maintained by an organization to supply server functionality far beyond the capability of a single machine. They often consist of thousands of computers which require a large amount of power to run and to keep cool. At the optimum performance level, a server farm has enormous financial and environmental costs. [1] They often include backup servers that can take over the functions of primary servers that may fail. Server farms are typically collocated with the network switches and/or routers that enable communication between different parts of the cluster and the cluster's users. Server "farmers" typically mount computers, routers, power supplies and related electronics on 19-inch racks in a server room or data center.
Server farms are commonly used for cluster computing. Many modern supercomputers comprise giant server farms of high-speed processors connected by either Ethernet or custom interconnects such as InfiniBand or Myrinet. Web hosting is a common use of a server farm; such a system is sometimes collectively referred to as a web farm. Other uses of server farms include scientific simulations (such as computational fluid dynamics) and the rendering of 3D computer-generated imagery (see render farm). [2]
Server farms are increasingly being used instead of or in addition to mainframe computers by large enterprises. In large server farms, the failure of an individual machine is a commonplace event: large server farms provide redundancy, automatic failover, and rapid reconfiguration of the server cluster.
The performance of the largest server farms (thousands of processors and up) is typically limited by the performance of the data center's cooling systems and the total electricity cost rather than by the performance of the processors. [3] Computers in server farms run 24/7 and consume large amounts of electricity. For this reason, the critical design parameter for most large systems tends to be performance per watt rather than cost per unit of peak performance. Also, for high-availability systems that must run 24/7 (unlike supercomputers, which can be power-cycled on demand and tend to run at much higher utilization), there is more attention to power-saving features such as variable clock speed and the ability to turn off processor parts, other computer components, and entire machines (via Wake-on-LAN and virtualization) according to demand without bringing down services. The network connecting the servers in a server farm is also an essential factor in overall performance, especially when running applications that process massive volumes of data. [4]
The EEMBC EnergyBench, SPECpower, and the Transaction Processing Performance Council TPC-Energy are benchmarks designed to predict performance per watt in a server farm. [5] [6] The power used by each rack of equipment can be measured at the power distribution unit. Some servers include power tracking hardware so the people running the server farm can measure the power used by each server. [7] The power used by the entire server farm may be reported in terms of power usage effectiveness or data center infrastructure efficiency.
According to some estimates, for every 100 watts spent on running the servers, roughly another 50 watts is needed to cool them. [8] For this reason, the siting of a server farm can be as important as processor selection in achieving power efficiency. Iceland, which has a cold climate all year as well as cheap and carbon-neutral geothermal electricity supply, is building its first major server farm hosting site. [8] Fibre optic cables are being laid from Iceland to North America and Europe to enable companies there to locate their servers in Iceland. Other countries with favorable conditions, such as Canada, [9] Finland, [10] Sweden [11] and Switzerland, [12] are trying to attract cloud computing data centers. In these countries, heat from the servers can be cheaply vented or used to help heat buildings, thus reducing the energy consumption of conventional heaters. [9]
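The rough rule of thumb above (about 50 watts of cooling per 100 watts of server load) can be turned into a simple estimate of total facility power. The function below is an illustrative sketch, not a model from the source; the default ratio merely reflects the article's estimate.

```python
# Illustrative sketch: estimating total facility power from IT load,
# using the article's rough figure that every 100 W of server power
# needs about 50 W of cooling (i.e. a cooling overhead ratio of 0.5).

def total_facility_power(it_power_watts: float, cooling_ratio: float = 0.5) -> float:
    """Return total power (IT load plus cooling) for a given IT load."""
    return it_power_watts * (1 + cooling_ratio)

# A rack drawing 10 kW of IT load would then need roughly 5 kW of cooling:
print(total_facility_power(10_000))  # 15000.0
```

Siting a farm in a cold climate effectively lowers the cooling ratio, which is why location can matter as much as hardware selection.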
In computer networking, a thin client, sometimes called slim client or lean client, is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. They are sometimes known as network computers, or in their simplest form as zero clients. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a rich client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful, and technologically superior exascale supercomputers.
A server is a computer that provides information to other computers called "clients" on a computer network. This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
Energy Star is a program run by the U.S. Environmental Protection Agency (EPA) and U.S. Department of Energy (DOE) that promotes energy efficiency. The program provides information on the energy consumption of products and devices using different standardized methods. The Energy Star label is found on more than 75 different certified product categories, homes, commercial buildings, and industrial plants. In the United States, the Energy Star label is also shown on the Energy Guide appliance label of qualifying products.
A data center or data centre is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component that the cooling system in a computer is designed to dissipate under any workload.
Green computing, green IT, or ICT sustainability, is the study and practice of environmentally sustainable computing or IT.
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.
Urs Hölzle is a Swiss software engineer and technology executive. As Google's eighth employee and its first VP of Engineering, he has shaped much of Google's development processes and infrastructure, as well as its engineering culture. His most notable contributions include leading the development of fundamental cloud infrastructure such as energy-efficient data centers, distributed compute and storage systems, and software-defined networking. Until July 2023, he was the Senior Vice President of Technical Infrastructure and Google Fellow at Google. In July 2023, he transitioned to being a Google Fellow only.
A data center 64-bit microserver is a server-class computer based on a system on a chip (SoC). The goal is to integrate all of the server motherboard functions onto a single microchip, except DRAM, boot flash, and power circuits. Thus, the main chip contains more than just compute cores, caches, memory interfaces, and PCI controllers; it typically also contains SATA, networking, serial port, and boot flash interfaces on the same chip. This eliminates support chips at the board level. Multiple microservers can be put together in a small package to construct a dense data center.
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency, and speed of executing computer program instructions. High computer performance may involve one or more factors such as short response time, high throughput, and low utilization of computing resources.
In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. This rate is typically measured by performance on the LINPACK benchmark when trying to compare between computing systems: an example using this is the Green500 list of supercomputers. Performance per watt has been suggested to be a more sustainable measure of computing than Moore's Law.
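As a simple illustration (with hypothetical numbers, not drawn from any published benchmark result), performance per watt is just the delivered computation rate divided by the power consumed:

```python
# Performance per watt = computation rate / power draw.
# The figures below are hypothetical, for illustration only.

def performance_per_watt(flops: float, watts: float) -> float:
    """Return the number of floating-point operations delivered per watt."""
    return flops / watts

# A system delivering 1 petaFLOPS (1e15 FLOPS) while drawing 500 kW
# achieves 2 GFLOPS per watt:
print(performance_per_watt(1e15, 5e5))  # 2000000000.0
```

Rankings such as the Green500 apply the same idea, dividing measured LINPACK performance by measured power consumption.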
Power usage effectiveness (PUE) or power unit efficiency is a ratio that describes how efficiently a computer data center uses energy; specifically, how much energy is used by the computing equipment.
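The two facility-efficiency metrics mentioned here can be sketched as follows; the energy figures are hypothetical. PUE is total facility energy divided by IT equipment energy (an ideal facility would score 1.0), and data center infrastructure efficiency (DCiE) is its reciprocal expressed as a percentage.

```python
# Sketch of PUE and DCiE with hypothetical energy figures.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data center infrastructure efficiency: (1 / PUE) as a percentage."""
    return 100.0 * it_equipment_kwh / total_facility_kwh

# A facility consuming 1500 kWh overall to deliver 1000 kWh to IT equipment:
print(pue(1500.0, 1000.0))   # 1.5
print(dcie(1500.0, 1000.0))  # about 66.7
```

A PUE of 1.5 means that for every kilowatt-hour reaching the computing equipment, another half kilowatt-hour goes to cooling, power distribution, and other overhead.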
IT energy management, or green IT, is the analysis and management of energy demand within the information technology department of an organization. IT energy demand accounts for approximately 2% of global CO2 emissions, about the same level as aviation, and represents over 10% of global energy consumption. IT can account for 25% of a modern office building's energy cost.
Computation offloading is the transfer of resource intensive computational tasks to a separate processor, such as a hardware accelerator, or an external platform, such as a cluster, grid, or a cloud. Offloading to a coprocessor can be used to accelerate applications including: image rendering and mathematical calculations. Offloading computing to an external platform over a network can provide computing power and overcome hardware limitations of a device, such as limited computational power, storage, and energy.
In computing, energy proportionality is a measure of the relationship between power consumed in a computer system, and the rate at which useful work is done. If the overall power consumption is proportional to the computer's utilization, then the machine is said to be energy proportional. Equivalently stated, for an idealized energy proportional computer, the overall energy per operation is constant for all possible workloads and operating conditions.
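A common simplification (assumed here for illustration, not taken from the source) models server power as a fixed idle draw plus a component that scales linearly with utilization. Under this model, an energy-proportional machine is simply one with zero idle draw:

```python
# Linear power model: idle power plus utilization-scaled dynamic power.
# The wattages below are hypothetical.

def power_draw(utilization: float, idle_watts: float, peak_watts: float) -> float:
    """Power consumed at a given utilization (0.0 to 1.0)."""
    return idle_watts + utilization * (peak_watts - idle_watts)

# An idealized energy-proportional server (no idle draw) at 50% load:
print(power_draw(0.5, 0.0, 200.0))    # 100.0
# A typical server with 100 W idle draw uses 150 W at the same load:
print(power_draw(0.5, 100.0, 200.0))  # 150.0
```

The second server spends 75% of its peak power to do 50% of its peak work, so its energy per operation rises as utilization falls, which is exactly the behavior energy proportionality aims to eliminate.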
The industrial internet of things (IIoT) refers to interconnected sensors, instruments, and other devices networked together with computers for industrial applications, including manufacturing and energy management. This connectivity allows for data collection, exchange, and analysis, potentially facilitating improvements in productivity and efficiency as well as other economic benefits. The IIoT is an evolution of the distributed control system (DCS) that allows for a higher degree of automation by using cloud computing to refine and optimize the process controls.
A green data center, or sustainable data center, is a service facility which utilizes energy-efficient technologies. They do not contain obsolete systems, and take advantage of newer, more efficient technologies.