Server farm

[Image: A row of racks in a server farm]
[Image: This server farm supports the various computer networks of the Joint Task Force Guantanamo]

A server farm or server cluster is a collection of computer servers, usually maintained by an organization to supply server functionality far beyond the capability of a single machine. Server farms often consist of thousands of computers, which require a large amount of power to run and to keep cool; even at optimum performance, a server farm carries enormous financial and environmental costs. [1] Farms often include backup servers that can take over the functions of primary servers that fail. Server farms are typically colocated with the network switches and routers that enable communication between the different parts of the cluster and the cluster's users. Server "farmers" typically mount the computers, routers, power supplies, and related electronics on 19-inch racks in a server room or data center.

Applications

Server farms are commonly used for cluster computing. Many modern supercomputers comprise giant server farms of high-speed processors connected by Ethernet or by custom interconnects such as InfiniBand or Myrinet. Web hosting is another common use of a server farm; such a system is sometimes referred to collectively as a web farm. Other uses of server farms include scientific simulations (such as computational fluid dynamics) and the rendering of 3D computer-generated imagery (see render farm). [2]

Server farms are increasingly used by large enterprises instead of, or in addition to, mainframe computers. In a large server farm the failure of an individual machine is a commonplace event, so large server farms provide redundancy, automatic failover, and rapid reconfiguration of the server cluster.
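As a rough illustration of failover, the following minimal sketch probes a primary server and falls back to its backups using a simple TCP health check. The host names are hypothetical placeholders, and production farms use dedicated load balancers and cluster-management software rather than ad-hoc scripts like this:

```python
import socket

# Hypothetical server pool: a primary followed by its backups.
SERVERS = [("primary.farm.example", 8080),
           ("backup1.farm.example", 8080),
           ("backup2.farm.example", 8080)]

def is_alive(host: str, port: int, timeout: float = 1.0) -> bool:
    """Health check: can a TCP connection be opened within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server() -> tuple[str, int]:
    """Return the first healthy server, failing over down the list."""
    for host, port in SERVERS:
        if is_alive(host, port):
            return host, port
    raise RuntimeError("no healthy server available")
```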

Performance

The performance of the largest server farms (thousands of processors and up) is typically limited by the performance of the data center's cooling systems and the total electricity cost rather than by the performance of the processors themselves. [3] Computers in server farms run 24/7 and consume large amounts of electricity. For this reason, the critical design parameter for both large and continuous systems tends to be performance per watt rather than peak performance or peak performance per unit of initial cost. High-availability systems that must run 24/7 (unlike supercomputers, which can be power-cycled on demand and tend to run at much higher utilization) also pay more attention to power-saving features such as variable clock speed and the ability to turn off individual components, processor subsystems, or entire computers (via Wake-on-LAN and virtualization) according to demand without bringing down services. The network connecting the servers in a server farm is also an essential factor in overall performance, especially when running applications that process massive volumes of data. [4]
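As an illustration of powering machines back on to meet demand, here is a minimal sketch of a Wake-on-LAN "magic packet" sender. The packet format itself is standard (six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast, conventionally to port 9); the example MAC address is a placeholder:

```python
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    # Magic packet: 6 x 0xFF, then the MAC repeated 16 times.
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Usage (placeholder MAC address, for illustration only):
# send_wol("00:11:22:33:44:55")
```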

Performance per watt

The EEMBC EnergyBench, SPECpower, and the Transaction Processing Performance Council's TPC-Energy benchmarks are designed to predict performance per watt in a server farm. [5] [6] The power used by each rack of equipment can be measured at the power distribution unit. Some servers include power-tracking hardware so that farm operators can measure the power used by each server. [7] The power used by the entire server farm may be reported in terms of power usage effectiveness (PUE) or data center infrastructure efficiency (DCiE).
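For illustration, a minimal sketch of these two farm-level metrics; the function names and example figures are placeholders, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by the
    power reaching the IT equipment. 1.0 is ideal; higher values mean
    more is spent on cooling and power-distribution overhead."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data center infrastructure efficiency: the reciprocal of PUE,
    expressed as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Illustrative numbers only: 100 kW of IT load plus 50 kW of cooling
# (the rough 100:50 estimate cited below) yields PUE = 1.5, DCiE ≈ 67%.
print(pue(150.0, 100.0))   # 1.5
print(dcie(150.0, 100.0))  # 66.66...
```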

According to some estimates, for every 100 watts spent on running the servers, roughly another 50 watts is needed to cool them. [8] For this reason, the siting of a server farm can be as important as processor selection in achieving power efficiency. Iceland, which has a cold climate all year as well as a cheap and carbon-neutral geothermal electricity supply, is building its first major server-farm hosting site. [8] Fibre-optic cables are being laid from Iceland to North America and Europe to enable companies there to locate their servers in Iceland. Other countries with favorable conditions, such as Canada, [9] Finland, [10] Sweden [11] and Switzerland, [12] are also trying to attract cloud computing data centers. In these countries, heat from the servers can be cheaply vented or used to help heat buildings, reducing the energy consumption of conventional heaters. [9]

Related Research Articles

Thin client

In computer networking, a thin client is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. They are sometimes known as network computers, or in their simplest form as zero clients. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a rich client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

Supercomputer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Server (computing)

In computing, a server is a piece of computer hardware or software that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.

Energy Star

Energy Star is a program run by the U.S. Environmental Protection Agency (EPA) and U.S. Department of Energy (DOE) that promotes energy efficiency. The program provides information on the energy consumption of products and devices using different standardized methods. The Energy Star label is found on more than 75 different certified product categories, homes, commercial buildings, and industrial plants. In the United States, the Energy Star label is also shown on the Energy Guide appliance label of qualifying products.

Data center

A data center or data centre is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.

High-performance computing

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component that the cooling system in a computer is designed to dissipate under any workload.

Green computing, green IT, or ICT sustainability, is the study and practice of environmentally sustainable computing or IT.

Benchmark (computing)

In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.

Urs Hölzle

Urs Hölzle is a Swiss software engineer and technology executive. As Google's eighth employee and its first VP of Engineering, he has shaped much of Google's development processes and infrastructure, as well as its engineering culture. His most notable contributions include leading the development of fundamental cloud infrastructure such as energy-efficient data centers, distributed compute and storage systems, and software-defined networking. Until July 2023, he was the Senior Vice President of Technical Infrastructure and Google Fellow at Google. In July 2023, he transitioned to being a Google Fellow only.

Microserver

A data center 64-bit microserver is a server-class computer based on a system on a chip (SoC). The goal is to integrate all of the server motherboard's functions onto a single microchip, except DRAM, boot flash, and power circuits. The main chip thus contains more than just compute cores, caches, memory interfaces, and PCI controllers; it typically also carries SATA, networking, serial-port, and boot-flash interfaces on the same chip, eliminating support chips at the board level. Multiple microservers can be put together in a small package to construct a dense data center.

In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency, and speed of executing computer program instructions. High computer performance may involve one or more factors such as short response time, high throughput, and low utilization of computing resources.

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. When comparing computing systems, this rate is typically measured by performance on the LINPACK benchmark; the Green500 list of supercomputers is one example. Performance per watt has been suggested to be a more sustainable measure of computing than Moore's law.
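A minimal sketch of the metric itself, using made-up figures rather than any published Green500 result:

```python
def gflops_per_watt(linpack_gflops: float, power_watts: float) -> float:
    """Performance per watt in the Green500 style: sustained LINPACK
    throughput divided by the power drawn while achieving it."""
    return linpack_gflops / power_watts

# Illustrative only: 2 PFLOPS (2,000,000 GFLOPS) at 1 MW is 2 GFLOPS/W.
print(gflops_per_watt(2_000_000.0, 1_000_000.0))  # 2.0
```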

Power usage effectiveness (PUE) is a ratio that describes how efficiently a computer data center uses energy; specifically, how much of the energy is used by the computing equipment itself, in contrast to cooling and other overhead that supports it.

IT energy management or green IT is the analysis and management of energy demand within an organization's information technology department. IT energy demand accounts for approximately 2% of global CO2 emissions, roughly the same level as aviation, and represents over 10% of global energy consumption. IT can account for 25% of a modern office building's energy cost.

Aquasar

Aquasar is a supercomputer prototype created by IBM Labs in collaboration with ETH Zurich and ETH Lausanne in Switzerland. While most supercomputers use air as their coolant, Aquasar uses hot water to achieve its computing efficiency. An air-cooled section is also included so that the cooling efficiency of the two coolants can be compared, with the results used to improve the hot-water cooling's performance. The research program was initially titled "Direct use of waste heat from liquid-cooled supercomputers: the path to energy saving, emission-high performance computers and data centers." The waste heat captured by the cooling system can be recycled into the building's heating system, potentially saving money. The three-year collaborative project began in 2009 with the aim of saving energy and protecting the environment while delivering top-tier performance.

Energy Logic is a vendor-neutral approach to achieving energy efficiency in data centers. Developed and initially released in 2007, the Energy Logic efficiency model suggests ten holistic actions – encompassing IT equipment as well as traditional data center infrastructure – guided by the principles dictated by the "Cascade Effect."

Cloud computing has become a social phenomenon used by most people every day, although, as with every important social phenomenon, there are issues that limit its widespread adoption. Cloud computing is a fast-developing area that can instantly supply extensible services over the internet with the help of hardware and software virtualization. Its biggest advantage is the flexible leasing and release of resources according to the user's requirements. Other benefits include improved efficiency, offsetting of operations and management costs, and reduction of the high prices of hardware and software.

Computation offloading is the transfer of resource-intensive computational tasks to a separate processor, such as a hardware accelerator, or to an external platform, such as a cluster, grid, or cloud. Offloading to a coprocessor can be used to accelerate applications such as image rendering and mathematical calculations. Offloading computing to an external platform over a network can provide computing power and overcome the hardware limitations of a device, such as limited computational power, storage, and energy.

In computing, energy proportionality is a measure of the relationship between power consumed in a computer system, and the rate at which useful work is done. If the overall power consumption is proportional to the computer's utilization, then the machine is said to be energy proportional. Equivalently stated, for an idealized energy proportional computer, the overall energy per operation is constant for all possible workloads and operating conditions.
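A minimal sketch of the linear power model commonly used when discussing energy proportionality; the wattages are illustrative, not measurements of any particular server:

```python
def power_draw(utilization: float, idle_w: float, peak_w: float) -> float:
    """Simple linear power model: idle power plus a load-dependent term.
    A perfectly energy-proportional machine would have idle_w == 0, so
    energy per unit of work stays constant across all utilizations."""
    return idle_w + (peak_w - idle_w) * utilization

# Illustrative numbers: a server idling at 200 W with a 400 W peak
# still draws half its peak power while doing no useful work.
for u in (0.0, 0.5, 1.0):
    print(u, power_draw(u, idle_w=200.0, peak_w=400.0))
```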

References

  1. Mitrani, Isa (January 2013). "Managing performance and power consumption in a server farm". Annals of Operations Research. 202 (1): 121–122. doi:10.1007/s10479-011-0932-1. S2CID 12276102.
  2. "What is a render farm". GarageFarm. 2021-06-11. Retrieved 2021-06-11.
  3. "Luiz André Barroso". Barroso.org. doi:10.2200/S00193ED1V01Y200905CAC006. Retrieved 2012-09-20.
  4. Noormohammadpour, Mohammad; Raghavendra, Cauligi (16 July 2018). "Datacenter Traffic Control: Understanding Techniques and Tradeoffs". IEEE Communications Surveys & Tutorials. 20 (2): 1492–1525. arXiv:1712.03530. doi:10.1109/COMST.2017.2782753. S2CID 28143006.
  5. "TPC describes upcoming server power efficiency benchmark – Server Farming". Itknowledgeexchange.techtarget.com. 2009-02-19. Archived from the original on 2012-02-20. Retrieved 2012-09-20.
  6. "TPC eyes energy consumption and virtualization benchmarks". Searchdatacenter.techtarget.com. 2008-11-06. Archived from the original on 2009-09-30. Retrieved 2012-09-20.
  7. Miller, Rich (2009-04-01). "Efficient UPS Aids Google's Extreme PUE". Data Center Knowledge. Retrieved 2012-09-20.
  8. "Iceland looks to serve the world". BBC News. 2009-10-09. Retrieved 2009-10-15.
  9. "Cold front: Can Canada play a leading role in the cloud?". ChannelBuzz.ca. 2010-12-08. Retrieved 2012-09-20.
  10. "Finland – First Choice for Siting Your Cloud Computing Data Center". Fincloud.freehostingcloud.com. 2010-12-08. Retrieved 2012-09-20.
  11. Archived August 19, 2010, at the Wayback Machine
  12. Wheeland, Matthew (2010-06-30). "Swiss Carbon-Neutral Servers Hit the Cloud". GreenBiz.com. Retrieved 2012-09-20.