Data furnace

The data furnace is a method [1] of heating residential homes or offices by running computers in them and capturing the waste heat they release. Data furnaces can in theory be cheaper than concentrating computers in large data centers: electricity costs more in residential areas than in industrial zones, but the difference can be offset by charging the home owner for the heat the servers give off. Some large companies that store and process thousands of gigabytes of data believe data furnaces could cut costs because they carry little to no overhead: a traditional data center costs up to around $400 per server, [2] versus around $10 of overhead per server for a home data furnace. Individuals had already begun using computers as a heat source by 2011. [2]
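The overhead comparison above is simple arithmetic; the sketch below works it through for a hypothetical fleet. The per-server dollar figures come from the article, while the fleet size is an assumption chosen for illustration.

```python
# Per-server overhead figures cited in the article (approximate upper bounds).
DATACENTER_OVERHEAD_PER_SERVER = 400  # traditional data center, per server
FURNACE_OVERHEAD_PER_SERVER = 10      # home data furnace, per server

def overhead_savings(num_servers: int) -> int:
    """Total overhead saved by placing servers in homes instead of a data center."""
    return num_servers * (DATACENTER_OVERHEAD_PER_SERVER - FURNACE_OVERHEAD_PER_SERVER)

# A hypothetical 1,000-server deployment:
print(overhead_savings(1000))  # 390000
```

At roughly $390 per server, savings scale linearly with fleet size, which is why the idea appeals to companies operating thousands of servers.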

Usefulness

The first kind of data furnace (DF) would be a low-cost seasonal DF. It would use the home's existing broadband connection to perform delay-tolerant jobs such as content indexing or the processing of large scientific data sets, [2] and its servers would power on and compute only when the house needs heat. The second kind would be a low-bandwidth neighborhood DF. It can provide faster computation because it runs at all times, but continuous operation increases the risk of overheating; to work around this, vents to the outside can be added to the server racks to shed unneeded heat. The third option would be an eco-friendly urban DF. Like the second, it runs year-round and can vent excess heat to the outside, which would let service providers expand into urban areas more quickly, so long as their applications scale to the number of servers. Year-round operation creates a new challenge, however: for much of the year the home owner uses little to none of the heat, so the cost of electricity to run the servers cannot be offset by billing for it.
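The seasonal DF's defining behavior is that computation is gated by the home's demand for heat. The control loop below is a minimal sketch of that policy; the setpoint, temperature readings, and job names are hypothetical, and a real system would also handle networking, job checkpointing, and safety limits.

```python
from collections import deque

def run_seasonal_furnace(indoor_temps_c, jobs, setpoint_c=20.0):
    """Run queued delay-tolerant jobs only while the house calls for heat.

    indoor_temps_c: one temperature reading per control interval.
    jobs: delay-tolerant work items (e.g. content-indexing batches).
    Returns the list of jobs that actually ran.
    """
    queue = deque(jobs)
    completed = []
    for temp in indoor_temps_c:
        if temp < setpoint_c and queue:
            # House needs heat: the server powers on, and its waste
            # heat warms the home while the job runs.
            completed.append(queue.popleft())
        # Otherwise the server idles and emits (almost) no heat.
    return completed

# The house is below the setpoint for the first two readings, so two jobs run.
print(run_seasonal_furnace([18.5, 19.0, 21.0, 22.0],
                           ["index-A", "index-B", "index-C"]))  # ['index-A', 'index-B']
```

This gating is exactly why the jobs must be delay-tolerant: in warm weather the queue may sit untouched for months.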

Energy requirements

For a data furnace heating water, the water must reach at least 56°C (133°F) to prevent the development of pathogens, while staying low enough to limit the risk of scalding. For space-heating radiators, a temperature of 50-60°C (122-140°F) is suitable for a radiator with embedded processors, as long as the heating surface is large enough to dissipate the heat. [3] [dubious]

Security

There are concerns about the security of these servers, since they would sit unmonitored on private property. Unlike traditional data centers, which are monitored constantly, data furnaces should be treated as a highly insecure environment for data storage. For the best security, each server would carry a device to prevent tampering. Furthermore, all of the data on these servers would have to be encrypted so that no one except the person requesting the data would have access to it. [4]

Applications

A few companies are commercialising this concept around the world. The German company Cloud&Heat offers hot water heated by a distributed data center installed on the customer's premises. [5] The French company Qarnot computing developed a radiator that heats with embedded processors and sells the computing power generated. [6]


References

  1. Jie Liu; Michel Goraczko; Sean James; Christian Belady; Jiakang Lu; Kamin Whitehouse (June 2011). "The Data Furnace: Heating Up with Cloud Computing" (PDF). Microsoft Research. Retrieved 30 December 2013.
  2. Stross, Randall (26 November 2011). "Data Furnaces Could Bring Heat to Homes". New York Times. Retrieved 30 December 2011.
  3. Pulley, Adam (28 January 2015). "How Hot Does A Data Furnace Heating System Need To Be?" (PDF). Green Processing. Archived from the original (PDF) on 6 July 2018. Retrieved 6 July 2018.
  4. "Worried About Being Cold this Winter? How About Installing a Data Furnace?". 28 November 2011. Retrieved 30 December 2011.
  5. "Germans get free heating from the cloud". DatacenterDynamics. Retrieved 6 July 2018.
  6. Velkova, Julia (1 December 2016). "Data that warms: Waste heat, infrastructural convergence and the computation traffic commodity". Big Data & Society. 3 (2): 205395171684144. doi:10.1177/2053951716684144.