Asperitas Microdatacenter

A microdatacenter is a small, self-contained data center consisting of computing, storage, networking, power and cooling. Micro data centers may employ water cooling to achieve compactness, a low component count, low cost and high energy efficiency. Their small size allows decentralised deployment in places where traditional data centers cannot go, for instance edge computing for the Internet of things.

The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

Distributed Micro Edge Data centers

Micro edge data centers (10-200 kW) function as forward locations of the core data centers. The edge nodes are continuously connected to, and replicated with, the core data centers and several other strategic edge nodes. This provides constant availability through geo-redundancy. [1]

By making information available in multiple locations at the same time, interactions with that information can easily be shifted between physical facilities. The capacity of overhead installations can then be minimised to cover only normal operation, plus a shutdown phase in case of emergency during which active data processes are moved to a different facility.
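
The geo-redundancy and emergency-drain behaviour described above can be sketched in a few lines. This is an illustrative toy, not Asperitas software; the node names and the all-replicas write policy are assumptions for the example.

```python
# Toy sketch of geo-redundant replication across edge nodes, with a drain
# step that moves data off a node before a planned or emergency shutdown.
# Node names are hypothetical.

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.store = {}      # key -> value held at this location
        self.online = True

class ReplicatedStore:
    """Writes every object to all reachable nodes; reads from any online one."""
    def __init__(self, nodes):
        self.nodes = nodes

    def put(self, key, value):
        for node in self.nodes:
            if node.online:
                node.store[key] = value

    def get(self, key):
        for node in self.nodes:
            if node.online and key in node.store:
                return node.store[key]
        raise KeyError(key)

    def drain(self, node):
        """Shutdown phase: copy anything unique off the node, then take it offline."""
        others = [n for n in self.nodes if n is not node and n.online]
        for key, value in node.store.items():
            for other in others:
                other.store.setdefault(key, value)
        node.online = False

nodes = [EdgeNode("ams-edge-1"), EdgeNode("rtm-edge-1"), EdgeNode("utr-core-1")]
store = ReplicatedStore(nodes)
store.put("sensor/42", b"raw-frame")
store.drain(nodes[0])          # ams-edge-1 shuts down
print(store.get("sensor/42"))  # still served from a surviving replica
```

Because every object exists in several locations before the drain, taking one facility offline never interrupts access to the data.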

The micro edge nodes are small locations with minimised overhead installations. They have simplified configurations consisting of a small data floor, a switchboard and energy delivery, often without redundancy in power or cooling infrastructure (Immersed Computing® provides a significant thermal buffer), but with sufficient sustainable Li-ion battery power (e.g. a Tesla Powerpack) to allow for replication and shutdown. The facilities are optimally equipped with liquid cooling technologies to reduce overhead installations. This allows them to become enclosed air environments, preventing environmental impact such as noise or exterior installations. The liquid infrastructure is cooled with whatever external cooling strategy is available on site.
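
Whether the battery bank is "sufficient to allow for replication and shutdown" is a sizing question. The sketch below uses assumed figures (a 50 kW node, one ~210 kWh Powerpack-class unit, 1.5 hours to replicate and power off), not vendor data.

```python
# Back-of-the-envelope check: does the battery hold an edge node long
# enough to replicate its data out and shut down cleanly?
# All numbers below are assumptions for illustration.

def battery_runtime_hours(capacity_kwh, load_kw, usable_fraction=0.9):
    """Hours of runtime at a constant load, derated for usable capacity."""
    return capacity_kwh * usable_fraction / load_kw

node_load_kw = 50                # small node within the 10-200 kW range
battery_kwh = 210                # roughly one Tesla Powerpack-class unit
replicate_and_shutdown_h = 1.5   # assumed time to sync replicas and power off

runtime_h = battery_runtime_hours(battery_kwh, node_load_kw)
print(f"runtime: {runtime_h:.2f} h, required: {replicate_and_shutdown_h} h")
assert runtime_h >= replicate_and_shutdown_h
```

With these assumptions the node rides out roughly 3.8 hours on battery, a comfortable margin over the replication-and-shutdown window.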

Edge management

Management of the distributed data center model is possible through the emergence of software platforms providing ubiquitous management of data, network and computation capacity. Such platforms already exist for traditional centralised infrastructure, but new challenges emerge from this hybrid, distributed architecture. Closer to the end users, edge nodes in urban areas face new constraints in terms of energy consumption and heat production. Containerisation, through technologies like Docker [2] or Singularity, [3] opens great opportunities to make applications more scalable, flexible and less dependent on the infrastructure. Many frameworks have appeared recently (Swarm, Kubernetes [4]) to manage decentralised clusters. Some of them also integrate energy and heat management by design, like Q.ware [5] developed by Qarnot computing. [6] This positive dynamic in the software industry is an essential pillar for enabling core data centers and edge nodes within an integrated architecture.
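
"Energy and heat management by design" can be made concrete with a toy placement policy. The sketch below is not the Q.ware algorithm; it is a hypothetical scheduler that prefers nodes whose waste heat is currently reusable, breaking ties by utilisation.

```python
# Toy energy- and heat-aware placement across a core data center and an
# edge node, in the spirit of the platforms described above.
# Node list and fields are illustrative assumptions.

def place(job_kw, nodes):
    """Place a job of job_kw on the best node; returns the node name."""
    candidates = [n for n in nodes
                  if n["used_kw"] + job_kw <= n["capacity_kw"]]
    if not candidates:
        raise RuntimeError("no node can host the job")
    # Prefer nodes with local heat demand (waste heat gets reused),
    # then the least-loaded node among those.
    best = min(candidates,
               key=lambda n: (not n["heat_demand"],
                              n["used_kw"] / n["capacity_kw"]))
    best["used_kw"] += job_kw
    return best["name"]

nodes = [
    {"name": "core-dc",   "capacity_kw": 500, "used_kw": 100, "heat_demand": False},
    {"name": "edge-pool", "capacity_kw": 50,  "used_kw": 10,  "heat_demand": True},
]
print(place(5, nodes))  # edge-pool wins: its heat is reused, despite its size
```

Once the edge node is full, the same policy falls back to the core data center, so heat reuse is a preference rather than a hard constraint.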

Network optimization

The use of core data centers and edge nodes allows for network optimisation by preventing long-distance transport of raw (large) data and allowing data to be processed close to the source. By bringing data which is in high demand closer to the end user (caching), high-volume data transmission across long-distance backbones is greatly reduced, as is latency, which is a critical factor for delivering a good end-user experience.
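
The caching effect can be shown with a minimal simulation: only cache misses at the edge cross the long-distance backbone. The request pattern and FIFO eviction policy below are assumptions for illustration.

```python
# Minimal simulation of an edge cache in front of a backbone link:
# only misses trigger a long-distance fetch from the core data center.

def serve(requests, cache_size):
    """Return how many requests had to cross the backbone."""
    cache, backbone_fetches = [], 0   # cache as a simple FIFO of keys
    for key in requests:
        if key not in cache:
            backbone_fetches += 1
            cache.append(key)
            if len(cache) > cache_size:
                cache.pop(0)          # evict the oldest entry
    return backbone_fetches

# 100 requests over a small popular working set: most hits stay at the edge.
requests = [f"video/{i % 5}" for i in range(100)]
print(serve(requests, cache_size=10))  # 5 backbone fetches instead of 100
```

With a working set that fits in the edge cache, backbone traffic collapses to one fetch per distinct object, and every repeat request is served locally at edge latency.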

Energy grid balancing

One of the limitations on data center growth is the capacity of the existing power grid. In most areas of the world, the power grid was designed and implemented long before data centers existed. There are numerous areas where the power grid will reach its maximum capacity within the next 3–5 years. The traditional data center approach concentrates high loads on very specific parts of the grid. With the distributed data center model, load on the power grid is more balanced and the impact of expansion greatly reduced.
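
The balancing argument is simple arithmetic: the same total IT load stresses the grid far less when it is spread over many segments. The numbers below (one 1 MW facility versus ten 100 kW nodes over ten grid segments) are assumed for illustration.

```python
# Illustrative comparison of peak load per grid segment:
# one centralised 1 MW data center vs ten distributed 100 kW nodes.

def peak_per_segment(loads_kw, segments):
    """Spread the listed loads round-robin over grid segments; return peaks."""
    peaks = [0.0] * segments
    for i, load in enumerate(loads_kw):
        peaks[i % segments] += load
    return peaks

centralised = peak_per_segment([1000.0], segments=10)
distributed = peak_per_segment([100.0] * 10, segments=10)
print(max(centralised), max(distributed))  # 1000.0 vs 100.0
```

The total demand is identical, but the worst-case load on any single part of the grid drops by an order of magnitude, which is what delays the point at which grid expansion becomes necessary.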

Energy production

By focusing on the reuse of energy, each edge node can reject its thermal energy directly into a reusable heat infrastructure (district heating/heat storage), building heating (hospitals/industry), water heating (hospitals/zoos) or other heat users. The core data centers become large suppliers to district heating networks or are connected to 24/7 industries which require constant heat within a large-scale industrial process.
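
Nearly all electrical power drawn by the IT load ends up as low-grade heat, so the reusable output of an edge node is easy to estimate. The capture fraction and per-home heat demand below are rough assumptions, not Asperitas figures.

```python
# Rough arithmetic (assumed figures): annual reusable heat from an edge
# node feeding a district heating loop, and the households it could supply.

def reusable_heat_mwh_per_year(it_load_kw, capture_fraction=0.9):
    """Annual captured heat in MWh, assuming 24/7 operation."""
    hours_per_year = 24 * 365
    return it_load_kw * capture_fraction * hours_per_year / 1000.0

heat_mwh = reusable_heat_mwh_per_year(100)  # a 100 kW edge node
homes = heat_mwh / 10.0  # assumed ~10 MWh heat demand per home per year
print(f"{heat_mwh:.0f} MWh/year, roughly {homes:.0f} homes")
```

Under these assumptions a single 100 kW node delivers on the order of 800 MWh of heat per year, enough to offset the heating demand of several dozen homes on a district heating network.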


References

  1. "Data replication in Azure Storage". docs.microsoft.com. Retrieved 2017-07-25.
  2. "Docker". Docker. Retrieved 2017-07-25.
  3. "Singularity | Singularity". singularity.lbl.gov. Retrieved 2017-07-25.
  4. "Kubernetes". Kubernetes. Retrieved 2017-07-25.
  5. "Qarnot - The first computing heater for smart buildings". www.qarnot.com. Retrieved 2017-07-25.
  6. Qarnot Computing. "Qarnot Computing". compute.qarnot.com. Retrieved 2017-07-25.