SciNet Consortium

[Image: First wave of SciNet computer installation process]
[Image: SciNet CTO, Chris Loken (rightmost), at a data center discussion panel]

SciNet is a consortium of the University of Toronto and affiliated Ontario hospitals. It has received funding from both the federal and provincial government, Faculties at the University of Toronto, and affiliated hospitals.

It is one of seven regional High Performance Computing consortia across Canada and operates the most powerful university HPC system outside of the United States. As of November 2008, the partially constructed systems already ranked #53 on the TOP500 List, the only Canadian HPC system in the top one hundred. The parallel systems were anticipated to rank around #50 and #25 upon completion in June 2009. The TOP500 list for June 2009 ranked the GPC iDataPlex system at #16, while the TCS dropped to #80.

The SciNet offices are based on the St. George campus; however, to accommodate the large floor-space and power needs, the datacentre facility is housed in a warehouse about 30 km north of campus in Vaughan.

At the core of SciNet research are six key areas of study: Astronomy and Astrophysics, Aerospace and Biomedical Engineering, High Energy Particle Physics, Integrative Computational Biology, Planetary Physics, and Theoretical Chemical Physics.

History

SciNet was initially formed in the fall of 2004 following an agreement between the Canadian high-performance computing community to develop a response to the newly created National Platform Fund. The community felt that funding from the NPF would enable the development of a collective national capability in HPC. The Canadian HPC community was successful in its NPF proposal and SciNet was awarded a portion of that funding.

SciNet finalized its contract with IBM to build the system in July 2008, and the formal announcement came on August 14, 2008.[1] On Thursday, June 18, 2009, the most powerful supercomputer in Canada went online; it would have ranked as the twelfth most powerful computer worldwide had it been completed six months earlier.[2]

Specifications

SciNet has two compute clusters, each optimized for a different type of computing:

General Purpose Cluster

The General Purpose Cluster consists of 3,780 IBM System x iDataPlex dx360 M3 nodes, each with two quad-core Intel Nehalem (Xeon 5540) processors running at 2.53 GHz, totaling 30,240 cores in 45 racks. (An iDataPlex rack cabinet provides 84 rack units of space.[3]) All nodes are connected with Gigabit Ethernet, and 864 nodes additionally use DDR InfiniBand to provide high-speed, low-latency communication for message-passing applications.[4] The water-cooled system draws roughly as much energy as four thousand homes. To exploit the cold Canadian climate, the system is notified when the external air temperature drops below a set threshold, at which point the chiller switches over to the "free-air" cooling available.[4] SciNet, IBM Corp and Compute Canada collaborated on the supercomputer venture.[2][5][6] The new computer system at U of T's SciNet is the largest Intel-processor-based IBM installation globally.[7]
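The quoted core count, and the roughly 300 trillion calculations per second mentioned later in this article, follow from simple arithmetic on these specifications. The sketch below assumes (this figure is not from the article) that a Nehalem core can retire up to 4 double-precision floating-point operations per clock cycle:

```python
# Back-of-the-envelope check of the GPC figures quoted above.
nodes = 3780
sockets_per_node = 2       # two quad-core processors per node
cores_per_socket = 4
clock_hz = 2.53e9          # 2.53 GHz
flops_per_cycle = 4        # assumed peak for a Nehalem core (not from the article)

total_cores = nodes * sockets_per_node * cores_per_socket
peak_flops = total_cores * clock_hz * flops_per_cycle

print(total_cores)             # 30240, matching the article
print(peak_flops / 1e12)       # ~306 TFLOPS theoretical peak
```

The result (~306 teraFLOPS peak) is consistent with the "300 trillion calculations per second" figure cited for the machine, which suggests that figure refers to theoretical peak rather than sustained LINPACK performance.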

Data center

The computer room itself is 3,000 square feet (280 m2) on a raised floor. It has a 735-ton chiller and cooling towers for "free-air" cooling. A significant research area that will be addressed using the SciNet machines is climate change and global warming, which is why creating one of the greenest datacentres in the world was of key importance in this project. A traditional datacentre generally spends about 33% of its incoming energy on cooling and other non-computing consumption; SciNet and IBM have built a centre that spends less than 20% on these areas.
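The overhead percentages above translate directly into power usage effectiveness (PUE), the standard efficiency metric for data centers: PUE is total facility power divided by power delivered to the computing equipment, so an overhead fraction f gives PUE = 1 / (1 − f). A minimal sketch:

```python
# Convert "fraction of incoming power spent on non-computing loads"
# into PUE (total facility power / IT equipment power).
def pue(overhead_fraction):
    return 1.0 / (1.0 - overhead_fraction)

print(round(pue(0.33), 2))   # ~1.49 for the traditional datacentre described
print(round(pue(0.20), 2))   # 1.25, the upper bound implied for SciNet's centre
```

On this reading, the SciNet facility operates below a PUE of about 1.25, versus roughly 1.5 for the traditional design described in the article.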

Partners

Founding Institution
Affiliated Hospitals

Common uses

The U of T supercomputer, which can perform 300 trillion calculations per second, will be used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research and climate change models, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations. One such simulation concerns the Big Bang: the Large Hadron Collider (LHC) at CERN in Geneva will produce cataclysmic conditions that mimic the beginning of time, and the U of T supercomputer will examine the resulting particle collisions.[2][5][6] Part of the collaboration with the LHC will be to answer questions about why matter has mass and what makes up the mass of the Universe. Additional areas of research will include models of greenhouse gas-induced global warming and its effect on Arctic sea ice. The new supercomputer will also support the international ATLAS project in exploring the forces that govern the universe.[7]

References

  1. "U of T to acquire Canada's most powerful supercomputer from IBM". University of Toronto. 2008-08-14. Retrieved 2009-05-08.
  2. "Toronto team completes Canada's most powerful supercomputer". CBC News. June 18, 2009. Retrieved 2009-06-18.
  3. Implementing an IBM System x iDataPlex Solution. Archived 2012-01-11 at the Wayback Machine.
  4. SciNet: Lessons Learned from Building a Power-efficient Top-20 System and Data Centre.
  5. Hall, Joseph (June 18, 2009). "U of T supercomputer probes origins of the universe". The Star. Retrieved 2009-06-18.
  6. "University of Toronto's Supercomputer Goes Online Thursday". All Headline News. June 18, 2009. Archived from the original on June 24, 2009. Retrieved 2009-06-18.
  7. "IBM Supercomputer at University of Toronto Is Canada's Most Powerful". Newswire. CNW Group Ltd. June 18, 2009. Retrieved 2009-06-18.