Cycle Computing

Type: Privately held company
Industry: Software
Founded: 2005
Headquarters: United States
Area served: Worldwide
Key people: Jason Stowe (CEO)
Website: www.cyclecomputing.com

Cycle Computing is a company that provides software for orchestrating computing and storage resources in cloud environments. Its flagship product, CycleCloud, supports Amazon Web Services, Google Compute Engine, Microsoft Azure, and internal infrastructure. The CycleCloud orchestration suite manages the provisioning of cloud infrastructure, workflow execution and job queue management, automated data placement, and process monitoring and logging within a secure process flow.
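CycleCloud's internal interfaces are not described here, but the provisioning step that such an orchestration layer automates can be illustrated with a minimal sketch against the public AWS API using boto3. This is not CycleCloud's API; the AMI ID, instance type, count, and tag values below are hypothetical placeholders.

    # Illustrative sketch only: request a small pool of worker nodes on AWS and
    # wait until they are running, the kind of step an orchestration layer such
    # as CycleCloud automates. All identifiers below are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask EC2 for four identical worker instances tagged as part of one cluster.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical worker image
        InstanceType="c5.large",
        MinCount=4,
        MaxCount=4,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "cluster", "Value": "demo-hpc"}],
        }],
    )
    instance_ids = [i["InstanceId"] for i in response["Instances"]]

    # Block until the instances report as running before dispatching any work.
    waiter = ec2.get_waiter("instance_running")
    waiter.wait(InstanceIds=instance_ids)
    print("Provisioned nodes:", instance_ids)

A full orchestration suite layers job scheduling, data placement, monitoring, and teardown on top of calls like these; the sketch shows only the provisioning step.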


History

Cycle Computing was founded in 2005.[1] Its original offerings were based around the HTCondor scheduler and focused on maximizing the effectiveness of internal resources. Cycle Computing offered support for HTCondor as well as CycleServer, which provided metascheduling, reporting, and management tools for HTCondor resources. Early customers spanned a number of industries, including insurance, pharmaceuticals, manufacturing, and academia.
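The HTCondor workloads described above are typically expressed as many independent jobs handed to a scheduler. As a hedged illustration (not Cycle Computing's code), the following sketch uses HTCondor's Python bindings to queue a batch of such jobs; the executable and argument values are hypothetical placeholders.

    # Minimal sketch: queueing independent tasks through the HTCondor Python
    # bindings. The executable and arguments are hypothetical placeholders.
    import htcondor

    # Describe one job; $(Process) expands to 0, 1, ... for each queued instance.
    job = htcondor.Submit({
        "executable": "/usr/local/bin/score_compound",   # hypothetical program
        "arguments": "--input ligand_$(Process).mol2",
        "output": "score_$(Process).out",
        "error": "score_$(Process).err",
        "log": "score.log",
        "request_cpus": "1",
        "request_memory": "2GB",
    })

    # Hand 100 instances of the job to the local scheduler daemon (schedd).
    schedd = htcondor.Schedd()
    result = schedd.submit(job, count=100)
    print("Submitted job cluster", result.cluster())

Tools such as CycleServer then sit above one or more HTCondor pools to report on and manage jobs like these.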

With the advent of large public cloud offerings, Cycle Computing expanded its tools to allow customers to make use of dynamically provisioned cloud environments. Key technologies developed include validating that resources were correctly added in the cloud (a patent was awarded in 2015[2]), managing data placement and consistency, and supporting multiple cloud providers within a single workflow.
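The resource-validation idea mentioned above, confirming that newly provisioned cloud nodes actually came up correctly and replacing those that did not, can be sketched in a few lines. This is a hedged illustration of the general pattern, not the patented implementation; the provision_node, node_is_healthy, and terminate_node helpers are hypothetical stand-ins for provider-specific calls.

    # Hedged sketch of a validate-and-replace loop: keep provisioning nodes until
    # the requested number pass a health check, releasing any that never do.
    # The three callables are hypothetical stand-ins for cloud-provider calls.
    import time

    PROVISION_TIMEOUT = 600   # seconds to wait before declaring a node faulty
    POLL_INTERVAL = 30        # seconds between health checks

    def ensure_healthy_nodes(requested, provision_node, node_is_healthy, terminate_node):
        """Return `requested` nodes that passed their health check."""
        healthy = []
        while len(healthy) < requested:
            node = provision_node()
            deadline = time.time() + PROVISION_TIMEOUT
            while time.time() < deadline:
                if node_is_healthy(node):      # e.g. the node joined the scheduler pool
                    healthy.append(node)
                    break
                time.sleep(POLL_INTERVAL)
            else:
                # The node never became healthy: release it and provision a replacement.
                terminate_node(node)
        return healthy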

On August 15, 2017, Microsoft announced its acquisition of Cycle Computing.[3]

Large runs

In April 2011, Cycle Computing announced "Tanuki", a 10,000-core Amazon Web Services cluster used by Genentech.[4]

In September 2011, a Cycle Computing HPC cluster called Nekomata (Japanese for "Monster Cat") was offered at $1,279 per hour, providing 30,472 processor cores with 27 TB of memory and 2 PB of storage. An unnamed pharmaceutical company used the cluster for 7 hours on a molecular modeling task, paying about $9,000.[5][6][7]

In April 2012, Cycle Computing announced that, working in collaboration with the scientific software company Schrödinger, it had screened 21 million compounds in less than three hours using a 50,000-core cluster.[8]

In November 2013, Cycle Computing announced that, working in collaboration with the scientific software company Schrödinger, it had helped Mark Thompson, a professor of chemistry at the University of Southern California, screen about 205,000 compounds in search of the right material for a new generation of inexpensive and highly efficient solar panels. The job took less than a day and cost $33,000 in total. The computing cluster used 156,000 cores spread across eight regions and had a peak capacity of 1.21 petaFLOPS.[9][10][11][12][13]

In November 2014, Cycle Computing worked with a researcher at HGST to run a hard drive simulation workload. The computation would have taken over a month on internal resources, but completed in 7 hours running on 70,000 cores in Amazon Web Services, at a cost of less than $6,000.[14][15]

In September 2015, Cycle Computing and the Broad Institute announced a 50,000-core cluster to run on Google Compute Engine.[16]

Related Research Articles

<span class="mw-page-title-main">Supercomputer</span> Type of extremely powerful computer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.

<span class="mw-page-title-main">High-performance computing</span> Computing with supercomputers and clusters

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

HTCondor is an open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks. It can be used to manage workload on a dedicated cluster of computers, or to farm out work to idle desktop computers – so-called cycle scavenging. HTCondor runs on Linux, Unix, Mac OS X, FreeBSD, and Microsoft Windows operating systems. HTCondor can integrate both dedicated resources and non-dedicated desktop machines into one computing environment.

<span class="mw-page-title-main">Univa</span> Software company

Univa was a software company that developed workload management and cloud management products for compute-intensive applications in the data center and across public, private, and hybrid clouds, before being acquired by Altair Engineering in September 2020.

<span class="mw-page-title-main">Computer cluster</span> Set of computers configured in a distributed computing system

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

<span class="mw-page-title-main">Cloud computing</span> Form of shared Internet-based computing

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

Windows HPC Server 2008, released by Microsoft on 22 September 2008, is the successor product to Windows Compute Cluster Server 2003. Like WCCS, Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server software is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, an MPI library based on open-source MPICH2, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).

The National Center for Computational Sciences (NCCS) is a United States Department of Energy (DOE) Leadership Computing Facility that houses the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility charged with helping researchers solve challenging scientific problems of global interest with a combination of leading high-performance computing (HPC) resources and international expertise in scientific computing.

<span class="mw-page-title-main">SciNet Consortium</span> Scientific research group between the University of Toronto and local hospitals

SciNet is a consortium of the University of Toronto and affiliated Ontario hospitals. It has received funding from the federal and provincial governments, faculties at the University of Toronto, and affiliated hospitals.

Techila Distributed Computing Engine is a commercial grid computing software product. It speeds up simulation, analysis and other computational applications by enabling scalability across the IT resources in the user's on-premises data center and in the user's own cloud account. Techila Distributed Computing Engine is developed and licensed by Techila Technologies Ltd, a privately held company headquartered in Tampere, Finland. The product is also available as an on-demand solution in Google Cloud Launcher, the online marketplace created and operated by Google. According to IDC, the solution enables organizations to create HPC infrastructure without the major capital investments and operating expenses required by new HPC hardware.

<span class="mw-page-title-main">OpenStack</span> Cloud computing software

OpenStack is a free, open standard cloud computing platform. It is mostly deployed as infrastructure-as-a-service (IaaS) in both public and private clouds where virtual servers and other resources are made available to users. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users manage it either through a web-based dashboard, through command-line tools, or through RESTful web services.

Schrödinger, Inc. is an international scientific software company that specializes in developing computational tools and software for drug discovery and materials science.

Univa Grid Engine (UGE) is a batch-queuing system, forked from Sun Grid Engine (SGE). The software schedules resources in a data center applying user-configurable policies to help improve resource sharing and throughput by maximizing resource utilization. The product can be deployed to run on-premises, using IaaS cloud computing or in a hybrid cloud environment.

<span class="mw-page-title-main">Supercomputing in Europe</span> Overview of supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

<span class="mw-page-title-main">Supercomputer architecture</span> Aspect of supercomputer

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Google Compute Engine (GCE) is the Infrastructure as a Service (IaaS) component of Google Cloud Platform which is built on the global infrastructure that runs Google's search engine, Gmail, YouTube and other services. Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be launched from the standard images or custom images created by users. GCE users must authenticate based on OAuth 2.0 before launching the VMs. Google Compute Engine can be accessed via the Developer Console, RESTful API or command-line interface (CLI).

<span class="mw-page-title-main">Singularity (software)</span> Free, cross-platform and open-source computer program

Singularity is a free and open-source computer program that performs operating-system-level virtualization, also known as containerization.

<span class="mw-page-title-main">Cerebras</span> American semiconductor company

Cerebras Systems is an American artificial intelligence company with offices in Sunnyvale, San Diego, Toronto, Tokyo and Bangalore. Cerebras builds computer systems for complex artificial intelligence deep learning applications.

Containerization is operating system-level virtualization or application-level virtualization over multiple network resources so that software applications can run in isolated user spaces called containers in any cloud or non-cloud environment, regardless of type or vendor.

References

  1. "Cycle Computing Nets Investment to Boost High-Performance Computing". Fortune. Retrieved 2021-01-30.
  2. "Method and system for automatically detecting and resolving infrastructure faults in cloud infrastructure".
  3. "Microsoft acquires high-performance computing startup Cycle Computing to improve Azure". TechCrunch. 15 August 2017. Retrieved 2021-01-30.
  4. "Cycle Computing fires up 10,000-core HPC cloud on EC2". The Register .
  5. Anthony, Sebastian (September 20, 2011). "Rent the world's 30th-fastest, 30,472-core supercomputer for $1,279 per hour". ExtremeTech . Retrieved January 26, 2014.
  6. "New CycleCloud HPC Cluster Is a Triple Threat: 30000 cores, $1279/Hour, & Grill monitoring GUI for Chef". Cycle Computing. September 19, 2011. Archived from the original on March 8, 2017. Retrieved January 26, 2014.
  7. Brodkin, Jon (September 20, 2011). "$1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud: A supercomputer built on Amazon's cloud is used for pharma research". Ars Technica. Retrieved January 26, 2014.
  8. Darrow, Barb (April 19, 2012). "Cycle Computing spins up 50K core Amazon cluster". GigaOm. Retrieved January 26, 2014.
  9. "Back to the Future: 1.21 petaFLOPS(RPeak), 156,000-core CycleCloud HPC runs 264 years of Materials Science". Cycle Computing. November 12, 2013. Archived from the original on February 1, 2014. Retrieved January 26, 2014.
  10. Yirka, Bob (November 12, 2013). "Cycle Computing uses Amazon computing services to do work of supercomputer". Phys.org. Retrieved January 26, 2014.
  11. Darrow, Barb (November 12, 2013). "Cycle Computing once again showcases Amazon's high-performance computing potential". GigaOm. Retrieved January 26, 2014.
  12. Shankland, Stephen (November 12, 2013). "Supercomputing simulation employs 156,000 Amazon processor cores: To simulate 205,000 molecules as quickly as possible for a USC simulation, Cycle Computing fired up a mammoth amount of Amazon servers around the globe". CNET. Retrieved January 26, 2014.
  13. Brueckner, Rich (November 13, 2013). "Slidecast: How Cycle Computing Spun Up a Petascale CycleCloud". Inside HPC. Retrieved January 26, 2014.
  14. "HGST buys 70,000-core cloud HPC Cluster, breaks record, returns it 8 hours later" . Retrieved February 5, 2016.
  15. "Cycle Helps HGST Stand Up 70,000 Core AWS Cloud". 12 November 2014.
  16. "Google, Cycle Computing Pair for Broad Genomics Effort". 8 September 2015.