Platform Computing

Company type: Private
Industry: Cloud computing, high-performance computing, distributed computing, grid computing, computer software
Founded: Toronto, Ontario, Canada (1992)
Fate: Acquired by IBM
Headquarters: Markham, Ontario, Canada
Products: Platform ISF, Platform LSF, Platform Symphony, Platform Cluster Manager, Platform Manager, Platform MPI
Revenue: US$71.6 million (2010) [1]
Number of employees: 530 [2]
Website: www.platform.com

Platform Computing was a privately held software company best known for its job-scheduling product, Load Sharing Facility (LSF). It was founded in 1992 in Toronto, Ontario, Canada, and headquartered in Markham, Ontario, with 11 branch offices across the United States, Europe, and Asia. [3]

In January 2012, Platform Computing was acquired by IBM. [4]

History

Platform headquarters in Canada.

Platform Computing was founded by Songnian Zhou, Jingwen Wang, and Bing Wu in 1992. [5] Its first product, LSF, was based on the Utopia research project at the University of Toronto. [6] The LSF software was developed partially with funding from CANARIE (Canadian Advanced Network and Research for Industry and Education). [7]

Platform's revenue was approximately $300,000 in 1993 and reached $12 million in 1997. Revenue grew 34% year over year to US$46.2 million in 2001, and reached US$50 million in 2003. [8]

In 1999, Platform announced the SiteAssure suite to address the website availability and monitoring market. [9]

On October 29, 2007, Platform Computing acquired the Scali Manage business from Norway-based Scali AS. Scali Manage was cluster management software. [10] On August 1, 2008, Platform acquired the remainder of the Scali business, taking on its implementation of the industry-standard Message Passing Interface (MPI), Scali MPI, and rebranding it Platform MPI. [11]

On June 22, 2009, Platform Computing announced its first product for the cloud computing market: Platform ISF (Infrastructure Sharing Facility), which enabled organizations to set up and manage private clouds, controlling both physical and virtual resources. [12] [13]

In August 2009, Platform acquired HP-MPI from Hewlett-Packard. [14]

In January 2012, Platform Computing was acquired by IBM. [15]

Open-source participation

Memberships

Platform Computing was a member of the following organizations:

Standards

Platform products adopted the following standards:


References

  1. "2011 Branham300 Online - Platform Computing Details". Retrieved 2011-04-05.
  2. "Platform Computing Inc. Corporate Facts". Retrieved 2011-04-03.
  3. "Contact".
  4. "IBM Closes on Acquisition of Platform Computing".
  5. "GridConnections" (PDF). OGF. Retrieved 2007-12-29.
  6. "Utopia: A Load Sharing Facility for Large, Heterogeneous Distributed Computer Systems". CiteSeerX 10.1.1.121.1434.
  7. "Shaping the future: success stories from the CANARIE files" (PDF). CANARIE. Archived from the original on July 20, 2004. Retrieved 2011-04-05.
  8. "Platform Computing Inc. Company Profile". Yahoo Business. Archived from the original on 2005-09-27. Retrieved 2008-10-02.
  9. Connor, Deni (Nov 8, 1999). "The changing face of web site management". NetworkWorld.
  10. "Platform Computing Acquires Scali Manage Business" (Press release). Platform Computing. 2008-10-02. Archived from the original on October 7, 2008.
  11. "Platform Computing Acquires Scali MPI Business" (Press release). Platform Computing. August 1, 2008. Archived from the original on 2009-11-22. Retrieved 2008-10-02.
  12. "Platform Computing announces private cloud management software". Archived from the original on 2010-05-16. Retrieved 2009-06-26.
  13. "Platform leaps from grids to clouds". The Register. Jun 22, 2009.
  14. "Platform Computing Acquires MPI Product from HP".
  15. "IBM Closes on Acquisition of Platform Computing". Archived from the original on 2012-05-08. Retrieved 2024-02-25.
  16. "Platform Computing Announces Commercial Support for Apache Hadoop Distributed File System (HDFS)".
  17. "Platform Lava". Archived from the original on 2011-04-21. Retrieved 2011-03-22.
  18. "Red Hat HPC Solution". Archived from the original on 2010-12-18. Retrieved 2011-03-24.
  19. Platform opensource. [permanent dead link]
  20. "Systems Management". Archived from the original on 2011-03-03. Retrieved 2011-03-22.
  21. http://grid1.jlu.edu.cn/csf Archived 2011-07-07 at the Wayback Machine.