Penguin Computing

Founded: 1998
Founder: Sam Ockman
Headquarters: Fremont, California, United States
Area served: North America
Owner: SMART Global Holdings
Number of employees: 100–200
Website: www.penguincomputing.com

Penguin Computing is an American private company that supplies enterprise, artificial intelligence (AI), high-performance computing (HPC), software-defined storage, and cloud computing products and services in North America. It is based in Fremont, California.


The company's products include servers, computer clusters, networking components, digital storage, software, and an HPC cloud. Penguin Computing started as a Linux server company and now works on the design, engineering, integration, and delivery of systems based on open architectures and non-proprietary components from a variety of original equipment manufacturer (OEM) providers. Penguin Computing was an early contributor to the Open Compute Project (OCP).

High performance clusters

Penguin Computing operated as a Linux server company until 2003, when it acquired Scyld Software, a leader in Beowulf cluster management software. [1] The company is based in Fremont, California. [2] Its early software products were offered under the Scyld brand and included Scyld ClusterWare for cluster provisioning and management, as well as Scyld Cloud Manager for cloud-enabled HPC environments.

In 2015, Penguin Computing was awarded a contract under the U.S. Department of Energy's National Nuclear Security Administration (NNSA) tri-laboratory Commodity Technology Systems program, or CTS-1. [3] Under the $39 million contract, Penguin Computing provided over 7 petaFLOPS of computing power at Los Alamos National Laboratory, Sandia National Laboratories, and Lawrence Livermore National Laboratory. Coincident with this contract win, Penguin Computing built up a division focused specifically on the federal HPC market. [4]
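For a rough sense of scale, the contract figures above imply a price on the order of a few thousand dollars per teraFLOPS. A back-of-the-envelope check (using only the $39 million and 7 petaFLOPS quoted above; actual per-system pricing was not disclosed):

```python
# Rough cost per teraFLOPS implied by the CTS-1 figures quoted above:
# a $39 million contract delivering over 7 petaFLOPS of aggregate compute.
contract_usd = 39_000_000
aggregate_pflops = 7
tflops = aggregate_pflops * 1_000              # 1 petaFLOPS = 1,000 teraFLOPS
usd_per_tflops = contract_usd / tflops
print(f"${usd_per_tflops:,.0f} per teraFLOPS")  # → $5,571 per teraFLOPS
```

Since the contract delivered "over" 7 petaFLOPS, this figure is an upper bound on the effective per-teraFLOPS price.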

The CTS-1 contract was one of the first and largest deployments of Intel's Omni-Path high-performance communications architecture. [5] As a result, Penguin Computing received Intel's "Partner of the Year - HPC Technical Computing" award. [6]

Open Compute Project (OCP)

Penguin Computing was an early contributor to the Open Compute Project (OCP), is a member of the project foundation, and is one of a limited number of authorized OCP providers. [7] In November 2015 Penguin Computing announced the development of its Tundra Extreme Scale (Tundra ES) product line, with the intention of applying the findings of the OCP to high performance computing. [8] In 2020, Penguin Computing announced the new Tundra AP line of servers, which support Intel Server System compute modules in an OCP form factor.

HPC cloud computing

In 2009 Penguin Computing launched Penguin Computing On-Demand (POD), which offers high performance computing (HPC) as a cloud service. The POD cloud was one of the first remote HPC services offered on a pay-as-you-go monthly basis. Like the company's high performance clusters, the POD cloud uses a bare-metal compute model to execute code, but each user is given a virtualized login node. Penguin Computing offers users more than 150 pre-installed commercial and open source applications. POD computing nodes are connected via nonvirtualized 10 Gbit/s Ethernet or QDR InfiniBand networks. The POD cloud data center has redundant Internet links and user connectivity ranging from 50 Mbit/s to 1 Gbit/s. [9]
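The pay-as-you-go model described above can be sketched as a simple billing function. This is only an illustration of per-core-hour metering; the rate used here is a made-up placeholder, not POD's actual pricing:

```python
# Illustrative pay-as-you-go billing for a bare-metal HPC cloud job.
# The $0.25-per-core-hour rate is a made-up placeholder, not POD's
# actual pricing.
def job_cost(cores: int, hours: float, rate_per_core_hour: float) -> float:
    """Cost of a job billed by the core-hours it consumes."""
    return cores * hours * rate_per_core_hour

# A 128-core job running for 6 hours at the placeholder rate:
print(job_cost(cores=128, hours=6, rate_per_core_hour=0.25))  # → 192.0
```

The appeal of this model for HPC users is that cost scales linearly with resources actually consumed, rather than requiring the capital outlay of an on-premises cluster.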

At POD's launch, Penguin Computing's CEO Charles Wuischpard contended that other clouds were ill-suited to HPC because of the performance overhead of virtualization, and because the computing nodes allocated to customers may be far apart, introducing latency that impairs performance for some HPC programs. [10]

SMART division

In 2018 Penguin Computing was bought by the publicly traded SMART Global Holdings for $60 million, with a further $25 million due if Penguin Computing hit certain profit milestones. The $60 million included the assumption of Penguin Computing's debts; the company had borrowed $33 million from Wells Fargo to fund a new manufacturing facility. In the first quarter of fiscal year 2018, Penguin Computing had a gross profit of $10.3 million on sales of $48.5 million, with HPC computer clusters the main source of revenue. Penguin held 1.2 percent of the HPC server market, and at the time of the acquisition it had ten supercomputers in the TOP500. [11]
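The quarterly figures above imply a gross margin of roughly a fifth of revenue, which is consistent with a hardware-integration business rather than a software company:

```python
# Gross margin implied by the Q1 FY2018 figures cited above:
# $10.3 million gross profit on $48.5 million in sales.
gross_profit_usd = 10.3e6
revenue_usd = 48.5e6
margin = gross_profit_usd / revenue_usd
print(f"{margin:.1%}")  # → 21.2%
```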

References

  1. "Linux Server Company Penguin Computing to Acquire Scyld Computing, the Leading Developer of Beowulf High-Performance Clustering Software". Business Wire. 10 June 2003. Retrieved 3 June 2016.
  2. Hemsoth, Nicole (1 December 2015). "Penguin Charts Fresh Trajectory for Open Hardware". Signal Peak Ventures. Archived from the original on 30 June 2016. Retrieved 3 June 2016.
  3. "NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs". NNSA Press Releases. National Nuclear Security Administration. 20 October 2015. Retrieved 16 May 2016.
  4. Trader, Tiffany (6 January 2016). "Penguin Computing Mines Commodity Gold". HPC Wire. Retrieved 6 June 2016.
  5. "Intel® Omni Path Architecture Makes Serious Headway". Top500 The List. 29 March 2016. Archived from the original on 13 June 2016. Retrieved 6 June 2016.
  6. Bergman, Phillip (31 March 2016). "Penguin Computing Receives Partner of the Year - HPC Technical Computing Award at Intel Solutions Summit". PRWEB. Retrieved 3 June 2016.
  7. Brueckner, Rich (2 January 2016). "Penguin Computing is now Platinum Member of Open Compute Project (OCP)". Inside HPC. Retrieved 2 June 2016.
  8. "Penguin Computing's Tundra Extreme Scale Series Implements Solution Targeted for OCP-Compliant High Performance Computing Emerson Network Power's DC Power System". HPC Today. 16 November 2015. Retrieved 2 June 2016.
  9. Eadline, Douglas. "Moving HPC to the Cloud". Admin Magazine. Admin Magazine. Retrieved 30 March 2019.
  10. Niccolai, James (11 August 2009). "Penguin Puts High-performance Computing in the Cloud". PCWorld. IDG Consumer & SMB. Retrieved 6 June 2016.
  11. Feldman, Michael (12 June 2018). "Penguin Computing Acquired by SMART Global Holding". Retrieved 30 March 2019.