PSSC Labs

Company type: Private
Industry: IT hardware, IT services
Founded: California, United States (1984)
Founder: Larry Lesser
Headquarters: California, United States
Number of locations: 2
Area served: Worldwide
Key people: Alex Lesser (Vice President), [1] Larry Lesser (President), [1] Janice Lesser (CEO), Eric Lesser (Director of Operations)
Products: Computing clouds, supercomputers, big data storage servers [2]
Website: www.pssclabs.com

PSSC Labs is a California-based company that builds supercomputers and high-performance computing systems for customers in the United States and internationally. Its products include high-performance servers, clusters, workstations, and RAID storage systems for scientific research, government and military, entertainment content creators, developers, and private clouds. [3] The company has implemented clustering software from NASA Goddard's Beowulf project in its supercomputers designed for bioinformatics, medical imaging, computational chemistry and other scientific applications. [4]

Timeline

PSSC Labs was founded in 1984 by Larry Lesser. In 1998, it manufactured the Aeneas Supercomputer for Dr. Herbert Hamber of the Department of Physics and Astronomy at the University of California, Irvine; [5] the Linux-based system had a maximum speed of 20.1 gigaflops. [6] [7]

In 2001, the company developed CBeST, a collection of software packages, utilities, and custom scripts that eases cluster administration. [8]

In 2003 the company released the third version of its cluster management software, with support for 32-bit and 64-bit AMD and Intel processors, the Linux kernel, and other open-source tools. [9]

In 2005, PSSC Labs demonstrated its new water-cooling technology for high-performance computers at the ACM/IEEE Supercomputing Conference in Seattle, Washington. [10]

In 2007 the company focused on supercomputer development for life-sciences researchers and announced a solution for full-genome data analysis, covering assembly, read mapping, and analysis of large volumes of high-throughput DNA and RNA sequencing data. [11]

In 2008 PSSC Labs designed the Powerserve Quattro I/A 4000 supercomputer for genome sequencing. [12] In 2013 it released the CloudOOP server platform for big-data analytics and Hadoop, offering up to 50 TB of storage in a single rack unit (1U). [13]

In 2014 the company joined the Cloudera Partner Program and certified the CloudOOP 12000 as compatible with Cloudera Enterprise 5. The same year, MapR used the CloudOOP 12000 platform to set a record time-series database ingestion rate, [14] and the company joined the Hortonworks Partner Program.

In 2015 the CloudOOP 12000 was certified as compatible with Hortonworks HDP 2.2.

Related Research Articles

Supercomputer: Type of extremely powerful computer

A supercomputer is a type of computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, supercomputers have existed that can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
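
As a rough illustration of the arithmetic behind these figures, the sketch below computes a theoretical peak from core count, clock speed, and floating-point operations per cycle; the machine parameters are hypothetical, not measurements of any particular system.

```python
# Illustrative peak-FLOPS arithmetic (hypothetical figures, not measurements).
# Theoretical peak = cores x clock (Hz) x floating-point operations per cycle.

def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak floating-point operations per second."""
    return cores * clock_hz * flops_per_cycle

# A hypothetical 16-core desktop CPU at 3.5 GHz doing 32 FLOPs/cycle (FMA SIMD):
desktop = peak_flops(cores=16, clock_hz=3.5e9, flops_per_cycle=32)
print(f"desktop: {desktop:.2e} FLOPS")  # ~1.8e12, i.e. a few teraFLOPS

# A 100-PFLOPS supercomputer is roughly this many times faster:
print(f"ratio:   {100e15 / desktop:.0f}x")
```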

High-performance computing: Computing with supercomputers and clusters

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

GPFS is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the TOP500 list. For example, it is the filesystem of Summit at Oak Ridge National Laboratory, which was ranked the world's fastest supercomputer in the November 2019 TOP500 list. Summit is a 200-petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. Its storage filesystem, called Alpine, has 250 PB of storage using Spectrum Scale on IBM ESS storage hardware, capable of approximately 2.5 TB/s of sequential I/O and 2.2 TB/s of random I/O.

Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Hadoop was originally designed for computer clusters built from commodity hardware, which is still the common use. It has since also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
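
As a rough illustration of the MapReduce programming model described above, the following minimal in-process Python sketch runs a word count through separate map and reduce phases; real Hadoop distributes these phases across a cluster and handles hardware failures automatically, which this toy version does not.

```python
# A minimal, in-process sketch of the MapReduce programming model
# (conceptual only; real Hadoop distributes these phases across a cluster).
from collections import defaultdict

def map_phase(document: str):
    # Map: emit a (key, value) pair for each word in the input.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group values by key, then aggregate each group.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "data moves to compute"]
pairs = (pair for doc in docs for pair in map_phase(doc))
print(reduce_phase(pairs))  # {'big': 2, 'data': 2, ...}
```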

Dell EMC Isilon: Network-attached storage

Dell EMC Isilon is a scale-out network-attached storage platform offered by Dell EMC for high-volume storage, backup and archiving of unstructured data. It provides a cluster-based storage array based on industry-standard hardware, and is scalable to 50 petabytes in a single filesystem using its FreeBSD-derived OneFS file system.

Apache Solr: Open-source enterprise-search platform

Solr is an open-source enterprise-search platform, written in Java. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance. Solr is widely used for enterprise search and analytics use cases and has an active development community and regular releases.
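
As an illustration of the kind of query such a platform serves, the sketch below calls Solr's standard /select search handler over HTTP; the host, core name ("products"), and facet field are hypothetical, and the third-party requests package is assumed to be installed.

```python
# Querying Solr's HTTP search API via the standard /select handler.
import requests

params = {
    "q": "title:supercomputer",   # full-text query
    "fl": "id,title",             # fields to return
    "facet": "true",              # enable faceted search
    "facet.field": "category",    # hypothetical facet field
    "wt": "json",                 # response format
}
resp = requests.get("http://localhost:8983/solr/products/select", params=params)
resp.raise_for_status()
print(resp.json()["response"]["numFound"])  # number of matching documents
```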

Windows HPC Server 2008, released by Microsoft on 22 September 2008, is the successor product to Windows Compute Cluster Server 2003. Like WCCS, Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server software is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, an MPI library based on open-source MPICH2, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).

Cloudera, Inc. is an American data lake software company.

HPCC (High-Performance Computing Cluster), also known as DAS (Data Analytics Supercomputer), is an open source, data-intensive computing system platform developed by LexisNexis Risk Solutions. The HPCC platform incorporates a software architecture implemented on commodity computing clusters to provide high-performance, data-parallel processing for applications utilizing big data. The HPCC platform includes system configurations to support both parallel batch data processing (Thor) and high-performance online query applications using indexed data files (Roxie). The HPCC platform also includes a data-centric declarative programming language for parallel data processing called ECL.

Supercomputing in Europe: Overview of supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

Hortonworks: American software company

Hortonworks was a data software company based in Santa Clara, California that developed and supported open-source software designed to manage big data and associated processing.

Appro: American technology company

Appro was a developer of supercomputers supporting high-performance computing (HPC) markets, focused on medium- to large-scale deployments. Appro was based in Milpitas, California, with a computing center in Houston, Texas, and a manufacturing and support subsidiary in South Korea and Japan.

Cycle Computing is a company that provides software for orchestrating computing and storage resources in cloud environments. Its flagship product is CycleCloud, which supports Amazon Web Services, Google Compute Engine, Microsoft Azure, and internal infrastructure. The CycleCloud orchestration suite manages the provisioning of cloud infrastructure, orchestration of workflow execution and job queue management, automated and efficient data placement, and full process monitoring and logging within a secure process flow.

Apache Phoenix is an open source, massively parallel, relational database engine supporting OLTP for Hadoop, using Apache HBase as its backing store. Phoenix provides a JDBC driver that hides the intricacies of the NoSQL store, enabling users to create, delete, and alter SQL tables, views, indexes, and sequences; insert and delete rows singly and in bulk; and query data through SQL. Phoenix compiles queries and other statements into native NoSQL store APIs rather than using MapReduce, enabling low-latency applications to be built on top of NoSQL stores.
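
For a sense of how this looks in practice, the sketch below issues Phoenix SQL from Python through the community phoenixdb adapter, which talks to a Phoenix Query Server rather than the JDBC driver described above; the server address and table are hypothetical.

```python
# Running SQL against Apache Phoenix via the phoenixdb adapter
# (hypothetical Phoenix Query Server address and table name).
import phoenixdb

conn = phoenixdb.connect("http://localhost:8765/", autocommit=True)
cur = conn.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS metrics (host VARCHAR PRIMARY KEY, cpu DOUBLE)"
)
# Phoenix uses UPSERT rather than INSERT; parameters use qmark style.
cur.execute("UPSERT INTO metrics VALUES (?, ?)", ("node-01", 0.73))
cur.execute("SELECT host, cpu FROM metrics WHERE cpu > ?", (0.5,))
print(cur.fetchall())
```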

The Cray Urika-XA extreme analytics platform, manufactured by supercomputer maker Cray Inc., was an appliance for analyzing the massive amounts of data, usually called big data, that supercomputers collect. It was introduced in 2015 and discontinued in 2017. Organizations that use supercomputers have traditionally used multiple smaller off-the-shelf systems for data analysis, but as they see a dramatic increase in the amount of data they collect, everything from research data to retail transactions, they need data analytics systems that can make sense of it and help them use it strategically. In a nod to organizations that lean toward open-source software, the Urika-XA came pre-installed with Cloudera Enterprise Hadoop and Apache Spark.

Apache ORC: Column-oriented data storage format

Apache ORC is a free and open-source column-oriented data storage format. It is similar to other columnar-storage file formats available in the Hadoop ecosystem, such as RCFile and Parquet. It is used by most of the major data-processing frameworks, including Apache Spark, Apache Hive, Apache Flink, and Apache Hadoop.
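
As a small illustration of working with the format, the following sketch writes and reads an ORC file with pyarrow (assuming a pyarrow build with ORC support); Spark, Hive, and Flink ship their own ORC readers and writers.

```python
# Writing and reading an ORC file with pyarrow (assumes pyarrow with ORC support).
import pyarrow as pa
import pyarrow.orc as orc

table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
orc.write_table(table, "example.orc")  # columns are stored contiguously on disk

# Column-oriented layout allows reading only the columns a query needs:
read_back = orc.read_table("example.orc", columns=["name"])
print(read_back.to_pydict())  # {'name': ['a', 'b', 'c']}
```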

JUWELS: Supercomputer in Germany

JUWELS is a supercomputer developed by Atos and hosted by the Jülich Supercomputing Centre (JSC) of Forschungszentrum Jülich. It is capable of a theoretical peak of 70.980 petaflops and serves as the replacement for the now-retired JUQUEEN supercomputer. The JUWELS Booster Module was ranked the seventh-fastest supercomputer in the world at its debut on the November 2020 TOP500 list. The Booster Module is part of a modular system architecture; a second, Xeon-based JUWELS Cluster Module ranked separately as the 44th-fastest supercomputer in the world on the same list.

The Tri-Lab Operating System Stack (TOSS) is a Linux distribution based on Red Hat Enterprise Linux (RHEL) that was created to provide a software stack for high-performance computing (HPC) clusters at laboratories within the National Nuclear Security Administration (NNSA). The operating system allows multiple smaller systems to emulate an HPC platform.

References

  1. Ken Farmer (20 July 2006). "Five Questions for Alex Lesser, Vice President of PSSC Labs". LinuxHPC.org. Retrieved 17 January 2014.
  2. Jae K. Shim; Siegel (December 1999). Information Systems Management Handbook Supplement Series. Prentice Hall PTR. p. 168. ISBN 9780130124180.
  3. Sorin Nita (3 November 2011). "OCZ's Deneva 2 SSDs Get Qualified to PSSC Labs Systems". Softpedia. SoftNews NET. Retrieved 17 January 2014.
  4. Steve Silva (2008). Web Server Administration. Cengage Learning. p. 64. ISBN 9781423903239.
  5. Herbert W. Hamber (14 December 2000). "Aeneas Supercomputer". University of California, Irvine, Department of Physics. Retrieved 17 January 2014.
  6. Marc H. Levine; Jae K. Shim; Anique Qureshi (2004). The International Handbook of Computer Networks. Global Professional Publishing. p. 103. ISBN 9781858820590.
  7. "Conquering Computing: AENEAS Supercomputer". The DrAnteater Newsletter. The Office of Research & Graduate Studies. 1998. Archived from the original on 27 January 2014. Retrieved 17 January 2014.
  8. Nathan Eddy (5 July 2012). "Ingram Micro, PSSC Labs Partner on HPC Products". The VAR Guy. Penton Media. Retrieved 17 January 2014.
  9. "PSSC Labs Releases CBeST 3.0". Sysadmin. R&D Publications. Vol. 12. 2003. Retrieved 17 January 2014.
  10. Kim Peterson (16 November 2005). "Colossal computers turn on technologists at SC|05". The Seattle Times. Retrieved 18 January 2014.
  11. "CLC, PSSC Offer New Full-Genome Data Solution". Contract Pharma Magazine. Rodman Media. 12 August 2010. Retrieved 18 January 2014.
  12. "PSSC Labs Looks to Expand its Reach in the Life Sciences Market with New Hardware Release". BioInform. Genomeweb LLC. 8 April 2011. Retrieved 17 January 2014.
  13. "High density hadoop server". Archived from the original on 30 January 2015. Retrieved 29 January 2015.
  14. "Loading time series database 100 million points second". Retrieved 29 January 2015.