International Conference on High Performance Computing

The International Conference on High Performance Computing (HiPC) is an international meeting on high performance computing. It serves as a forum for researchers from around the world to present current work and highlights high performance computing activities in Asia. The meeting covers all aspects of high performance computing systems and their scientific, engineering, and commercial applications.[1]

Related Research Articles

Power management is a feature of some electrical appliances, especially copiers, computers, computer CPUs, computer GPUs and computer peripherals such as monitors and printers, that turns off the power or switches the system to a low-power state when inactive. In computing this is known as PC power management and is built around a standard called ACPI, which supersedes APM. All recent computers have ACPI support.
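
On Linux, the ACPI-backed sleep states the kernel supports can be inspected through sysfs. The following minimal sketch (assuming a Linux system with /sys mounted) simply reads and prints them; it is illustrative only, not part of any power-management standard.

```cpp
// Minimal, Linux-specific sketch: list the sleep states the kernel exposes
// via sysfs (typically entries such as "freeze", "mem" for suspend-to-RAM
// and "disk" for hibernate, backed by ACPI on PC hardware).
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream states("/sys/power/state");  // sysfs file; requires Linux
    if (!states) {
        std::cerr << "could not read /sys/power/state\n";
        return 1;
    }
    std::string line;
    std::getline(states, line);                // one space-separated line
    std::cout << "Supported sleep states: " << line << '\n';
    return 0;
}
```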

The International Solid-State Circuits Conference (ISSCC) is a global forum for the presentation of advances in solid-state circuits and systems-on-a-chip. The conference offers a unique opportunity for engineers working at the cutting edge of IC design to maintain technical currency and to network with leading experts. It is held every year in February at the San Francisco Marriott hotel in downtown San Francisco and is sponsored by the IEEE Solid-State Circuits Society.

Commodity computing involves the use of large numbers of already-available computing components for parallel computing, to get the greatest amount of useful computation at low cost. It is computing done on commodity computers, as opposed to high-cost superminicomputers or boutique computers. Commodity computers are computer systems, manufactured by multiple vendors, that incorporate components based on open standards. Such systems are said to be based on commodity components, since the standardization process promotes lower costs and less differentiation among vendors' products. Standardization and decreased differentiation lower the switching or exit cost from any given vendor, increasing purchasers' leverage and preventing lock-in. A governing principle of commodity computing is that it is preferable to have more low-performance, low-cost hardware working in parallel than fewer high-performance, high-cost items. At some point a cluster contains so many discrete systems that hardware failures become routine regardless of how high any individual platform's mean time between failures (MTBF) is, so fault tolerance must be built into the controlling software. Purchases should be optimized on cost per unit of performance, not just on absolute performance per CPU at any cost, as sketched below.
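
As a toy illustration of that purchasing principle, the sketch below compares two hypothetical node types by cost per unit of performance; the prices and GFLOP/s figures are invented for the example.

```cpp
// Toy illustration of the cost-per-unit-of-performance principle with
// made-up numbers: a cheap commodity node vs. an expensive high-end node.
#include <iostream>

struct Node {
    const char* name;
    double price_usd;   // purchase price per node (hypothetical)
    double gflops;      // sustained performance per node (hypothetical)
};

int main() {
    Node commodity = {"commodity node", 4000.0, 800.0};
    Node high_end  = {"high-end node", 40000.0, 5000.0};

    for (const Node& n : {commodity, high_end}) {
        std::cout << n.name << ": "
                  << n.price_usd / n.gflops << " USD per GFLOP/s\n";
    }
    return 0;
}
// Prints 5 USD per GFLOP/s for the commodity node vs. 8 for the high-end
// node: the commodity cluster wins on cost per unit of performance even
// though each of its nodes is individually slower.
```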

David Bader (computer scientist)

David A. Bader is a Distinguished Professor and Director of the Institute for Data Science at the New Jersey Institute of Technology. Previously, he served as a Professor, Chair of the School of Computational Science and Engineering, and Executive Director of High-Performance Computing in the Georgia Tech College of Computing. In addition, Bader was selected as the director of the first Sony Toshiba IBM Center of Competence for the Cell Processor at the Georgia Institute of Technology. He is an IEEE Fellow, an AAAS Fellow, and a SIAM Fellow. His main research areas lie at the intersection of high-performance computing and real-world applications, including cybersecurity, massive-scale analytics, and computational genomics.

SCinet is the high-performance network built annually by volunteers in support of SC. SCinet is the primary network for the yearly conference and is used by attendees and exhibitors to demonstrate and test high-performance computing and networking applications.

HPX, short for High Performance ParalleX, is a runtime system for high-performance computing. It is currently under active development by the STE||AR group at Louisiana State University. Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces on increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism.
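
The futures-based style described above can be sketched in standard C++; the snippet below uses std::async and std::future as a stand-in for HPX's analogous hpx::async and hpx::future (which additionally work across the nodes of a distributed machine), so it illustrates the execution model rather than HPX's own API.

```cpp
// Sketch of the futures-based style: work is launched asynchronously and
// each result is consumed as soon as its future is ready, with no global
// barrier synchronizing all workers.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

long long partial_sum(const std::vector<int>& v, size_t lo, size_t hi) {
    return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
}

int main() {
    std::vector<int> data(1'000'000, 1);

    // Launch the two halves concurrently instead of in barrier-separated phases.
    auto first  = std::async(std::launch::async, partial_sum,
                             std::cref(data), 0, data.size() / 2);
    auto second = std::async(std::launch::async, partial_sum,
                             std::cref(data), data.size() / 2, data.size());

    std::cout << "sum = " << first.get() + second.get() << '\n';
    return 0;
}
```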

Ne-XVP was a research project carried out from 2006 to 2008 at NXP Semiconductors. The project took a holistic approach to defining a next-generation multimedia processing architecture for embedded MPSoCs that targets programmability, performance scalability, and silicon efficiency in an evolutionary way. The evolutionary way implies using existing processor cores such as NXP TriMedia as building blocks and supporting industry programming standards such as POSIX threads. Based on technology-aware design space exploration, the project concluded that hardware accelerators facilitating task management and coherency, coupled with the right dimensioning of compute cores, deliver good programmability, scalable performance, and competitive silicon efficiency.

SC, the International Conference for High Performance Computing, Networking, Storage and Analysis, is an annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. In 2016, about 11,000 people participated overall. The not-for-profit conference is run by a committee of approximately 600 volunteers who spend roughly three years organizing each conference.

Dr. Subhash Saini is a senior computer scientist at NASA. He received a Ph.D. from the University of Southern California and has held positions at the University of California, Los Angeles (UCLA), the University of California, Berkeley (UCB), and Lawrence Livermore National Laboratory (LLNL).

Dell Wyse

Wyse is an American manufacturer of cloud computing systems. They are best known for their video terminal line introduced in the 1980s, which competed with the market-leading terminals from Digital Equipment Corporation. They also had a successful line of IBM PC compatible workstations in the mid-to-late 1980s, but were outcompeted by companies such as Dell starting late in the decade. Current products include thin client hardware and software as well as desktop virtualization solutions. Other products include cloud software supporting desktop computers, laptops, and mobile devices. Dell Cloud Client Computing is partnered with IT vendors such as Citrix, IBM, Microsoft, and VMware.

Quasi-opportunistic supercomputing

Quasi-opportunistic supercomputing is a computational paradigm for supercomputing on a large number of geographically dispersed computers. Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic resource sharing.

Admission control is a validation process in communication systems where a check is performed before a connection is established to see if current resources are sufficient for the proposed connection.
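
A toy sketch of the idea, with entirely hypothetical names and numbers: the controller below admits a new flow only if the link still has enough spare bandwidth for the requested rate.

```cpp
// Toy admission controller: a connection is admitted only when the link's
// remaining capacity covers the requested rate. Real systems use far richer
// resource models; everything here is hypothetical.
#include <iostream>

class AdmissionController {
public:
    explicit AdmissionController(double capacity_mbps)
        : capacity_mbps_(capacity_mbps) {}

    // The check performed before the connection is established.
    bool admit(double requested_mbps) {
        if (allocated_mbps_ + requested_mbps > capacity_mbps_)
            return false;                  // insufficient resources: reject
        allocated_mbps_ += requested_mbps; // reserve capacity for the flow
        return true;
    }

    void release(double mbps) { allocated_mbps_ -= mbps; }

private:
    double capacity_mbps_;
    double allocated_mbps_ = 0.0;
};

int main() {
    AdmissionController link(100.0);        // 100 Mbit/s link
    std::cout << link.admit(60.0) << '\n';  // 1: admitted
    std::cout << link.admit(60.0) << '\n';  // 0: would exceed capacity
    link.release(60.0);
    std::cout << link.admit(60.0) << '\n';  // 1: admitted again
    return 0;
}
```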

The University of California High-Performance AstroComputing Center (UC-HiPACC), based at the University of California, Santa Cruz (UCSC), is a consortium of nine University of California campuses and three Department of Energy laboratories. The goal of the consortium is to support and facilitate original research and education in computational astrophysics, and to engage in public outreach and education. The UC-HiPACC consortium sponsors or co-sponsors conferences and workshops and an annual advanced international summer school at a UC campus. It promotes educational outreach to the public and maintains a website featuring the latest UC news and findings in computational astronomy, a large archive of lecture videos and presentations, and a gallery of supercomputer-generated astrophysics videos and images.

Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding more of the same type of processor, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks.
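
A minimal sketch of the dispatch decision involved, with hypothetical function names: a task is offloaded to a specialized accelerator when one is present and falls back to the general-purpose CPU otherwise (a real system would call a vendor API such as CUDA or OpenCL instead of the placeholder below).

```cpp
// Toy heterogeneous dispatch: run a kernel on a specialized accelerator if
// available, otherwise on the CPU. The accelerator path is a placeholder.
#include <iostream>
#include <vector>

void cpu_scale(std::vector<float>& v, float a) {
    for (float& x : v) x *= a;                 // general-purpose CPU path
}

void accelerator_scale(std::vector<float>& v, float a) {
    // Stand-in for a kernel offloaded to a GPU/DSP/FPGA-style coprocessor.
    for (float& x : v) x *= a;
}

bool accelerator_present() { return false; }   // hypothetical device query

int main() {
    std::vector<float> data(8, 1.0f);
    if (accelerator_present())
        accelerator_scale(data, 2.0f);         // specialized coprocessor
    else
        cpu_scale(data, 2.0f);                 // CPU fallback
    std::cout << "data[0] = " << data[0] << '\n';
    return 0;
}
```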

ACM SIGHPC

ACM SIGHPC is the Association for Computing Machinery's Special Interest Group on High Performance Computing, an international community of students, faculty, researchers, and practitioners working on research and in professional practice related to supercomputing, high-end computers, and cluster computing. The organization co-sponsors international conferences related to high performance and scientific computing, including: SC, the International Conference for High Performance Computing, Networking, Storage and Analysis; the Platform for Advanced Scientific Computing (PASC) Conference; and PPoPP, the Symposium on Principles and Practice of Parallel Programming.

Bare Machine Computing (BMC) is a programming paradigm based on bare machines. In the BMC paradigm, applications run without the support of any operating system (OS) or centralized kernel; that is, no intermediary software is loaded on the bare machine prior to running applications. The applications, called bare machine applications or simply BMC applications, do not use any persistent storage or a hard disk; instead they are stored on detachable mass storage such as a USB flash drive. A BMC program consists of a single application or a small set of applications that runs as a single executable within one address space. BMC applications have direct access to the necessary hardware resources. They are self-contained, self-managed, and self-controlled entities that boot, load, and run without using any other software components or external software. BMC applications have inherent security due to their design: there are no OS-related vulnerabilities, and each application contains only the necessary (minimal) functionality. There is no privileged mode in a BMC system, since applications run only in user mode. Also, application code is statically compiled; there is no means to dynamically alter BMC program flow during execution.

References

  1. "HPCAC Announces the 9th Swiss Annual HPC Conference in Collaboration With HPCXXL User Group". HPCwire. Retrieved 2018-01-18.