Typical view of HDF Explorer | |
---|---|
Initial release | 1998 |
Stable release | 1.5.009 / October 8, 2015 |
Operating system | Windows |
Website | www.space-research.org |
HDF Explorer is a data visualization program that reads the HDF, HDF5 and netCDF data file formats. It runs on the Microsoft Windows operating system. HDF Explorer was developed by Space Research Software, LLC, headquartered in Urbana-Champaign, Illinois.
Scientific visualization is an interdisciplinary branch of science concerned with the visualization of scientific phenomena. It is also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, illustrate, and glean insight from their data.
Hierarchical Data Format (HDF) is a set of file formats designed to store and organize large amounts of data. Originally developed at the National Center for Supercomputing Applications, it is supported by The HDF Group, a non-profit corporation whose mission is to ensure continued development of HDF5 technologies and the continued accessibility of data stored in HDF.
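A minimal sketch, using the third-party h5py package in Python, of the hierarchy of groups, datasets, and attributes that an HDF5 file provides; the file, group, and dataset names here are illustrative only.

```python
import numpy as np
import h5py

# Write a small hierarchical file: a group acts like a directory,
# datasets hold arrays, and attributes carry self-describing metadata.
with h5py.File("example.h5", "w") as f:
    grp = f.create_group("simulation")
    dset = grp.create_dataset("temperature", data=np.random.rand(100, 100))
    dset.attrs["units"] = "K"
    grp.attrs["run_id"] = 42

# Read the hierarchy back.
with h5py.File("example.h5", "r") as f:
    print(list(f.keys()))                              # ['simulation']
    print(f["simulation/temperature"].attrs["units"])  # 'K'
```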
NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The project homepage is hosted by the Unidata program at the University Corporation for Atmospheric Research (UCAR), which is also the chief source of netCDF software, standards development, and updates. The format is an open standard. NetCDF Classic and 64-bit Offset Format are an international standard of the Open Geospatial Consortium.
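A minimal sketch, using the netCDF4-python package, of a self-describing, array-oriented netCDF file; the dimension, variable, and attribute names are illustrative.

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("example.nc", "w", format="NETCDF4") as nc:
    nc.createDimension("time", None)   # unlimited dimension
    nc.createDimension("lat", 73)
    nc.createDimension("lon", 144)
    temp = nc.createVariable("temperature", "f4", ("time", "lat", "lon"))
    temp.units = "K"                   # metadata travels with the data
    temp[0, :, :] = 280.0 + 10.0 * np.random.rand(73, 144)
```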
The Space Telescope Science Institute (STScI) is the science operations center for the Hubble Space Telescope (HST) and for the James Webb Space Telescope (JWST). STScI is located on the Johns Hopkins University Homewood Campus in Baltimore, Maryland and was established in 1981 as a community-based science center that is operated for NASA by the Association of Universities for Research in Astronomy (AURA). In addition to performing continuing science operations of HST and preparing for scientific exploration with JWST, STScI manages and operates the Mikulski Archive for Space Telescopes (MAST), the Data Management Center for the Kepler mission and a number of other activities benefiting from its expertise in and infrastructure for supporting the operations of space-based astronomical observatories. Most of the funding for STScI activities comes from contracts with NASA's Goddard Space Flight Center, but there are smaller activities funded by NASA's Ames Research Center, NASA's Jet Propulsion Laboratory, and the European Space Agency (ESA). The staff at STScI consists of scientists, software engineers, data management and telescope operations personnel, education and public outreach experts, and administrative and business support personnel. There are approximately 100 Ph.D. scientists working at STScI, 15 of whom are ESA staff on assignment to the HST project. The total STScI staff consists of about 450 people.
Flexible Image Transport System (FITS) is an open standard defining a digital file format useful for the storage, transmission, and processing of data formatted as N-dimensional arrays or tables. FITS is the most commonly used digital file format in astronomy. The FITS standard has special (optional) features for scientific data; for example, it includes many provisions for describing photometric and spatial calibration information, together with image origin metadata.
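A minimal sketch, using the astropy.io.fits package (one of several FITS libraries), of an image array stored together with header keywords that describe its origin and units; the keyword values are illustrative.

```python
import numpy as np
from astropy.io import fits

image = np.zeros((256, 256), dtype=np.float32)
hdu = fits.PrimaryHDU(data=image)
hdu.header["TELESCOP"] = "EXAMPLE"   # illustrative origin metadata
hdu.header["BUNIT"] = "adu"          # units of the pixel values
hdu.writeto("example.fits", overwrite=True)

with fits.open("example.fits") as hdul:
    print(hdul[0].header["TELESCOP"], hdul[0].data.shape)
```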
The Hubble Deep Field (HDF) is an image of a small region in the constellation Ursa Major, constructed from a series of observations by the Hubble Space Telescope. It covers an area about 2.6 arcminutes on a side, about one 24-millionth of the whole sky, which is equivalent in angular size to a tennis ball at a distance of 100 metres. The image was assembled from 342 separate exposures taken with the Space Telescope's Wide Field and Planetary Camera 2 over ten consecutive days between December 18 and December 28, 1995.
OpenDX stands for Open Data Explorer and is IBM's scientific data visualization software. It can handle complex domains along with measured or computed data. The data may be scalar, vector, or tensor fields at different points of the object. The points at which data is measured need not be equally spaced, nor homogeneously spaced. The project started in 1991 as Visualization Data Explorer.
Common Data Format (CDF) is a library and toolkit that was developed by the National Space Science Data Center (NSSDC) at NASA starting in 1985. The software is an interface for the storage and manipulation of multi-dimensional data sets.
The Hubble Deep Field South is a composite of several hundred individual images taken using the Hubble Space Telescope's Wide Field and Planetary Camera 2 over 10 days in September and October 1998. It followed the great success of the original Hubble Deep Field in facilitating the study of extremely distant galaxies in early stages of their evolution. While the WFPC2 took very deep optical images, nearby fields were simultaneously imaged by the Space Telescope Imaging Spectrograph (STIS) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS).
Tecplot is the name of a family of visualization & analysis software tools developed by Tecplot, Inc., which is headquartered in Bellevue, Washington. The firm was formerly operated as Amtec Engineering. In 2016, the firm was acquired by Vela Software, an operating group of Constellation Software, Inc. (TSX:CSU).
Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Originally designed for computer clusters built from commodity hardware—still the common use—it has also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
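A minimal word-count sketch of the MapReduce model that Hadoop implements; in a real Hadoop job the map and reduce steps run distributed over HDFS blocks (for example via Hadoop Streaming), while here they are simulated locally on a small list of lines purely for illustration.

```python
from itertools import groupby

def map_phase(lines):
    """Map step: emit a (word, 1) pair for every word."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum counts per word after identical keys are grouped."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

lines = ["to be or not to be", "to see or not to see"]
print(dict(reduce_phase(map_phase(lines))))
# {'be': 2, 'not': 2, 'or': 2, 'see': 2, 'to': 4}
```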
Giovanni is a web interface that allows users to analyze NASA's gridded data from various satellite and surface observations.
HBase is an open-source, non-relational, distributed database modeled after Google's Bigtable and written in Java. It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS or Alluxio, providing Bigtable-like capabilities for Hadoop. That is, it provides a fault-tolerant way of storing large quantities of sparse data.
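A minimal sketch using the third-party happybase Python client, which talks to HBase through its Thrift gateway; the host, table, and column-family names are assumptions.

```python
import happybase

connection = happybase.Connection("hbase-host")   # assumed Thrift gateway host
table = connection.table("web_pages")             # assumed existing table

# Rows are sparse and wide: only the columns actually written are stored.
table.put(b"example.org", {b"content:html": b"<html>...</html>",
                           b"meta:fetched": b"2015-10-08"})
print(table.row(b"example.org"))
```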
XMDF is a library providing a standard format for the geometric data storage of river cross-sections, 2D/3D structured and unstructured meshes, geometric paths through space, and associated time data. XMDF uses HDF5 for cross-platform data storage and compression. It was initiated at the Engineer Research and Development Center (ERDC) and developed at Brigham Young University. The API includes interfaces for C/C++ and Fortran.
Cultural analytics is the exploration and analysis of massive cultural data sets of visual material – both digitized visual artifacts and contemporary visual and interactive media. Taking on the challenge of how to best explore large collections of rich cultural content, cultural analytics researchers developed new methods and intuitive visual techniques which rely on high-resolution visualization and digital image processing. These methods are used to address existing research questions in the humanities, to explore new questions, and to develop new theoretical concepts that fit the mega-scale of digital culture in the early 21st century.
GEOMS – Generic Earth Observation Metadata Standard is a metadata standard used for archiving data from ground-based networks, such as the NDACC, and for using this kind of data to validate NASA and ESA satellite data.
Apache Drill is an open-source software framework that supports data-intensive distributed applications for interactive analysis of large-scale datasets. Drill is an open-source version of Google's Dremel system, which is available as an infrastructure service called Google BigQuery. One explicitly stated design goal is for Drill to scale to 10,000 servers or more and to process petabytes of data and trillions of records in seconds. Drill is an Apache top-level project.
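A minimal sketch, assuming a local Drill instance with its REST interface on the default port 8047; the JSON file being queried is hypothetical. Drill can run SQL directly over raw files without loading them into a database first.

```python
import requests

resp = requests.post(
    "http://localhost:8047/query.json",
    json={"queryType": "SQL",
          "query": "SELECT name, value FROM dfs.`/tmp/measurements.json` LIMIT 10"},
)
print(resp.json()["rows"])   # query results as a list of row dictionaries
```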
H2O is open-source software for big-data analysis. It is produced by the company H2O.ai. H2O allows users to fit thousands of potential models as part of discovering patterns in data.
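A minimal sketch of H2O's Python interface, using its AutoML facility as one way to fit and rank many candidate models; the dataset path and response-column name are assumptions.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()                                        # start or attach to a local H2O cluster
frame = h2o.import_file("training_data.csv")      # hypothetical dataset
aml = H2OAutoML(max_models=20, seed=1)
aml.train(y="target", training_frame=frame)       # "target" is an assumed response column
print(aml.leaderboard)                            # candidate models ranked by validation metric
```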
Apache Impala is an open source massively parallel processing (MPP) SQL query engine for data stored in a computer cluster running Apache Hadoop. Impala has been described as the open-source equivalent of Google F1, which inspired its development in 2012.
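A minimal sketch using the third-party impyla client; the host and table name are assumptions, and 21050 is Impala's default HiveServer2-protocol port.

```python
from impala.dbapi import connect

conn = connect(host="impala-host", port=21050)    # assumed Impala daemon host
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM web_logs")   # hypothetical table
print(cursor.fetchall())
```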
A distributed file system for cloud computing is a file system that allows many clients to have access to data and supports operations on that data. Each data file may be partitioned into several parts called chunks, and each chunk may be stored on a different remote machine, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability, and integrity are the main requirements for a secure system.
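A toy illustration, not tied to any particular file system's API, of splitting a file into fixed-size chunks and assigning each chunk to a different machine so that clients can read and process the parts in parallel.

```python
def assign_chunks(file_size, chunk_size, machines):
    """Return (chunk_index, machine) pairs, spreading chunks round-robin."""
    num_chunks = (file_size + chunk_size - 1) // chunk_size
    return [(i, machines[i % len(machines)]) for i in range(num_chunks)]

# e.g. a 1 GiB file in 64 MiB chunks over three hypothetical nodes
print(assign_chunks(2**30, 64 * 2**20, ["node-a", "node-b", "node-c"]))
```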
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.