Predecessor | Global Grid Forum (2002), Grid Forum (1998) |
---|---|
Merger of | Global Grid Forum and Enterprise Grid Alliance (2006) |
Formation | 2006 |
Type | Standards Development Organization |
Purpose | Developing standards for grids and creating grid communities |
Chair, Board of Directors | Andrew Grimshaw |
President | Alan Sill |
VP of Standards | Jens Jensen |
VP of Community | Wolfgang Ziegler |
Website | www |
The Open Grid Forum (OGF) is a community of users, developers, and vendors for standardization of grid computing. It was formed in 2006 by the merger of the Global Grid Forum and the Enterprise Grid Alliance. The OGF models its process on the Internet Engineering Task Force (IETF), and produces standards documents known by acronyms such as OGSA, OGSI, and JSDL.
The OGF has two principal functions plus an administrative function: it is the standards organization for grid computing, and it builds communities within the overall grid community, extending it within both academia and industry. Each function area is divided into groups of three types: working groups, with a tightly defined role (usually producing a standard); research groups, with a looser remit to bring people together to discuss developments in their field, generate use cases, and spawn working groups; and community groups, which are restricted to community functions.
Three meetings are organized per year, divided (approximately evenly after averaging over a number of years) between North America, Europe and East Asia. Many working groups organize face-to-face meetings in the interim.
The concept of a forum to bring together developers, practitioners, and users of distributed computing (known as grid computing at the time) was discussed at a "Birds of a Feather" session in November 1998 at the SC98 supercomputing conference. [1] Based on the response at this BOF, Ian Foster and Bill Johnston convened the first Grid Forum meeting at NASA Ames Research Center in June 1999, drawing roughly 100 people, mostly from the US. A group of organizers nominated Charlie Catlett (of Argonne National Laboratory and the University of Chicago) to serve as the initial chair, a choice confirmed by a plenary vote held at the second Grid Forum meeting in Chicago in October 1999. [2] [3] With advice and assistance from the Internet Engineering Task Force (IETF), the forum established a process based on the IETF's, and it is managed by a steering group.
During 1998, groups similar to the Grid Forum began to organize in Europe (called eGrid) and Japan. Discussions among the leaders of these groups led them to combine as the Global Grid Forum (GGF), which met for the first time in Amsterdam in March 2001; GGF-1 followed five Grid Forum meetings. Catlett served as GGF chair for two 3-year terms and was succeeded by Mark Linesch (from Hewlett-Packard) in September 2004. The Enterprise Grid Alliance (EGA), formed in 2004, focused more on enterprise data-center vendors such as EMC Corporation, NetApp, and Oracle Corporation. [4] [5] At GGF-18 (the 23rd gathering of the forum, counting the first five Grid Forum meetings) in September 2006, GGF became the Open Grid Forum (OGF) through its merger with EGA. [6] In September 2007, Craig Lee of the Aerospace Corporation became chair. [7]
OGF has specified a number of technologies, which are described below. In addition to technical standards, the OGF published community-developed informational and experimental documents.
The first version of the DRMAA API was implemented in Sun's Grid Engine and in the University of Wisconsin–Madison's Condor cycle scavenger. The separate Globus Alliance maintains an implementation of some of these standards through the Globus Toolkit. A release of UNICORE is based on the OGSA architecture and JSDL.
Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.
Open Grid Services Architecture (OGSA) describes a service-oriented architecture for a grid computing environment for business and scientific use. It was developed within the Open Grid Forum, which was called the Global Grid Forum (GGF) at the time, around 2002 to 2006.
HTCondor is an open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks. It can be used to manage workload on a dedicated cluster of computers, or to farm out work to idle desktop computers – so-called cycle scavenging. HTCondor runs on Linux, Unix, Mac OS X, FreeBSD, and Microsoft Windows operating systems. HTCondor can integrate both dedicated resources and non-dedicated desktop machines into one computing environment.
The Open Grid Services Infrastructure (OGSI) was published by the Global Grid Forum (GGF) as a proposed recommendation in June 2003. It was intended to provide an infrastructure layer for the Open Grid Services Architecture (OGSA). OGSI takes the statelessness issues into account by essentially extending Web services to accommodate grid computing resources that are both transient and stateful.
UNICORE (UNiform Interface to COmputing REsources) is a grid computing technology for resources such as supercomputers or cluster systems and information stored in databases. UNICORE was developed in two projects funded by the German Federal Ministry of Education and Research (BMBF). In European-funded projects UNICORE evolved into a middleware system used at several supercomputer centers, and it served as a basis for other research projects. The UNICORE technology is open source under the BSD licence and available at SourceForge.
TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.
Job Submission Description Language is an extensible XML specification from the Global Grid Forum for the description of simple tasks to non-interactive computer execution systems. Currently at version 1.0, the specification focuses on the description of computational task submissions to traditional high-performance computer systems like batch schedulers.
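As a rough sketch of the format, a minimal JSDL document can be assembled with Python's standard XML library. The element structure and namespace URIs below follow the JSDL 1.0 specification as best recalled here; a real submission would also carry resource requirements and data-staging elements, which are omitted:

```python
import xml.etree.ElementTree as ET

# JSDL 1.0 core and POSIX application extension namespaces
JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"
ET.register_namespace("jsdl", JSDL)
ET.register_namespace("jsdl-posix", POSIX)

def make_job(name, executable, args):
    # JobDefinition is the root of every JSDL document
    root = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(root, f"{{{JSDL}}}JobDescription")
    ident = ET.SubElement(desc, f"{{{JSDL}}}JobIdentification")
    ET.SubElement(ident, f"{{{JSDL}}}JobName").text = name
    # The POSIXApplication extension carries the executable and arguments
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for a in args:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = a
    return ET.tostring(root, encoding="unicode")

print(make_job("render-frame", "/bin/echo", ["hello"]))
```

A batch scheduler consuming such a document maps the POSIXApplication element onto a job script for its own queueing system.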
Platform Computing was a privately held software company primarily known for its job scheduling product, Load Sharing Facility (LSF). It was founded in 1992 in Toronto, Ontario, Canada and headquartered in Markham, Ontario with 11 branch offices across the United States, Europe and Asia.
Distributed Resource Management Application API (DRMAA) is a high-level Open Grid Forum (OGF) API specification for the submission and control of jobs to a distributed resource management (DRM) system, such as a cluster or grid computing infrastructure. The scope of the API covers all the high level functionality required for applications to submit, control, and monitor jobs on execution resources in the DRM system.
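The submit/monitor/wait lifecycle that DRMAA standardizes can be illustrated with a toy in-process stand-in. The class and method names below merely mirror the spec's verbs and are not the real DRMAA binding; jobs run as local subprocesses instead of being handed to a DRM system:

```python
import subprocess

class MiniSession:
    """Toy stand-in for a DRMAA session: it mirrors the submit/wait
    lifecycle but runs jobs as local subprocesses rather than handing
    them to a real distributed resource manager."""

    def run_job(self, command, args):
        # DRMAA's runJob returns an opaque job identifier; here the
        # Popen handle plays that role
        return subprocess.Popen([command, *args],
                                stdout=subprocess.PIPE, text=True)

    def wait(self, job):
        # DRMAA's wait blocks until the job finishes and reports status
        out, _ = job.communicate()
        return job.returncode, out

session = MiniSession()
job = session.run_job("echo", ["hello", "grid"])
status, output = session.wait(job)
print(status, output.strip())  # 0 hello grid
```

In a real DRMAA implementation the same three calls would dispatch the job to a cluster scheduler such as Grid Engine or Condor.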
CDDLM (Configuration Description, Deployment, and Lifecycle Management) is a Global Grid Forum standard for the management, deployment, and configuration of grid service lifecycles and inter-organization resources.
EPCC, formerly the Edinburgh Parallel Computing Centre, is a supercomputing centre based at the University of Edinburgh. Since its foundation in 1990, its stated mission has been to accelerate the effective exploitation of novel computing throughout industry, academia and commerce.
Advanced Resource Connector (ARC) is a grid computing middleware introduced by NorduGrid. It provides a common interface for submission of computational tasks to different distributed computing systems and thus can enable grid infrastructures of varying size and complexity. The set of services and utilities providing the interface is known as ARC Computing Element (ARC-CE). ARC-CE functionality includes data staging and caching, developed in order to support data-intensive distributed computing. ARC is an open source software distributed under the Apache License 2.0.
Charlie Catlett is a senior computer scientist at Argonne National Laboratory and a visiting senior fellow at the Mansueto Institute for Urban Innovation at the University of Chicago. From 2020 to 2022 he was a senior research scientist at the University of Illinois Discovery Partners Institute. Earlier he was a senior fellow in the Computation Institute, a joint institute of Argonne National Laboratory and the University of Chicago, and a senior fellow at the University of Chicago's Harris School of Public Policy.
The Simple API for Grid Applications (SAGA) is a family of related standards specified by the Open Grid Forum to define an application programming interface (API) for common distributed computing functionality.
OMII-UK is an open-source software organisation for the UK research community.
Windows HPC Server 2008, released by Microsoft on 22 September 2008, is the successor product to Windows Compute Cluster Server 2003. Like WCCS, Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server software is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, an MPI library based on open-source MPICH2, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).
gLite is a middleware computer software project for grid computing used by the CERN LHC experiments and other scientific domains. It was implemented by collaborative efforts of more than 80 people in 12 different academic and industrial research centers in Europe. gLite provides a framework for building applications tapping into distributed computing and storage resources across the Internet. The gLite services were adopted by more than 250 computing centres, and used by more than 15000 researchers in Europe and around the world.
The Open Cloud Computing Interface (OCCI) is a set of specifications delivered through the Open Grid Forum, for cloud computing service providers. OCCI has a set of implementations that act as proofs of concept. It builds upon World Wide Web fundamentals by using the Representational State Transfer (REST) approach for interacting with services.
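In OCCI's REST-based text rendering, type information travels in a Category header whose general shape is `Category: <term>; scheme="<scheme>"; class="<class>"`. The helper below sketches that rendering; the exact attribute set of the full specification is abbreviated here:

```python
def occi_category(term, scheme, cls):
    # OCCI text rendering of a Category:
    #   Category: <term>; scheme="<scheme>"; class="<class>"
    return f'Category: {term}; scheme="{scheme}"; class="{cls}"'

# A compute resource from the OCCI infrastructure extension
hdr = occi_category("compute",
                    "http://schemas.ogf.org/occi/infrastructure#",
                    "kind")
print(hdr)
```

A client would send such a header in an HTTP POST to a provider's collection URL to instantiate the corresponding resource.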
Data Format Description Language (DFDL), published as an Open Grid Forum Recommendation in February 2021, is a modeling language for describing general text and binary data in a standard way. A DFDL model or schema allows any text or binary data to be read from its native format and presented as an instance of an information set. The same DFDL schema also allows data to be taken from an instance of an information set and written out to its native format.
GridRPC, in distributed computing, is Remote Procedure Call over a grid. The paradigm was proposed by the GridRPC working group of the Open Grid Forum (OGF), and an API was defined so that clients can access remote servers as simply as making a function call. It is used in numerous grid middleware systems for its simplicity of implementation, and was standardized by the OGF in 2007. For interoperability between the different existing middleware, the API document was followed by one describing the recommended use and behavior of the different GridRPC API implementations. Work then continued on GridRPC Data Management, which was standardized in 2011.
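The core idea, a handle that binds a named remote function so the client can invoke it like a local call, either blocking or asynchronously, can be sketched with a toy stand-in. The names below echo the GridRPC verbs (call, call_async, wait) but are illustrative only; a thread pool substitutes for a remote grid server:

```python
from concurrent.futures import ThreadPoolExecutor

class FunctionHandle:
    """Toy stand-in for a GridRPC function handle: it binds a function
    to a thread pool instead of to a remote grid server."""

    def __init__(self, executor, func):
        self.executor = executor
        self.func = func

    def call(self, *args):
        # Blocking invocation, analogous to GridRPC's grpc_call
        return self.func(*args)

    def call_async(self, *args):
        # Non-blocking invocation, analogous to grpc_call_async; the
        # future plays the role of the session ID later passed to wait
        return self.executor.submit(self.func, *args)

pool = ThreadPoolExecutor()
handle = FunctionHandle(pool, lambda x, y: x * y)
print(handle.call(6, 7))            # blocking call: 42
future = handle.call_async(6, 7)
print(future.result())              # wait on the async call: 42
pool.shutdown()
```

The point of the pattern is that the client never sees the transport: binding, argument marshalling, and server selection all hide behind the handle.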