| Repository | https://github.com/OurGrid/OurGrid |
| --- | --- |
| Written in | Java |
| Type | Grid computing |
| License | LGPL-3.0, GPL-3.0 |
| Website | https://ourgrid.org/ |
OurGrid is an open-source grid middleware based on a peer-to-peer architecture. It was developed mainly at the Federal University of Campina Grande (Brazil), which has run an OurGrid instance named "OurGrid" since December 2004. [1] Anyone can join it freely to gain access to a large amount of computational power and run parallel applications. This computational power is provided by the idle resources of all participants, and is shared so that those who contribute more receive more when they need it. Currently, the platform can run any application whose tasks (i.e. the parts that run on a single machine) do not communicate among themselves during execution, such as most simulations, data mining and searching. [2]
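The "bag-of-tasks" model that OurGrid targets can be made concrete with a short sketch. The Java fragment below is not OurGrid code; it merely illustrates the defining property of such workloads: every task depends only on its own input and exchanges no data with its siblings while running, so each could be shipped to any idle machine.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal illustration of a "bag of tasks": independent units of work that
// never communicate with each other while executing. (Generic Java, not the
// OurGrid API; the thread pool stands in for grid workers.)
public class BagOfTasks {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Long>> results = new ArrayList<>();
        for (int task = 0; task < 16; task++) {
            final int seed = task;
            // Each task depends only on its own input: no shared state,
            // no messages between tasks during execution.
            results.add(pool.submit(() -> simulate(seed)));
        }
        long total = 0;
        for (Future<Long> r : results) total += r.get(); // collect outputs at the end
        System.out.println("combined result: " + total);
        pool.shutdown();
    }

    // A hypothetical single-machine unit of work (e.g. one simulation run).
    static long simulate(int seed) {
        long acc = seed;
        for (int i = 0; i < 1_000_000; i++) {
            acc = acc * 6364136223846793005L + 1442695040888963407L;
        }
        return acc;
    }
}
```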
The OurGrid software is written in Java, so any operating system that can run the Java virtual machine can participate in the grid. It consists of four components: the Broker, the Worker, the Peer and the Discovery Service. The Broker is used when the user wants to run a computation on the grid. The Worker is run when the user has nothing to compute at the moment but wants to donate idle resources and gain reputation in the network. The Peer is used by a user who controls multiple machines; it manages that user's connected Workers. The Discovery Service allows multiple Peers to find each other and exchange their computational resources. [3]
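One way to picture how the four components relate is as a set of interfaces. The sketch below is purely illustrative; every name and method signature is an assumption made for exposition and does not reproduce OurGrid's actual classes.

```java
import java.util.List;

// Conceptual sketch of the four OurGrid roles and how they relate.
// All types and signatures here are hypothetical, for illustration only.
class Task {}   // one unit of work that fits on a single machine
class Result {} // its output
class Job {}    // a whole bag of tasks submitted by a user

interface Worker {
    boolean isIdle();                  // donate cycles only while the machine is idle
    Result execute(Task task);         // run one independent task
}

interface Peer {                       // fronts all the Workers one user controls
    void register(Worker worker);
    List<Worker> requestWorkers(int howMany); // local Workers first, then remote Peers
}

interface DiscoveryService {           // lets Peers find each other
    void announce(Peer peer);
    List<Peer> knownPeers();
}

interface Broker {                     // the user's entry point for running a job
    void submit(Job job, Peer homePeer); // split into tasks, schedule on Workers
}
```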
The Worker supports virtualization to isolate tasks from the host's file system and from the Internet. Without virtualization, a malicious user could upload a task that connects to the Internet and takes part in a DDoS attack. [3]
To discourage users from consuming computation through the Broker without providing any computational resources in return, OurGrid uses a mechanism called the Network of Favors. A user gains reputation in the network by providing idle computational resources to the grid; when a user with high reputation requests computation from the grid, their requests receive higher priority. [3] [4]
The Network of Favors assumes that every user seeks to obtain more computational resources. A user's reputation is stored locally by the peers that have interacted with them directly. The reputation never becomes negative: if it could, a malicious user would simply discard a negative identity and rejoin under a new one with a clean reputation, so clamping the balance at zero removes any benefit from doing so. [4]
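A minimal sketch can make the favor accounting concrete. The class below illustrates the idea as described above and in [4], not OurGrid's implementation: each peer locally tracks favors given to and received from each partner, and the resulting balance is clamped at zero.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of Network-of-Favors accounting (not OurGrid's code).
// Each peer keeps purely local records of its direct interactions.
public class FavorLedger {
    private final Map<String, Double> received = new HashMap<>(); // favors this peer got
    private final Map<String, Double> given = new HashMap<>();    // favors this peer did

    public void recordReceived(String partner, double amount) {
        received.merge(partner, amount, Double::sum);
    }

    public void recordGiven(String partner, double amount) {
        given.merge(partner, amount, Double::sum);
    }

    // Local reputation of a partner: clamped at zero, so a freeloader's
    // balance is never worse than a newcomer's and there is no incentive
    // to discard an identity and rejoin with a clean one.
    public double balance(String partner) {
        return Math.max(0.0,
                received.getOrDefault(partner, 0.0) - given.getOrDefault(partner, 0.0));
    }
}
```

When idle resources are scarce, a peer would sort pending requests by `balance(requester)` and serve the highest first, which is how consistent contributors end up with higher priority.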
In 2013, Marek Šimon, Ladislav Huraj and Vladimír Siládi analyzed the performance bottlenecks of P2P grid applications such as OurGrid. They found that a task is not solved efficiently on the grid if distributing its data among the workers carries a large overhead. They devised a task that uses interpolation methods to determine snow cover depth, then compared the time a sequential algorithm took to solve it with the time the grid took to solve a parallelized version. There was no efficiency gain, because the overhead of distributing and collecting the data was too large. A second application, dealing with radioactive decay, did show an efficiency gain, owing to its large volume of data, more complex computations, and a small data-distribution overhead relative to the computation itself. [5]
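The trade-off the study measured can be summarized with a standard back-of-the-envelope model (the notation here is ours, not the authors'): with N workers, sequential running time T_seq, and total data distribution/collection overhead T_ovh, the parallel speedup is roughly

```latex
S = \frac{T_{\text{seq}}}{T_{\text{seq}}/N + T_{\text{ovh}}}
```

When T_ovh is comparable to T_seq, as in the interpolation task, S stays near or below 1 no matter how many workers join; when T_ovh is much smaller than T_seq/N, as in the radioactive-decay simulation, S approaches N.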
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
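As a concrete, minimal illustration of the request/response pattern (plain Java sockets, written for exposition only; port and names are arbitrary): the server waits for connections and shares a resource, here the time of day, while the client initiates the session and requests it.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.time.LocalTime;

// Minimal client-server illustration: the server awaits incoming requests
// and serves a shared resource; the client initiates the communication
// session. Run with "server" or no argument (client) in two terminals.
public class TimeService {
    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            try (ServerSocket server = new ServerSocket(5000)) {
                while (true) {                        // server: wait for requests
                    try (Socket client = server.accept();
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        out.println(LocalTime.now()); // share the resource
                    }
                }
            }
        } else {
            try (Socket socket = new Socket("localhost", 5000); // client: initiate
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                System.out.println("server time: " + in.readLine());
            }
        }
    }
}
```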
A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. Distributed computing is a field of computer science that studies distributed systems.
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes.
Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.
HTCondor is an open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks. It can be used to manage workload on a dedicated cluster of computers, or to farm out work to idle desktop computers – so-called cycle scavenging. HTCondor runs on Linux, Unix, Mac OS X, FreeBSD, and Microsoft Windows operating systems. HTCondor can integrate both dedicated resources and non-dedicated desktop machines into one computing environment.
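For flavor, a minimal HTCondor submit description file might look like the following (the program and file names are made up for this sketch); it farms ten independent runs of a program out to whatever machines are idle:

```
# Hypothetical submit file: ten independent runs, cycle-scavenged.
executable = analyze
arguments  = input_$(Process).dat
output     = out_$(Process).txt
error      = err_$(Process).txt
log        = jobs.log
queue 10
```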
Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing, the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented.
A problem solving environment (PSE) is complete, integrated and specialised computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problems, selecting algorithms, simulating numerical values, and viewing and analysing results.
A Sybil attack is a type of attack on a computer network service in which an attacker subverts the service's reputation system by creating a large number of pseudonymous identities and uses them to gain a disproportionately large influence. It is named after the subject of the book Sybil, a case study of a woman diagnosed with dissociative identity disorder. The name was suggested in or before 2002 by Brian Zill at Microsoft Research. The term pseudospoofing had previously been coined by L. Detweiler on the Cypherpunks mailing list and used in the literature on peer-to-peer systems for the same class of attacks prior to 2002, but this term did not gain as much influence as "Sybil attack".
Distributed networking is a distributed computing network system where components of the program and data depend on multiple sources.
Sideband computing is where a user connects to some normal network service and a separate communication channel is opened through which a server distributes tasks to the clients. In this way, any network server with many clients can form a large-scale supercomputing network. The clients' resources can be utilized through the central server for as long as the main channel is maintained. Sideband computing is related to distributed computing and multiple communication channels.
A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.
Dynamic Infrastructure is an information technology concept related to the design of data centers, whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned to meet surges in users' needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.
Open Cobalt is a free and open-source software platform for constructing, accessing, and sharing virtual worlds, both on local area networks and across the Internet, with no need for centralized servers.
Techila Distributed Computing Engine is a commercial grid computing software product. It speeds up simulation, analysis and other computational applications by enabling scalability across the IT resources in the user's on-premises data center and in the user's own cloud account. Techila Distributed Computing Engine is developed and licensed by Techila Technologies Ltd, a privately held company headquartered in Tampere, Finland. The product is also available as an on-demand solution in Google Cloud Launcher, the online marketplace created and operated by Google. According to IDC, the solution enables organizations to create HPC infrastructure without the major capital investments and operating expenses required by new HPC hardware.
DIET is grid-computing middleware: it sits between the operating system and the application software. Created in 2000 and designed for high-performance computing, it is currently developed by INRIA, École Normale Supérieure de Lyon, CNRS, Claude Bernard University Lyon 1 and SysFera. It is open-source software released under the CeCILL license.
A distributed file system for cloud is a file system that allows many clients to have access to data and supports operations on that data. Each data file may be partitioned into several parts called chunks. Each chunk may be stored on different remote machines, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability and integrity are the main keys for a secure system.
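To make the chunking idea concrete, here is a small sketch in generic Java (the 64 MiB chunk size is an assumption for illustration; real systems vary). It splits a file into fixed-size chunks, the units a distributed file system would place on different remote machines:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch: split a file into fixed-size chunks, the unit of placement in a
// distributed file system. Chunk size here is an illustrative assumption.
public class Chunker {
    static final int CHUNK_SIZE = 64 * 1024 * 1024; // 64 MiB

    public static List<Path> split(Path file) throws IOException {
        List<Path> chunks = new ArrayList<>();
        try (InputStream in = Files.newInputStream(file)) {
            int index = 0;
            while (true) {
                byte[] data = in.readNBytes(CHUNK_SIZE); // up to one chunk
                if (data.length == 0) break;             // end of file
                Path chunk = file.resolveSibling(file.getFileName() + ".chunk" + index++);
                // Each chunk could be shipped to a different remote machine,
                // letting applications process the parts in parallel.
                Files.write(chunk, data);
                chunks.add(chunk);
            }
        }
        return chunks;
    }
}
```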
Computation offloading is the transfer of resource intensive computational tasks to a separate processor, such as a hardware accelerator, or an external platform, such as a cluster, grid, or a cloud. Offloading to a coprocessor can be used to accelerate applications including: image rendering and mathematical calculations. Offloading computing to an external platform over a network can provide computing power and overcome hardware limitations of a device, such as limited computational power, storage, and energy.
Social cloud computing, also peer-to-peer social cloud computing, is an area of computer science that generalizes cloud computing to include the sharing, bartering and renting of computing resources across peers whose owners and operators are verified through a social network or reputation system. It expands cloud computing past the confines of formal commercial data centers operated by cloud providers to include anyone interested in participating within the cloud services sharing economy. This in turn leads to more options and greater economies of scale, while bearing additional advantages for hosting data and computing services closer to the edge, where they may be needed most.