Collaborative virtual environment

Collaborative virtual environments are used for collaboration and interaction among possibly many participants who may be spread over large distances. Typical examples are distributed simulations, 3D multiplayer games, collaborative engineering software, collaborative learning applications, [1] and others. Such applications are usually built around a shared virtual environment. Because participants are geographically distributed and communication latency is unavoidable, a data consistency model has to be used to keep the shared data consistent.

The consistency model deeply influences the application's programming model. One classification, introduced in [2], is based on several criteria, such as centralized versus distributed architecture, type of replication, and performance and consistency properties. It describes four consistency models, covering the most frequently used collaborative virtual environment architectures:

Centralized primaries
All primary replicas of each data item reside on the same computer, called the server.
Advantages: complete server control over the scene
Disadvantages: performance is limited by the server computer

Distributed primaries
Primary replicas are distributed among the participating computers.
Advantages: high performance and scalability
Disadvantages: more difficult programming model, weaker consistency
Used in: Distributed Interactive Simulation, Repo-3D [3] [4]

Data ownership
Primaries are allowed to migrate among the computers. This approach is often called a system with transferable data ownership.
Advantages: more flexibility than distributed primaries
Disadvantages: a high volume of ownership requests may limit system performance
Used in: MASSIVE-3/HIVEK, Blue-c, CIAO, [5] SPLINE

Active replication
Active replication takes a peer-to-peer approach in which all replicas are equal. Updates are usually delivered to all replicas by atomic broadcast, which keeps them synchronized (see the sketch below).
Advantages: complete scene synchronization (equal scene content on all computers)
Disadvantages: performance is limited by the slowest computer in the system
Used in: active transactions, Age of Empires, Avango, DIVE
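
As an illustration of the active-replication model, here is a minimal sketch in Python. The names (Sequencer, Replica) and the single-process structure are assumptions made for this example; the sequencer merely stands in for an atomic-broadcast service by assigning one global order of updates that every replica applies identically.

```python
# Minimal active-replication sketch (illustrative only).
# A Sequencer totally orders updates, standing in for an atomic
# broadcast service; every Replica applies the same ordered stream,
# so all replicas hold identical scene state.

class Sequencer:
    def __init__(self):
        self.replicas = []
        self.seq = 0

    def register(self, replica):
        self.replicas.append(replica)

    def broadcast(self, update):
        # Assign a global sequence number, then deliver to every
        # replica in the same order -- the atomic-broadcast guarantee.
        self.seq += 1
        for r in self.replicas:
            r.deliver(self.seq, update)

class Replica:
    def __init__(self, name, sequencer):
        self.name = name
        self.scene = {}            # object id -> attribute dict
        sequencer.register(self)

    def deliver(self, seq, update):
        obj, attrs = update
        self.scene.setdefault(obj, {}).update(attrs)

seq = Sequencer()
a, b = Replica("a", seq), Replica("b", seq)
seq.broadcast(("avatar1", {"x": 1.0, "y": 2.0}))
seq.broadcast(("avatar1", {"x": 1.5}))
assert a.scene == b.scene   # equal scene content on all replicas
```

Because every replica processes the same update stream, the scene content is equal everywhere; by the same token, the whole group can only advance as fast as its slowest member.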

Related Research Articles

Peer-to-peer: Type of decentralized and distributed network architecture

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network. They are said to form a peer-to-peer network of nodes.

Thin client: Low-performance computer optimized for remote server access

In computer networking, a thin client is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. They are sometimes known as network computers, or in their simplest form as zero clients. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a rich client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

Scalability is the property of a system to handle a growing amount of work by adding resources to the system.

In computer science, a consistency model specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be consistent and the results of reading, writing, or updating memory will be predictable. Consistency models are used in distributed systems like distributed shared memory systems or distributed data stores. Consistency is distinct from coherence: coherence deals with maintaining a global order in which writes to a single location or single variable are seen by all processors, whereas consistency deals with the ordering of operations to multiple locations with respect to all processors.
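
To make the notion of such a contract concrete, the following sketch (an illustration, not taken from any particular system) brute-force checks whether an execution of two threads is sequentially consistent, i.e. whether some interleaving that preserves each thread's program order explains every value read.

```python
# Illustrative brute-force check for sequential consistency:
# an execution is sequentially consistent if some interleaving of
# the per-thread histories, preserving each thread's program order,
# explains every value read. Locations start at 0.
from itertools import permutations

def explains(order):
    mem = {}
    for op, loc, val in order:
        if op == "w":
            mem[loc] = val
        elif mem.get(loc, 0) != val:   # reads must see the latest write
            return False
    return True

def sequentially_consistent(threads):
    ops = [(t, i) for t, h in enumerate(threads) for i in range(len(h))]
    for perm in permutations(ops):
        # Keep only interleavings that preserve each thread's program order.
        seen = [0] * len(threads)
        ok = True
        for t, i in perm:
            if i != seen[t]:
                ok = False
                break
            seen[t] += 1
        if ok and explains([threads[t][i] for t, i in perm]):
            return True
    return False

# Thread 0 writes x=1 then reads y=0; thread 1 writes y=1 then reads x=0.
history = [[("w", "x", 1), ("r", "y", 0)],
           [("w", "y", 1), ("r", "x", 0)]]
print(sequentially_consistent(history))   # False
```

The example history is the classic store-buffering litmus test: no interleaving explains both reads of 0, so the execution violates sequential consistency, even though real hardware with store buffers can produce it.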

In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as a single shared address space. The term "shared" does not mean that there is a single centralized memory, but that the address space is shared; i.e., the same physical address on two processors refers to the same location in memory. Distributed global address space (DGAS) is a similar term for a wide class of software and hardware implementations, in which each node of a cluster has access to shared memory in addition to each node's private memory.

Parallel rendering is the application of parallel programming to the computational domain of computer graphics. Rendering graphics can require massive computational resources for complex scenes that arise in scientific visualization, medical visualization, CAD applications, and virtual reality. Recent research has also suggested that parallel rendering can be applied to mobile gaming to decrease power consumption and increase graphical fidelity. Rendering is an embarrassingly parallel workload in multiple domains and thus has been the subject of much research.

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. All members are responsive to client data queries. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group and resolving any conflicts that might arise between concurrent changes made by different members.
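
A minimal sketch of the propagation-and-conflict-resolution idea follows (illustrative; the Master class and the last-writer-wins rule are assumptions made for this example, and production systems use considerably more robust schemes):

```python
# Illustrative multi-master replication with last-writer-wins conflict
# resolution: any master accepts writes; updates are propagated to the
# group, and the highest (timestamp, node_id) pair wins on conflict.

class Master:
    def __init__(self, node_id):
        self.node_id = node_id
        self.clock = 0
        self.store = {}   # key -> (stamp, value)

    def write(self, key, value):
        self.clock += 1
        stamp = (self.clock, self.node_id)   # node_id breaks ties
        self.store[key] = (stamp, value)
        return key, stamp, value             # update to propagate

    def apply(self, key, stamp, value):
        # Last-writer-wins: keep the update with the larger stamp.
        if key not in self.store or stamp > self.store[key][0]:
            self.store[key] = (stamp, value)
        # Keep the local clock ahead of everything seen so far.
        self.clock = max(self.clock, stamp[0])

def propagate(update, masters):
    for m in masters:
        m.apply(*update)

a, b = Master("a"), Master("b")
u1 = a.write("color", "red")     # concurrent writes on two masters
u2 = b.write("color", "blue")
propagate(u1, [b]); propagate(u2, [a])
assert a.store["color"] == b.store["color"]   # both converge
```

Last-writer-wins is only one possible resolution policy; it silently discards one of the concurrent writes, which is why some systems prefer merge-based approaches such as conflict-free replicated data types (see below).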

Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.

In computing, a shared resource, or network share, is a computer resource made available from one host to other hosts on a computer network. It is a device or piece of information on a computer that can be remotely accessed from another computer transparently as if it were a resource in the local machine. Network sharing is made possible by inter-process communication over the network.

Optimistic replication, also known as lazy replication, is a strategy for replication, in which replicas are allowed to diverge.

In computing, virtualization or virtualisation is the act of creating a virtual version of something at the same abstraction level, including virtual computer hardware platforms, storage devices, and computer network resources.

Open Cobalt: Software for creating virtual worlds

Open Cobalt is a free and open-source software platform for constructing, accessing, and sharing virtual worlds both on local area networks and across the Internet, with no need for centralized servers.

Live distributed object

Live distributed object refers to a running instance of a distributed multi-party protocol, viewed from the object-oriented perspective, as an entity that has a distinct identity, may encapsulate internal state and threads of execution, and that exhibits a well-defined externally visible behavior.

Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, making it more like cloud computing than traditional network management. SDN is meant to address the static architecture of traditional networks. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets from the routing process. The control plane consists of one or more controllers, which are considered the brain of the SDN network where the whole intelligence is incorporated. However, centralization has its own drawbacks when it comes to security, scalability, and elasticity, and this is the main issue of SDN.

Data grid: Set of services used to access, modify and transfer geographically distributed data

A data grid is an architecture or set of services that gives individuals or groups of users the ability to access, modify and transfer extremely large amounts of geographically distributed data for research purposes. Data grids make this possible through a host of middleware applications and services that pull together data and resources from multiple administrative domains and then present it to users upon request. The data in a data grid can be located at a single site or multiple sites where each site can be its own administrative domain governed by a set of security restrictions as to who may access the data. Likewise, multiple replicas of the data may be distributed throughout the grid outside their original administrative domain and the security restrictions placed on the original data for who may access it must be equally applied to the replicas. Specifically developed data grid middleware is what handles the integration between users and the data they request by controlling access while making it available as efficiently as possible.

Oracle NoSQL Database

Oracle NoSQL Database is a NoSQL-type distributed key-value database from Oracle Corporation. It provides transactional semantics for data manipulation, horizontal scalability, and simple administration and monitoring.

A distributed file system for cloud is a file system that allows many clients to have access to data and supports operations on that data. Each data file may be partitioned into several parts called chunks. Each chunk may be stored on a different remote machine, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability, and integrity are the main requirements for a secure system.
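
As a toy illustration of the chunking idea (the function, chunk size, and node names below are invented for this example), a file can be split into fixed-size chunks assigned round-robin to different machines, so that clients can fetch or process the parts in parallel:

```python
# Illustrative file chunking for a distributed file system: split a
# byte string into fixed-size chunks and assign each chunk to a node
# round-robin, so different machines can serve parts of one file.

CHUNK_SIZE = 4          # tiny for demonstration; real systems use megabytes

def chunk_and_place(data: bytes, nodes: list[str]):
    placement = {}       # chunk index -> (node, chunk bytes)
    for i in range(0, len(data), CHUNK_SIZE):
        idx = i // CHUNK_SIZE
        placement[idx] = (nodes[idx % len(nodes)], data[i:i + CHUNK_SIZE])
    return placement

placement = chunk_and_place(b"hello distributed world!", ["n1", "n2", "n3"])
for idx, (node, chunk) in placement.items():
    print(idx, node, chunk)

# Reassembly is concatenation of the chunks in index order.
assert b"".join(c for _, c in placement.values()) == b"hello distributed world!"
```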

In distributed computing, a conflict-free replicated data type (CRDT) is a data structure that is replicated across multiple computers in a network, with the following features:

  1. The application can update any replica independently, concurrently and without coordinating with other replicas.
  2. An algorithm automatically resolves any inconsistencies that might occur.
  3. Although replicas may have different state at any particular point in time, they are guaranteed to eventually converge.
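
A classic example that exhibits all three properties is the grow-only counter (G-Counter). The sketch below is illustrative: each replica increments only its own slot, and merging takes element-wise maxima, so concurrent updates never conflict and all replicas converge.

```python
# Illustrative G-Counter, one of the simplest CRDTs: each replica
# increments only its own entry, and merge takes element-wise maxima,
# so merging in any order yields the same converged state.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}              # replica id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # which is what guarantees eventual convergence.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()      # updates applied with no coordination
b.increment()
a.merge(b); b.merge(a)            # exchange state in either order
assert a.value() == b.value() == 3
```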

Dew computing is an information technology (IT) paradigm that combines the core concept of cloud computing with the capabilities of end devices. It is used to enhance the experience for the end user in comparison to only using cloud computing. Dew computing attempts to solve major problems related to cloud computing technology, such as reliance on internet access. Dropbox is an example of the dew computing paradigm, as it provides access to the files and folders in the cloud in addition to keeping copies on local devices. This allows the user to access files during times without an internet connection; when a connection is established again, files and folders are synchronized back to the cloud server.

Distributed block storage is a computer data storage architecture in which data is stored in volumes across multiple physical servers, as opposed to other storage architectures such as file systems, which manage data as a file hierarchy, and object storage, which manages data as objects. A common distributed block storage system is a storage area network (SAN).

References

  1. Sedlák, Michal; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří; Doležal, Milan (2022-10-18). "Collaborative and individual learning of geography in immersive virtual reality: An effectiveness study". PLOS ONE. 17 (10): e0276267. Bibcode:2022PLoSO..1776267S. doi:10.1371/journal.pone.0276267. ISSN 1932-6203. PMC 9578614. PMID 36256672.
  2. Pečiva, J. (2007). Active Transactions in Collaborative Virtual Environments. PhD thesis, FIT VUT, Brno, Czech Republic. ISBN 978-80-214-3549-0.
  3. MacIntyre, B.; Feiner, S. (July 1998). A distributed 3D graphics library (PDF). SIGGRAPH '98 – Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques. New York, NY. pp. 361–370. doi:10.1145/280814.280935.
  4. DIV, DOOM
  5. Sung, U.; Yang, J.; Wohn, K. (1999). "Concurrency Control in CIAO". In Proceedings of IEEE Virtual Reality (March 13–17, 1999). IEEE Computer Society, Washington, DC. p. 22.