Regular semantics

Regular semantics is a computer hardware consistency model. It describes a type of guarantee provided by a processor register that is shared by several processors in a parallel machine or in a network of computers working together. Regular semantics are defined for a variable with a single writer but multiple readers. They are stronger than safe semantics but weaker than atomic semantics: they guarantee that the write operations have a total order consistent with real time, and that every read operation returns either the value of the last write completed before the read begins or the value of one of the writes concurrent with the read.
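
To make the read rule concrete, here is a minimal Python sketch (not part of the original article; the interval model and the name allowed_values are assumptions for illustration) computing the set of values a read may return from a single-writer regular register:

```python
# Minimal model of the single-writer regular-register read rule.
# Every operation is an interval on a global clock; writes carry a value.

def allowed_values(read, writes):
    """Set of values a read may return under regular semantics.

    read:   (start, end) interval of the read operation
    writes: list of (start, end, value) intervals, sorted by start time
            (a single writer issues them sequentially, so they are
            totally ordered and non-overlapping)
    """
    r_start, r_end = read
    completed = [w for w in writes if w[1] < r_start]   # finished before the read began
    concurrent = [w for w in writes if w[0] < r_end and w[1] > r_start]
    allowed = {w[2] for w in concurrent}                # any concurrent write's value
    if completed:
        allowed.add(completed[-1][2])                   # or the last completed write's value
    return allowed

# A write of 2 completes before the read; a write of 3 overlaps it:
print(allowed_values((3, 4), [(0, 1, 2), (2, 5, 3)]))  # {2, 3}
```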

Example

Regular semantics are weaker than linearizability. Consider the example shown below, where the horizontal axis represents time and each arrow represents the interval during which a read or write operation takes place. By a regular register's definition, the third read must return 3, since it is not concurrent with any write operation. The second read, however, may return 2 or 3, and likewise the first read may return either 2 or 3. In particular, the first read could return 3 while the second read returns 2. This behavior would not satisfy atomic semantics, so regular semantics is a strictly weaker property than atomic semantics. On the other hand, Leslie Lamport proved that an atomic (linearizable) register may be implemented from registers with safe semantics, which are weaker still than regular registers.[1]

Figure: Safe register (an execution that is safe but not linearizable)

A Theorem from Regularity to Atomicity

Theorem: a single-writer multi-reader (SWMR) regular register is an SWMR atomic register if every one of its execution histories H satisfies the following property, where r1 and r2 are any two read invocations and π(r) denotes the write operation whose value the read r returns: (r1 →H r2) ⇒ ¬(π(r2) →H π(r1))
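
The property is easy to check mechanically on a finite history. The following Python sketch is an illustration, not from the source: each read is reduced to its interval plus the sequence number of the write π(r) it returned, and the predicate rejects any history containing a new/old inversion.

```python
# Check the "no new/old inversion" property on a finite history.
# Each read is (start, end, sn), where sn is the sequence number of the
# write pi(r) it returned (the single writer's writes are numbered 1, 2, ...).

def no_new_old_inversion(reads):
    for r1 in reads:
        for r2 in reads:
            if r1[1] < r2[0] and r1[2] > r2[2]:
                # r1 precedes r2 in H, yet r1 returned a newer write.
                return False
    return True

# First read returns write #2, a later read returns write #1: inversion.
print(no_new_old_inversion([(1, 2, 2), (3, 4, 1)]))  # False
print(no_new_old_inversion([(1, 2, 1), (3, 4, 2)]))  # True
```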

Before getting into the proof, it should first be understood what new/old inversion means. As shown in the figure below, the only difference between a regular execution and an atomic execution of this history arises when a = 0 and b = 1. In this execution, considering the two read invocations R.read() → a followed by R.read() → b, the first read returns the new value a = 0 while the second read returns the old value b = 1. This new/old inversion is the essential difference between atomicity and regularity.

Figure 1: an execution exhibiting new/old inversion

The theorem above states that a single-writer multi-reader regular register with no new/old inversion is an atomic register. In other words, given R.read() → a →H R.read() → b and R.write(1) →H R.write(0), it is not possible to have π(R.read() → b) = R.write(1) and π(R.read() → a) = R.write(0) if the execution is atomic. The proof has two directions. In one direction, by the definition of an atomic register, a single-writer multi-reader atomic register is regular and satisfies the no new/old inversion property. So the only remaining step is to show that a regular register with no new/old inversion is atomic.

Moreover, for any two read invocations r1 and r2, when the register is regular and there is no new/old inversion, (r1 →H r2) ⇒ sn(π(r1)) ≤ sn(π(r2)), where sn(w) denotes the sequence number of the write w (the single writer's writes are totally ordered, so they can be numbered 1, 2, 3, ...).

For any execution M there is a total order S that includes the same operation invocations. S is built as follows: start from the total order on the write operations, then insert the read operations according to two rules. First, each read operation r is inserted after its associated write operation π(r). Second, if two read operations r1 and r2 have the same sequence number, sn(π(r1)) = sn(π(r2)), the one that starts first in the execution is inserted first. S includes all the operation invocations of M, from which it follows that S and M are equivalent. Since all the operations are ordered by their sequence numbers, with ties among reads broken by invocation time, S is a total order. Furthermore, S respects the real-time order of M, since it only adds an order on operations that overlap in M; when a read and a write do not overlap, regularity and atomicity impose the same behavior. Finally, S is legal, since each read operation returns the last value written before it in the total order. Therefore the corresponding history is linearizable. Since this reasoning does not rely on a particular history H, it holds for all histories, and the register is atomic. And since atomicity (linearizability) is a local property, a set of SWMR regular registers behaves atomically as soon as each of them satisfies the no new/old inversion property.
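
The construction of S can be written out directly. This Python sketch is a toy model under the proof's assumptions (writes numbered in writer order, each read tagged with sn(π(r)) and its invocation time); the function name build_S is an assumption for illustration.

```python
# Build the total order S described in the proof.
# writes: values in write order; the sequence number of writes[i] is i + 1.
# reads:  (invocation_time, sn) pairs, where sn = sn(pi(r)).

def build_S(writes, reads):
    S = []
    for sn, value in enumerate(writes, start=1):
        S.append(("write", sn, value))
        # Rule 1: insert each read right after its associated write.
        # Rule 2: reads with the same sequence number go in invocation order.
        for _, r_sn in sorted(reads):
            if r_sn == sn:
                S.append(("read", sn, value))
    return S

# S is legal: every read returns the last value written before it in S.
print(build_S([1, 0], [(1, 1), (3, 2), (5, 2)]))
```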

Related Research Articles

<span class="mw-page-title-main">Atomic semantics</span>

Atomic semantics is a type of guarantee provided by a data register shared by several processors in a parallel machine or in a network of computers working together. Atomic semantics are very strong. An atomic register provides strong guarantees even when there is concurrency and failures.

Safe semantics is a computer hardware consistency model. It describes one type of guarantee that a data register provides when it is shared by several processors in a parallel computer or in a network of computers working together.

<span class="mw-page-title-main">Dihedral group</span> Group of symmetries of a regular polygon

In mathematics, a dihedral group is the group of symmetries of a regular polygon, which includes rotations and reflections. Dihedral groups are among the simplest examples of finite groups, and they play an important role in group theory, geometry, and chemistry.

In computer science, a consistency model specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be consistent and the results of reading, writing, or updating memory will be predictable. Consistency models are used in distributed systems like distributed shared memory systems or distributed data stores. Consistency is different from coherence, which occurs in systems that are cached or cache-less, and is consistency of data with respect to all processors. Coherence deals with maintaining a global order in which writes to a single location or single variable are seen by all processors. Consistency deals with the ordering of operations to multiple locations with respect to all processors.

In the domain of central processing unit (CPU) design, hazards are problems with the instruction pipeline in CPU microarchitectures when the next instruction cannot execute in the following clock cycle, and can potentially lead to incorrect computation results. Three common types of hazards are data hazards, structural hazards, and control hazards.

In the fields of databases and transaction processing, a schedule of a system is an abstract model to describe the order of executions in a set of transactions running in the system. Often it is a list of operations (actions) ordered by time, performed by a set of transactions that are executed together in the system. If the order in time between certain operations is not determined by the system, then a partial order is used. Examples of such operations are requesting a read operation, reading, writing, aborting, committing, requesting a lock, locking, etc. Often, only a subset of the transaction operation types are included in a schedule.

In computer architecture, register renaming is a technique that abstracts logical registers from physical registers. Every logical register has a set of physical registers associated with it. When a machine language instruction refers to a particular logical register, the processor transposes this name to one specific physical register on the fly. The physical registers are opaque and cannot be referenced directly but only via the canonical names.

<span class="mw-page-title-main">Race condition</span> When a systems behavior depends on timing of uncontrollable events

A race condition or race hazard is the condition of an electronic, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable.

<span class="mw-page-title-main">Sequential consistency</span> Consistency model in concurrent computing

Sequential consistency is a consistency model used in the domain of concurrent computing.

<span class="mw-page-title-main">Linearizability</span> Property of some operation(s) in concurrent programming

In concurrent programming, an operation is linearizable if it consists of an ordered list of invocation and response events that may be extended by adding response events such that:

  1. The extended list can be re-expressed as a sequential history.
  2. That sequential history is a subset of the original unextended list.

Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. Eventual consistency, also called optimistic replication, is widely deployed in distributed systems and has origins in early mobile computing projects. A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence. Eventual consistency is a weak guarantee – most stronger models, like linearizability, are trivially eventually consistent.

In computer architecture, a transport triggered architecture (TTA) is a kind of processor design in which programs directly control the internal transport buses of a processor. Computation happens as a side effect of data transports: writing data into a triggering port of a functional unit triggers the functional unit to start a computation. This is similar to what happens in a systolic array. Due to its modular structure, TTA is an ideal processor template for application-specific instruction set processors (ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators.

A data dependency in computer science is a situation in which a program statement (instruction) refers to the data of a preceding statement. In compiler theory, the technique used to discover data dependencies among statements is called dependence analysis.

Memory disambiguation is a set of techniques employed by high-performance out-of-order execution microprocessors that execute memory access instructions out of program order. The mechanisms for performing memory disambiguation, implemented using digital logic inside the microprocessor core, detect true dependencies between memory operations at execution time and allow the processor to recover when a dependence has been violated. They also eliminate spurious memory dependencies and allow for greater instruction-level parallelism by allowing safe out-of-order execution of loads and stores.

Memory ordering describes the order of accesses to computer memory by a CPU. The term can refer either to the memory ordering generated by the compiler during compile time, or to the memory ordering generated by a CPU during runtime.

<span class="mw-page-title-main">Helium atom</span> Atom of helium

A helium atom is an atom of the chemical element helium. Helium is composed of two electrons bound by the electromagnetic force to a nucleus containing two protons along with two neutrons, depending on the isotope, held together by the strong force. Unlike for hydrogen, a closed-form solution to the Schrödinger equation for the helium atom has not been found. However, various approximations, such as the Hartree–Fock method, can be used to estimate the ground state energy and wavefunction of the atom. Historically, the first such helium spectrum calculation was done by Albrecht Unsöld in 1927. Its success was considered to be one of the earliest signs of validity of Schrödinger's wave mechanics.

<span class="mw-page-title-main">Coxeter notation</span> Classification system for symmetry groups in geometry

In geometry, Coxeter notation is a system of classifying symmetry groups, describing the angles between fundamental reflections of a Coxeter group in a bracketed notation expressing the structure of a Coxeter-Dynkin diagram, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter, and has been more comprehensively defined by Norman Johnson.

A concurrent hash-trie or Ctrie is a concurrent thread-safe lock-free implementation of a hash array mapped trie. It is used to implement the concurrent map abstraction. It has particularly scalable concurrent insert and remove operations and is memory-efficient. It is the first known concurrent data-structure that supports O(1), atomic, lock-free snapshots.

In distributed computing, a shared snapshot object is a type of data structure which is shared between several threads or processes. For many tasks, it is important to have a data structure that can provide a consistent view of the state of the memory. In practice, it turns out that it is not possible to get such a consistent state of the memory by just accessing one shared register after another, since the values stored in individual registers can be changed at any time during this process. To solve this problem, snapshot objects store a vector of n components and provide the following two atomic operations: update(i,v) changes the value in the ith component to v, and scan returns the values stored in all n components. Snapshot objects can be constructed using atomic single-writer multi-reader shared registers.
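
As a sidebar, the interface described above is small enough to sketch. The following Python version is purely illustrative and deliberately naive: a single lock stands in for the wait-free register constructions the literature actually uses.

```python
import threading

class Snapshot:
    """Toy n-component snapshot object; one lock makes update/scan atomic.
    Real constructions achieve this wait-free from SWMR atomic registers."""

    def __init__(self, n):
        self._components = [None] * n
        self._lock = threading.Lock()

    def update(self, i, v):
        with self._lock:
            self._components[i] = v

    def scan(self):
        with self._lock:
            return list(self._components)
```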

In distributed computing, shared-memory systems and message-passing systems are two means of interprocess communication which have been heavily studied. In shared-memory systems, processes communicate by accessing shared data structures. A shared (read/write) register, sometimes just called a register, is a fundamental type of shared data structure which stores a value and has two operations: read, which returns the value stored in the register, and write, which updates the value stored. Other types of shared data structures include read–modify–write, test-and-set, compare-and-swap etc. The memory location which is concurrently accessed is sometimes called a register.

References

  1. Lamport, Leslie (1986). "On interprocess communication - Part I: Basic formalism". Distributed Computing. 1 (2). Springer-Verlag: 86–101. doi:10.1007/BF01786228.