Radix heap

A radix heap is a data structure for realizing the operations of a monotone priority queue. A set of elements to which a key is assigned can then be managed. The run time of the operations depends on the difference between the largest and smallest key, which must be bounded by a constant. The data structure consists mainly of a series of buckets, the size of which increases exponentially.

Prerequisites

  1. all keys are natural numbers;
  2. max. key − min. key ≤ C for a constant C;
  3. the extract-min operation is monotonic; that is, the values returned by successive extract-min calls are monotonically increasing.

Description of data structure

The three most important fields are:

  1. B[0...K] of size K + 1, with 0 as the lowest index, stores the buckets;
  2. u[0...K+1] of size K + 2, with 0 as the lowest index, stores the (lower) bounds of the buckets;
  3. b, which holds for each element in the heap the bucket in which it is stored.

[Figure: the buckets B[0...K] of a radix heap with their bounds u[0...K+1]]

The above diagram shows the data structure. The following invariants apply:

  1. key in B[i] ⇒ u[i] ≤ key ≤ u[i+1] − 1: the keys in B[i] are bounded from below and above by the values u[i] and u[i+1] − 1, respectively.
  2. u[1] = u[0] + 1 and u[i+1] − u[i] ≤ 2^(i−1) for i ≥ 1: the sizes of the buckets increase exponentially.

It is important to note the exponential growth of the bounds (and thus of the range that each bucket holds). In this way the sizes of the arrays depend only logarithmically on C, the maximum difference between two key values.
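To make the field layout concrete, here is a minimal sketch in Python. The names RadixHeap, B, u and b mirror the fields above; the bucket count K = 1 + ⌈log₂(C + 1)⌉, the representation of bucket contents as (item, key) pairs, and the infinite sentinel u[K+1] are assumptions of this sketch, not fixed parts of the data structure.

```python
import math

class RadixHeap:
    """Sketch of a radix heap for natural-number keys (monotone queue)."""

    def __init__(self, C, min_key=0):
        # K + 1 buckets suffice when max key - min key <= C (assumed here).
        self.K = 1 + math.ceil(math.log2(C + 1))
        # B[0..K]: each bucket is a list of (item, key) pairs.
        self.B = [[] for _ in range(self.K + 1)]
        # u[0..K+1]: lower bounds per invariant 2; u[K+1] is an infinite
        # sentinel, so u = [m, m+1, m+2, m+4, ..., m+2^(K-1), inf].
        self.u = ([min_key]
                  + [min_key + 2 ** (i - 1) for i in range(1, self.K + 1)]
                  + [math.inf])
        # b: item -> index of the bucket currently holding it.
        self.b = {}
```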

Operations

During initialization, the K + 1 empty buckets are created and the lower bounds u[i] are set (according to invariant 2); running time O(K).
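For example, initializing the sketch above with C = 8 creates six empty buckets whose ranges grow exponentially:

```python
h = RadixHeap(C=8)
print(h.K)   # 5  (buckets B[0] ... B[5])
print(h.u)   # [0, 1, 2, 4, 8, 16, inf]  (ranges of size 1, 1, 2, 4, 8, unbounded)
```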

During insert, the buckets are scanned linearly from right to left and the new element with key k is stored in the first bucket B[i] encountered for which u[i] ≤ k; running time O(K).
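A sketch of insert under the same assumptions; the scan runs from the rightmost bucket to the left and stops at the first bucket whose lower bound does not exceed the new key:

```python
def insert(h, item, key):
    """Insert item with the given key; O(K) worst case."""
    assert key >= h.u[0], "keys below the current minimum violate monotonicity"
    i = h.K
    while h.u[i] > key:   # move linearly from right to left
        i -= 1
    h.B[i].append((item, key))
    h.b[item] = i         # remember where the item lives
```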

For decrease-key, first the key value is decreased (checking for compliance with the invariants). Then the field b is used to locate the element, and it is moved to the left, if necessary, analogously to the insert operation. The running time is O(1) (amortized).
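A matching sketch of decrease-key. The b field locates the item's bucket directly, and the leftward walk is the same as for insert. (Removing the item from a Python list takes time linear in the bucket size; the real structure would use a doubly linked list so that unlinking is O(1).)

```python
def decrease_key(h, item, new_key):
    """Lower an item's key and move it left if invariant 1 now requires it."""
    assert new_key >= h.u[0], "keys may not drop below the current minimum"
    i = h.b[item]
    h.B[i] = [(x, k) for (x, k) in h.B[i] if x != item]   # unlink the item
    while h.u[i] > new_key:   # walk left, as in insert
        i -= 1
    h.B[i].append((item, new_key))
    h.b[item] = i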

The extract-min operation removes an element from bucket B[0] and returns it. If B[0] is not empty, the operation simply terminates. If, however, it is empty, the next larger non-empty bucket B[i] is found, its smallest key k is determined, and u[0] is set to k (monotonicity is required for this). Then, in accordance with the invariants, the bucket boundaries u[0], ..., u[i] are redefined and the elements of B[i] are moved into the newly formed buckets; running time O(K) (amortized).

Where necessary, these operations also update the field b.
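Finally, a sketch of extract-min under the same assumptions. The refill branch implements the redistribution just described: u[0] becomes the new minimum key k, the bounds up to bucket i are rebuilt with exponentially growing ranges (capped at bucket i's old upper end), and bucket i's elements are re-inserted, which updates b as a side effect:

```python
def extract_min(h):
    """Remove and return a minimum (item, key) pair; O(K) amortized."""
    if not h.b:
        raise IndexError("extract_min from an empty radix heap")
    if not h.B[0]:
        # Find the leftmost non-empty bucket and its smallest key k.
        i = next(j for j in range(h.K + 1) if h.B[j])
        k = min(key for (_, key) in h.B[i])
        # Redefine the bounds u[0..i] starting at k (invariant 2).
        h.u[0] = k
        for j in range(1, i + 1):
            h.u[j] = min(h.u[j - 1] + 2 ** max(0, j - 2), h.u[i + 1])
        # Move bucket i's elements into the newly formed buckets 0..i-1.
        pairs, h.B[i] = h.B[i], []
        for item, key in pairs:
            insert(h, item, key)
    item, key = h.B[0].pop()
    del h.b[item]
    return item, key
```

A short monotone run of the sketches, for illustration:

```python
h = RadixHeap(C=8)
insert(h, "a", 3)
insert(h, "b", 1)
insert(h, "c", 6)
print(extract_min(h))   # ('b', 1)
decrease_key(h, "c", 4)
print(extract_min(h))   # ('a', 3)
print(extract_min(h))   # ('c', 4)
```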

Related Research Articles

<span class="mw-page-title-main">Hash table</span> Associative array for storing key-value pairs

In computing, a hash table, also known as a hash map or a hash set, is a data structure that implements an associative array, also called a dictionary, which is an abstract data type that maps keys to values. A hash table uses a hash function to compute an index, also called a hash code, into an array of buckets or slots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored.

<span class="mw-page-title-main">Heap (data structure)</span> Computer science data structure

In computer science, a heap is a tree-based data structure that satisfies the heap property: In a max heap, for any given node C, if P is a parent node of C, then the key of P is greater than or equal to the key of C. In a min heap, the key of P is less than or equal to the key of C. The node at the "top" of the heap is called the root node.

<span class="mw-page-title-main">Monotonic function</span> Order-preserving mathematical function

In mathematics, a monotonic function is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.

<span class="mw-page-title-main">Binary heap</span> Variant of heap data structure

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort.

In the mathematical areas of order and lattice theory, the Knaster–Tarski theorem, named after Bronisław Knaster and Alfred Tarski, states the following: if L is a complete lattice and f : L → L is an order-preserving (monotone) function, then the set of fixed points of f in L also forms a complete lattice.

In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures, including the binary heap and binomial heap, and was originally conceived as an extension of binomial heaps. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. Fibonacci heaps are named after the Fibonacci numbers, which are used in their running time analysis.

<span class="mw-page-title-main">Perfect hash function</span> Hash function without any collisions

In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function.

In computing, a persistent data structure (also called a non-ephemeral data structure) is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. The term was introduced in Driscoll, Sarnak, Sleator, and Tarjan's 1986 article.

In computer science, a 2–3 heap is a data structure, a variation on the heap, designed by Tadao Takaoka in 1999. The structure is similar to the Fibonacci heap, and borrows from the 2–3 tree.

In computer science, a leftist tree or leftist heap is a priority queue implemented with a variant of a binary heap. Every node x has an s-value, which is the distance to the nearest leaf in the subtree rooted at x. In contrast to a binary heap, a leftist tree attempts to be very unbalanced. In addition to the heap property, leftist trees are maintained so the right descendant of each node has the lower s-value.

In computational complexity theory, the potential method is a method used to analyze the amortized time and space complexity of a data structure, a measure of its performance over sequences of operations that smooths out the cost of infrequent but expensive operations.

<span class="mw-page-title-main">Dedekind number</span> Combinatorial sequence of numbers

In mathematics, the Dedekind numbers are a rapidly growing sequence of integers named after Richard Dedekind, who defined them in 1897. The Dedekind number M(n) is the number of monotone boolean functions of n variables. Equivalently, it is the number of antichains of subsets of an n-element set, the number of elements in a free distributive lattice with n generators, and one more than the number of abstract simplicial complexes on a set with n elements.

In computer science, dynamic perfect hashing is a programming technique for resolving collisions in a hash table data structure. While more memory-intensive than its hash table counterparts, this technique is useful for situations where fast queries, insertions, and deletions must be made on a large set of elements.

In computer science, a skew binomial heap is a variant of the binomial heap that supports constant-time insertion operations in the worst case, rather than the logarithmic worst case and constant amortized time of ordinary binomial heaps.

<span class="mw-page-title-main">Queap</span>

In computer science, a queap is a priority queue data structure. The data structure allows insertions and deletions of arbitrary elements, as well as retrieval of the highest-priority element. Each deletion takes amortized time logarithmic in the number of items that have been in the structure for a longer time than the removed item. Insertions take constant amortized time.

In computer science, an optimal binary search tree (Optimal BST), sometimes called a weight-balanced binary tree, is a binary search tree which provides the smallest possible search time (or expected search time) for a given sequence of accesses (or access probabilities). Optimal BSTs are generally divided into two types: static and dynamic.

In computer science, a shadow heap is a mergeable heap data structure which supports efficient heap merging in the amortized sense. More specifically, shadow heaps make use of the shadow merge algorithm to achieve insertion in O(f(n)) amortized time and deletion in O((log n log log n)/f(n)) amortized time, for any choice of 1 ≤ f(n) ≤ log log n.

In computer science, a monotone priority queue is a variant of the priority queue abstract data type in which the priorities of extracted items are required to form a monotonic sequence. That is, for a priority queue in which each successively extracted item is the one with the minimum priority, the minimum priority should be monotonically increasing. Conversely for a max-heap the maximum priority should be monotonically decreasing. The assumption of monotonicity arises naturally in several applications of priority queues, and can be used as a simplifying assumption to speed up certain types of priority queues.

<span class="mw-page-title-main">Bucket queue</span> Data structure for integer priorities

A bucket queue is a data structure that implements the priority queue abstract data type: it maintains a dynamic collection of elements with numerical priorities and allows quick access to the element with minimum priority. In the bucket queue, the priorities must be integers, and it is particularly suited to applications in which the priorities have a small range. A bucket queue has the form of an array of buckets: an array data structure, indexed by the priorities, whose cells contain collections of items with the same priority as each other. With this data structure, insertion of elements and changes of their priority take constant time. Searching for and removing the minimum-priority element takes time proportional to the number of buckets or, by maintaining a pointer to the most recently found bucket, in time proportional to the difference in priorities between successive operations.

In computer science, a strict Fibonacci heap is a priority queue data structure with low worst case time bounds. It matches the amortized time bounds of the Fibonacci heap in the worst case. To achieve these time bounds, strict Fibonacci heaps maintain several invariants by performing restoring transformations after every operation. These transformations can be done in constant time by using auxiliary data structures to track violations, and applying the pigeonhole principle. Strict Fibonacci heaps were invented in 2012 by Gerth S. Brodal, George Lagogiannis, and Robert E. Tarjan.
