Priority queue

In computer science, a priority queue is an abstract data type similar to a regular queue or stack abstract data type. Each element in a priority queue has an associated priority. In a priority queue, elements with high priority are served before elements with low priority. In some implementations, if two elements have the same priority, they are served in the same order in which they were enqueued. In other implementations, the order of elements with the same priority is undefined.

While priority queues are often implemented using heaps, they are conceptually distinct from heaps. A priority queue is an abstract data type like a list or a map; just as a list can be implemented with a linked list or with an array, a priority queue can be implemented with a heap or another method such as an ordered array.

Operations

A priority queue must at least support the following operations:

- is_empty: check whether the queue has no elements.
- insert_with_priority: add an element to the queue with an associated priority.
- pull_highest_priority_element: remove the element that has the highest priority from the queue, and return it.

In addition, peek (in this context often called find-max or find-min), which returns the highest-priority element but does not modify the queue, is very frequently implemented and nearly always executes in O(1) time. This operation and its O(1) performance are crucial to many applications of priority queues.

More advanced implementations may support more complicated operations, such as pull_lowest_priority_element, inspecting the first few highest- or lowest-priority elements, clearing the queue, clearing subsets of the queue, performing a batch insert, merging two or more queues into one, incrementing priority of any element, etc.

Stacks and queues can be implemented as particular kinds of priority queues, with the priority determined by the order in which the elements are inserted. In a stack, the priority of each inserted element is monotonically increasing; thus, the last element inserted is always the first retrieved. In a queue, the priority of each inserted element is monotonically decreasing; thus, the first element inserted is always the first retrieved.
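As an illustrative sketch of this correspondence (Python, using the standard heapq module as a generic priority queue; the class names and the use of an insertion counter as the priority are illustrative, not a standard API):

import heapq
import itertools

class StackViaPriorityQueue:
    """LIFO stack: later insertions get higher priority, so the last element inserted is served first."""
    def __init__(self):
        self._heap = []                     # min-heap of (-priority, value)
        self._counter = itertools.count()   # monotonically increasing priorities
    def push(self, value):
        priority = next(self._counter)
        heapq.heappush(self._heap, (-priority, value))  # negate so the largest priority is popped first
    def pop(self):
        return heapq.heappop(self._heap)[1]

class QueueViaPriorityQueue:
    """FIFO queue: later insertions get lower priority, so the first element inserted is served first."""
    def __init__(self):
        self._heap = []                     # min-heap of (counter, value); smaller counter = higher priority
        self._counter = itertools.count()
    def enqueue(self, value):
        heapq.heappush(self._heap, (next(self._counter), value))
    def dequeue(self):
        return heapq.heappop(self._heap)[1]

s = StackViaPriorityQueue()
s.push("a"); s.push("b")
print(s.pop())   # "b": the last element inserted is the first retrieved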

Implementation

Naive implementations

There are a variety of simple, usually inefficient, ways to implement a priority queue. They provide an analogy to help one understand what a priority queue is.

For instance, one can keep all the elements in an unsorted list (O(1) insertion time). Whenever the highest-priority element is requested, search through all elements for the one with the highest priority (O(n) pull time):

insert(node) {
    list.append(node)
}

pull() {
    highest = list.get_first_element()
    foreach node in list {
        if highest.priority < node.priority {
            highest = node
        }
    }
    list.remove(highest)
    return highest
}

In another case, one can keep all the elements in a priority-sorted list (O(n) insertion time); whenever the highest-priority element is requested, the element at the end of the list, which has the highest priority, can be returned (O(1) pull time):

insert(node) {
    foreach (index, element) in list {
        if node.priority < element.priority {
            list.insert_at_index(node, index)
            return
        }
    }
    list.append(node)    // highest priority seen so far: place it at the end
}

pull() {
    highest = list.get_at_index(list.length - 1)
    list.remove(highest)
    return highest
}

Usual implementation

To improve performance, priority queues are typically based on a heap, giving O(log n) performance for inserts and removals, and O(n) to build the heap initially from a set of n elements. Variants of the basic heap data structure such as pairing heaps or Fibonacci heaps can provide better bounds for some operations. [1]
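For illustration, a minimal sketch of such a heap-backed priority queue in Python, using the standard heapq module as the underlying binary min-heap (the class and method names are illustrative, not part of any standard API):

import heapq

class HeapPriorityQueue:
    """Min-priority queue backed by a binary heap."""
    def __init__(self, items=None):
        # Building the heap from an existing list of (priority, value)
        # pairs takes O(n) time (heapify).
        self._heap = list(items) if items else []
        heapq.heapify(self._heap)
    def insert(self, priority, value):
        heapq.heappush(self._heap, (priority, value))    # O(log n)
    def pull(self):
        return heapq.heappop(self._heap)[1]              # O(log n)
    def peek(self):
        return self._heap[0][1]                          # O(1)
    def is_empty(self):
        return not self._heap

pq = HeapPriorityQueue()
pq.insert(3, "low")
pq.insert(1, "high")
print(pq.pull())   # "high": the smallest priority value is served first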

Alternatively, when a self-balancing binary search tree is used, insertion and removal also take O(log n) time, although building a tree from an existing sequence of elements takes O(n log n) time; this is typical where one might already have access to such a data structure, as in third-party or standard libraries. From a space-complexity standpoint, a self-balancing binary search tree (a linked structure) uses more storage than an array-based heap, since each node must store extra references to other nodes.

From a computational-complexity standpoint, priority queues are congruent to sorting algorithms. The section on the equivalence of priority queues and sorting algorithms, below, describes how efficient sorting algorithms can create efficient priority queues.

Specialized heaps

There are several specialized heap data structures that either supply additional operations or outperform heap-based implementations for specific types of keys, specifically integer keys. Suppose the set of possible keys is {1, 2, ..., C}. For such keys, for example, a bucket queue (an array of C buckets indexed by priority) supports insertion in O(1) time, and a van Emde Boas tree supports insert, delete and extract-min in O(log log C) time. [3]

For applications that do many "peek" operations for every "extract-min" operation, the time complexity for peek actions can be reduced to O(1) in all tree and heap implementations by caching the highest priority element after every insertion and removal. For insertion, this adds at most a constant cost, since the newly inserted element is compared only to the previously cached minimum element. For deletion, this at most adds an additional "peek" cost, which is typically cheaper than the deletion cost, so overall time complexity is not significantly impacted.
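A sketch of this caching idea in Python; the HeapBase class and its method names are assumptions made for the example (here the base structure happens to be a binary heap, which already offers O(1) peek, but the wrapper applies to any structure exposing this interface):

import heapq

class HeapBase:
    """A minimal base priority queue used only to make the example runnable."""
    def __init__(self): self._h = []
    def insert(self, key): heapq.heappush(self._h, key)
    def extract_min(self): return heapq.heappop(self._h)
    def find_min(self): return self._h[0]
    def is_empty(self): return not self._h

class CachedMinQueue:
    """Caches the current minimum so peek is O(1) regardless of the base structure."""
    def __init__(self, base):
        self._base = base
        self._cached_min = None
    def insert(self, key):
        self._base.insert(key)
        # One extra comparison against the cached minimum: constant overhead.
        if self._cached_min is None or key < self._cached_min:
            self._cached_min = key
    def peek(self):
        return self._cached_min              # O(1), no traversal of the base structure
    def extract_min(self):
        result = self._base.extract_min()
        # Refresh the cache with one extra "peek" on the base structure,
        # typically cheaper than the deletion itself.
        self._cached_min = None if self._base.is_empty() else self._base.find_min()
        return result

q = CachedMinQueue(HeapBase())
q.insert(5); q.insert(2); q.insert(7)
print(q.peek(), q.extract_min())   # 2 2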

Monotone priority queues are specialized queues that are optimized for the case where no item is ever inserted that has a lower priority (in the case of min-heap) than any item previously extracted. This restriction is met by several practical applications of priority queues.

Summary of running times

Here are time complexities [5] of various heap data structures. The abbreviation am. indicates that the given complexity is amortized, otherwise it is a worst-case complexity. For the meaning of "O(f)" and "Θ(f)" see Big O notation. Names of operations assume a min-heap.

Operation | find-min | delete-min | decrease-key | insert | meld | make-heap [lower-alpha 1]
Binary [5] | Θ(1) | Θ(log n) | Θ(log n) | Θ(log n) | Θ(n) | Θ(n)
Skew [6] | Θ(1) | O(log n) am. | O(log n) am. | O(log n) am. | O(log n) am. | Θ(n) am.
Leftist [7] | Θ(1) | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | Θ(n)
Binomial [5] [9] | Θ(1) | Θ(log n) | Θ(log n) | Θ(1) am. | Θ(log n) [lower-alpha 2] | Θ(n)
Skew binomial [10] | Θ(1) | Θ(log n) | Θ(log n) | Θ(1) | Θ(log n) [lower-alpha 2] | Θ(n)
2–3 heap [12] | Θ(1) | O(log n) am. | Θ(1) | Θ(1) am. | O(log n) [lower-alpha 2] | Θ(n)
Bottom-up skew [6] | Θ(1) | O(log n) am. | O(log n) am. | Θ(1) am. | Θ(1) am. | Θ(n) am.
Pairing [13] | Θ(1) | O(log n) am. | o(log n) am. [lower-alpha 3] | Θ(1) | Θ(1) | Θ(n)
Rank-pairing [16] | Θ(1) | O(log n) am. | Θ(1) am. | Θ(1) | Θ(1) | Θ(n)
Fibonacci [5] [17] | Θ(1) | O(log n) am. | Θ(1) am. | Θ(1) | Θ(1) | Θ(n)
Strict Fibonacci [18] [lower-alpha 4] | Θ(1) | Θ(log n) | Θ(1) | Θ(1) | Θ(1) | Θ(n)
Brodal [19] [lower-alpha 4] | Θ(1) | Θ(log n) | Θ(1) | Θ(1) | Θ(1) | Θ(n) [20]
  1. make-heap is the operation of building a heap from a sequence of n unsorted elements. It can be done in Θ(n) time whenever meld runs in O(log n) time (where both complexities can be amortized). [6] [7] Another algorithm achieves Θ(n) for binary heaps. [8]
  2. For persistent heaps (not supporting decrease-key), a generic transformation reduces the cost of meld to that of insert, while the new cost of delete-min is the sum of the old costs of delete-min and meld. [11] Here, it makes meld run in Θ(1) time (amortized, if the cost of insert is amortized) while delete-min still runs in O(log n). Applied to skew binomial heaps, it yields Brodal-Okasaki queues, persistent heaps with optimal worst-case complexities. [10]
  3. Lower bound of Ω(log log n), [14] upper bound of O(2^{2√(log log n)}). [15]
  4. Brodal queues and strict Fibonacci heaps achieve optimal worst-case complexities for heaps. They were first described as imperative data structures. The Brodal-Okasaki queue is a persistent data structure achieving the same optimum, except that decrease-key is not supported.

Equivalence of priority queues and sorting algorithms

Using a priority queue to sort

The semantics of priority queues naturally suggest a sorting method: insert all the elements to be sorted into a priority queue, and sequentially remove them; they will come out in sorted order. This is actually the procedure used by several sorting algorithms, once the layer of abstraction provided by the priority queue is removed. This sorting method is equivalent to the following sorting algorithms:

Name | Priority Queue Implementation | Best | Average | Worst
Heapsort | Heap | n log n | n log n | n log n
Smoothsort | Leonardo Heap | n | n log n | n log n
Selection sort | Unordered Array | n² | n² | n²
Insertion sort | Ordered Array | n | n² | n²
Tree sort | Self-balancing binary search tree | n log n | n log n | n log n
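As a sketch of this procedure (Python; with a binary heap as the priority queue, this is essentially heapsort up to constant factors):

import heapq

def pq_sort(items):
    """Insert every element into a priority queue, then pull repeatedly: sorted order comes out."""
    queue = []
    for x in items:
        heapq.heappush(queue, x)                                 # n inserts, O(log n) each
    return [heapq.heappop(queue) for _ in range(len(queue))]     # n pulls, ascending order

print(pq_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]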

Using a sorting algorithm to make a priority queue

A sorting algorithm can also be used to implement a priority queue. Specifically, Thorup says: [21]

We present a general deterministic linear space reduction from priority queues to sorting implying that if we can sort up to n keys in S(n) time per key, then there is a priority queue supporting delete and insert in O(S(n)) time and find-min in constant time.

That is, if there is a sorting algorithm which can sort in O(S) time per key, where S is some function of n and word size, [22] then one can use the given procedure to create a priority queue where pulling the highest-priority element takes O(1) time, and inserting new elements (and deleting elements) takes O(S) time. For example, if one has an O(n log n) sorting algorithm, one can create a priority queue with O(1) pulling and O(log n) insertion.

Libraries

A priority queue is often considered to be a "container data structure".

The Standard Template Library (STL) and the C++ 1998 standard specify std::priority_queue as one of the STL container adaptor class templates. However, they do not specify how two elements with the same priority should be served, and indeed, common implementations will not return them according to their order in the queue. It implements a max-priority-queue, parameterized by a comparison object for sorting, such as a function object (defaults to less<T> if unspecified), and by the underlying container for storing the data (defaults to std::vector<T>); it can optionally be constructed from two iterators to the beginning and end of a sequence. Unlike actual STL containers, it does not allow iteration of its elements (it strictly adheres to its abstract data type definition). The STL also has utility functions for manipulating another random-access container as a binary max-heap. The Boost libraries also have an implementation in the library heap.

Python's heapq module implements a binary min-heap on top of a list.
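For example (standard heapq usage; the tuples pair a priority with an arbitrary payload):

import heapq

tasks = []
heapq.heappush(tasks, (2, "write"))
heapq.heappush(tasks, (1, "read"))
print(heapq.heappop(tasks))   # (1, 'read'): the smallest priority value is served first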

Java's library contains a PriorityQueue class, which implements a min-priority-queue as a binary heap.

.NET's library contains a PriorityQueue class, which implements an array-backed, quaternary min-heap.

Scala's library contains a PriorityQueue class, which implements a max-priority-queue.

Go's library contains a container/heap package, which implements a min-heap on top of any compatible data structure.

The Standard PHP Library extension contains the class SplPriorityQueue.

Apple's Core Foundation framework contains a CFBinaryHeap structure, which implements a min-heap.

Applications

Bandwidth management

Priority queuing can be used to manage limited resources such as bandwidth on a transmission line from a network router. In the event of outgoing traffic queuing due to insufficient bandwidth, all other queues can be halted to send the traffic from the highest priority queue upon arrival. This ensures that the prioritized traffic (such as real-time traffic, e.g. an RTP stream of a VoIP connection) is forwarded with the least delay and the least likelihood of being rejected due to a queue reaching its maximum capacity. All other traffic can be handled when the highest priority queue is empty. Another approach used is to send disproportionately more traffic from higher priority queues.

Many modern protocols for local area networks also include the concept of priority queues at the media access control (MAC) sub-layer to ensure that high-priority applications (such as VoIP or IPTV) experience lower latency than other applications which can be served with best-effort service. Examples include IEEE 802.11e (an amendment to IEEE 802.11 which provides quality of service) and ITU-T G.hn (a standard for high-speed local area networks using existing home wiring, such as power lines, phone lines and coaxial cables).

Usually a limitation (policer) is set to limit the bandwidth that traffic from the highest priority queue can take, in order to prevent high priority packets from choking off all other traffic. This limit is usually never reached due to high level control instances such as the Cisco Callmanager, which can be programmed to inhibit calls which would exceed the programmed bandwidth limit.

Discrete event simulation

Another use of a priority queue is to manage the events in a discrete event simulation. The events are added to the queue with their simulation time used as the priority. The execution of the simulation proceeds by repeatedly pulling the top of the queue and executing the event thereon.
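A minimal sketch of such an event loop in Python (the event representation, the handler functions, and the simulate/tick names are illustrative):

import heapq
import itertools

def simulate(initial_events, horizon):
    """Discrete event simulation: events are (time, action) pairs, and the
    simulation time is the priority; an action may schedule further events."""
    seq = itertools.count()   # tie-breaker so actions themselves are never compared
    queue = [(t, next(seq), a) for t, a in initial_events]
    heapq.heapify(queue)
    while queue:
        time, _, action = heapq.heappop(queue)   # always the earliest pending event
        if time > horizon:
            break
        for t, a in action(time):                # handlers return follow-up events
            heapq.heappush(queue, (t, next(seq), a))

def tick(now):
    print("tick at", now)
    return [(now + 10, tick)]    # schedule the next tick

simulate([(0, tick)], horizon=30)   # prints ticks at 0, 10, 20 and 30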

See also: Scheduling (computing), queueing theory

Dijkstra's algorithm

When the graph is stored in the form of an adjacency list or matrix, a priority queue can be used to extract the minimum efficiently when implementing Dijkstra's algorithm, although one also needs the ability to alter the priority of a particular vertex in the priority queue efficiently.

If instead a graph is stored as node objects, and priority-node pairs are inserted into a heap, altering the priority of a particular vertex is not necessary if one tracks visited nodes. Once a node is visited, if it comes up in the heap again (having had a lower priority number associated with it earlier), it is popped off and ignored.
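A sketch of this second approach in Python (lazy re-insertion with a visited set instead of a decrease-key operation; the adjacency-list format is an assumption of the example):

import heapq

def dijkstra(graph, source):
    """graph: dict mapping each node to a list of (neighbor, edge_weight) pairs."""
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]                  # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                      # stale entry: u was already settled with a smaller distance
        visited.add(u)
        for v, w in graph[u]:
            if v not in visited and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))   # re-insert instead of decrease-key
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))   # {'a': 0, 'b': 1, 'c': 3}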

Huffman coding

Huffman coding requires one to repeatedly obtain the two lowest-frequency trees. A priority queue is one method of doing this.
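A sketch of that step in Python, keyed on frequency (the nested-tuple representation of the code tree is an illustrative choice):

import heapq
import itertools

def huffman_tree(frequencies):
    """frequencies: dict mapping symbol -> count. Returns the root of the code tree."""
    seq = itertools.count()   # tie-breaker so partial trees are never compared directly
    heap = [(freq, next(seq), symbol) for symbol, freq in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # the two lowest-frequency trees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(seq), (left, right)))   # their merger
    return heap[0][2]

print(huffman_tree({"a": 5, "b": 2, "c": 1}))   # (('c', 'b'), 'a')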

Best-first search algorithms

Best-first search algorithms, like the A* search algorithm, find the shortest path between two vertices or nodes of a weighted graph, trying out the most promising routes first. A priority queue (also known as the fringe) is used to keep track of unexplored routes; the one for which the estimate (a lower bound in the case of A*) of the total path length is smallest is given highest priority. If memory limitations make best-first search impractical, variants like the SMA* algorithm can be used instead, with a double-ended priority queue to allow removal of low-priority items.

ROAM triangulation algorithm

The Real-time Optimally Adapting Meshes (ROAM) algorithm computes a dynamically changing triangulation of a terrain. It works by splitting triangles where more detail is needed and merging them where less detail is needed. The algorithm assigns each triangle in the terrain a priority, usually related to the error decrease if that triangle would be split. The algorithm uses two priority queues, one for triangles that can be split and another for triangles that can be merged. In each step the triangle from the split queue with the highest priority is split, or the triangle from the merge queue with the lowest priority is merged with its neighbours.

Prim's algorithm for minimum spanning tree

Using a min-heap priority queue in Prim's algorithm to find the minimum spanning tree of a connected and undirected graph, one can achieve a good running time. This min-heap priority queue uses the min-heap data structure, which supports operations such as insert, minimum, extract-min, and decrease-key. [23] In this implementation, the weight of the edges is used to decide the priority of the vertices: the lower the weight, the higher the priority, and the higher the weight, the lower the priority. [24]
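A sketch of this use of a min-heap in Prim's algorithm (Python; lazy re-insertion is used in place of decrease-key, and the adjacency-list format is an assumption of the example):

import heapq

def prim_total_weight(graph, start):
    """graph: dict mapping each node to a list of (neighbor, edge_weight) pairs;
    the graph is assumed connected and undirected. Returns the weight of an MST."""
    in_tree = set()
    total = 0
    heap = [(0, start)]                   # (weight of the edge reaching the vertex, vertex)
    while heap:
        weight, u = heapq.heappop(heap)   # lowest weight = highest priority
        if u in in_tree:
            continue                      # stale entry
        in_tree.add(u)
        total += weight
        for v, w in graph[u]:
            if v not in in_tree:
                heapq.heappush(heap, (w, v))
    return total

graph = {"a": [("b", 1), ("c", 3)], "b": [("a", 1), ("c", 2)], "c": [("a", 3), ("b", 2)]}
print(prim_total_weight(graph, "a"))   # 3: edges a-b (1) and b-c (2)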

Parallel priority queue

Parallelization can be used to speed up priority queues, but requires some changes to the priority queue interface. The reason for such changes is that a sequential update usually has only O(1) or O(log n) cost, and there is no practical gain in parallelizing such an operation. One possible change is to allow the concurrent access of multiple processors to the same priority queue. The second possible change is to allow batch operations that work on k elements instead of just one element. For example, extractMin will remove the first k elements with the highest priority.

Concurrent parallel access

If the priority queue allows concurrent access, multiple processes can perform operations concurrently on that priority queue. However, this raises two issues. First of all, the definition of the semantics of the individual operations is no longer obvious. For example, if two processes want to extract the element with the highest priority, should they get the same element or different ones? This restricts parallelism on the level of the program using the priority queue. In addition, because multiple processes have access to the same element, this leads to contention.

Node 3 is inserted and sets the pointer of node 2 to node 3. Immediately after that, node 2 is deleted and the pointer of node 1 is set to node 4. Now node 3 is no longer reachable.

Concurrent access to a priority queue can be implemented on a Concurrent Read, Concurrent Write (CRCW) PRAM model. In the following, the priority queue is implemented as a skip list. [25] [26] In addition, an atomic synchronization primitive, CAS, is used to make the skip list lock-free. Each node of the skip list consists of a unique key, a priority, an array of pointers (one per level) to the next nodes, and a delete mark. The delete mark indicates whether the node is about to be deleted by a process, which ensures that other processes can react to the deletion appropriately.

If concurrent access to a priority queue is allowed, conflicts may arise between two processes. For example, a conflict arises if one process is trying to insert a new node while at the same time another process is about to delete the predecessor of that node. [25] There is a risk that the new node is added to the skip list, yet it is no longer reachable (see the image above).

K-element operations

In this setting, operations on a priority queue are generalized to a batch of k elements. For instance, k_extract-min deletes the k smallest elements of the priority queue and returns them.

In a shared-memory setting, the parallel priority queue can be easily implemented using parallel binary search trees and join-based tree algorithms. In particular, k_extract-min corresponds to a split on the binary search tree that has cost O(log n) and yields a tree that contains the k smallest elements. k_insert can be applied by a union of the original priority queue and the batch of insertions. If the batch is already sorted by the key, k_insert has cost O(k log(1 + n/k)). Otherwise, the batch first needs to be sorted, so the cost is O(k log(1 + n/k) + k log k) = O(k log n). Other operations for the priority queue can be applied similarly. For instance, k_decrease-key can be done by first applying difference and then union, which first deletes the elements and then inserts them back with the updated keys. All these operations are highly parallel, and the theoretical and practical efficiency can be found in related research papers. [27] [28]

The rest of this section discusses a queue-based algorithm on distributed memory. We assume each processor has its own local memory and a local (sequential) priority queue. The elements of the global (parallel) priority queue are distributed across all processors.

k_extract-min is executed on a priority queue with three processors. The green elements are returned and removed from the priority queue.

A k_insert operation assigns the elements uniformly at random to the processors, which insert the elements into their local queues. Note that single elements can still be inserted into the queue. Using this strategy, the k globally smallest elements are, with high probability, contained in the union of the smallest elements of every local queue. Thus each processor holds a representative part of the global priority queue.

This property is used when k_extract-min is executed: the smallest elements of each local queue are removed and collected in a result set. The elements in the result set are still associated with their original processor. The number of elements that is removed from each local queue depends on k and the number of processors p. [29] By parallel selection, the k smallest elements of the result set are determined. With high probability these are the k globally smallest elements. If not, elements are again removed from each local queue and put into the result set. This is done until the k globally smallest elements are in the result set. Now these elements can be returned. All other elements of the result set are inserted back into their local queues. The running time of k_extract-min depends on k, the number of processors p, and the size n of the priority queue. [29]

The priority queue can be further improved by not moving the remaining elements of the result set directly back into the local queues after a k_extract-min operation. This saves moving elements back and forth all the time between the result set and the local queues.

By removing several elements at once, a considerable speedup can be achieved. But not all algorithms can use this kind of priority queue. Dijkstra's algorithm, for example, cannot work on several nodes at once. The algorithm takes the node with the smallest distance from the priority queue and calculates new distances for all its neighbor nodes. If k nodes were taken out at once, working at one node could change the distance of another of the k nodes. So using k-element operations destroys the label-setting property of Dijkstra's algorithm.

See also

Related Research Articles

Heapsort

In computer science, heapsort is a comparison-based sorting algorithm which can be thought of as "an implementation of selection sort using the right data structure." Like selection sort, heapsort divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element from it and inserting it into the sorted region. Unlike selection sort, heapsort does not waste time with a linear-time scan of the unsorted region; rather, heap sort maintains the unsorted region in a heap data structure to efficiently find the largest element in each step.

Heap (data structure)

In computer science, a heap is a tree-based data structure that satisfies the heap property: In a max heap, for any given node C, if P is a parent node of C, then the key of P is greater than or equal to the key of C. In a min heap, the key of P is less than or equal to the key of C. The node at the "top" of the heap is called the root node.

Binary heap

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964 as a data structure for implementing heapsort.

Smoothsort

In computer science, smoothsort is a comparison-based sorting algorithm. A variant of heapsort, it was invented and published by Edsger Dijkstra in 1981. Like heapsort, smoothsort is an in-place algorithm with an upper bound of O(n log n) operations (see big O notation), but it is not a stable sort. The advantage of smoothsort is that it comes closer to O(n) time if the input is already sorted to some degree, whereas heapsort averages O(n log n) regardless of the initial sorted state.

Treap

In computer science, the treap and the randomized binary search tree are two closely related forms of binary search tree data structures that maintain a dynamic set of ordered keys and allow binary searches among the keys. After any sequence of insertions and deletions of keys, the shape of the tree is a random variable with the same probability distribution as a random binary tree; in particular, with high probability its height is proportional to the logarithm of the number of keys, so that each search, insertion, or deletion operation takes logarithmic time to perform.

In computer science, a binomial heap is a data structure that acts as a priority queue. It is an example of a mergeable heap, as it supports merging two heaps in logarithmic time. It is implemented as a heap similar to a binary heap but using a special tree structure that is different from the complete binary trees used by binary heaps. Binomial heaps were invented in 1978 by Jean Vuillemin.

In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures including the binary heap and binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. Fibonacci heaps are named after the Fibonacci numbers, which are used in their running time analysis.

In computer science, a skip list is a probabilistic data structure that allows O(log n) average complexity for search as well as O(log n) average complexity for insertion within an ordered sequence of elements. Thus it can get the best features of a sorted array while maintaining a linked list-like structure that allows insertion, which is not possible with a static array. Fast search is made possible by maintaining a linked hierarchy of subsequences, with each successive subsequence skipping over fewer elements than the previous one. Searching starts in the sparsest subsequence until two consecutive elements have been found, one smaller and one larger than or equal to the element searched for. Via the linked hierarchy, these two elements link to elements of the next sparsest subsequence, where searching is continued until finally searching in the full sequence. The elements that are skipped over may be chosen probabilistically or deterministically, with the former being more common.

In computer science, a soft heap is a variant on the simple heap data structure that has constant amortized time complexity for 5 types of operations. This is achieved by carefully "corrupting" (increasing) the keys of at most a constant number of values in the heap.

A pairing heap is a type of heap data structure with relatively simple implementation and excellent practical amortized performance, introduced by Michael Fredman, Robert Sedgewick, Daniel Sleator, and Robert Tarjan in 1986. Pairing heaps are heap-ordered multiway tree structures, and can be considered simplified Fibonacci heaps. They are considered a "robust choice" for implementing such algorithms as Prim's MST algorithm, and support the usual priority queue operations (assuming a min-heap): find-min, meld, insert, decrease-key and delete-min.

In computer science, a double-ended priority queue (DEPQ) or double-ended heap is a data structure similar to a priority queue or heap, but allows for efficient removal of both the maximum and minimum, according to some ordering on the keys (items) stored in the structure. Every element in a DEPQ has a priority or value. In a DEPQ, it is possible to remove the elements in both ascending as well as descending order.

In computer science, a min-max heap is a complete binary tree data structure which combines the usefulness of both a min-heap and a max-heap, that is, it provides constant time retrieval and logarithmic time removal of both the minimum and maximum elements in it. This makes the min-max heap a very useful data structure to implement a double-ended priority queue. Like binary min-heaps and max-heaps, min-max heaps support logarithmic insertion and deletion and can be built in linear time. Min-max heaps are often represented implicitly in an array; hence it's referred to as an implicit data structure.

In computer science, a skew binomial heap is a data structure for priority queue operations. It is a variant of the binomial heap that supports constant-time insertion operations in the worst case, rather than amortized time.

Queap

In computer science, a queap is a priority queue data structure. The data structure allows insertions and deletions of arbitrary elements, as well as retrieval of the highest-priority element. Each deletion takes amortized time logarithmic in the number of items that have been in the structure for a longer time than the removed item. Insertions take constant amortized time.

Kinetic heap

A Kinetic Heap is a kinetic data structure, obtained by the kinetization of a heap. It is designed to store elements where the priority is changing as a continuous function of time. As a type of kinetic priority queue, it maintains the maximum priority element stored in it. The kinetic heap data structure works by storing the elements as a tree that satisfies the following heap property – if B is a child node of A, then the priority of the element in A must be higher than the priority of the element in B. This heap property is enforced using certificates along every edge so, like other kinetic data structures, a kinetic heap also contains a priority queue to maintain certificate failure times.

In computer science, a shadow heap is a mergeable heap data structure which supports efficient heap merging in the amortized sense. More specifically, shadow heaps make use of the shadow merge algorithm to achieve insertion in O(f(n)) amortized time and deletion in O((log n log log n)/f(n)) amortized time, for any choice of 1 ≤ f(n) ≤ log log n.

In computer science, k-way merge algorithms or multiway merges are a specific type of sequence merge algorithms that specialize in taking in k sorted lists and merging them into a single sorted list. These merge algorithms generally refer to merge algorithms that take in a number of sorted lists greater than two. Two-way merges are also referred to as binary merges. The k-way merge is also an external sorting algorithm.

Bucket queue

A bucket queue is a data structure that implements the priority queue abstract data type: it maintains a dynamic collection of elements with numerical priorities and allows quick access to the element with minimum priority. In the bucket queue, the priorities must be integers, and it is particularly suited to applications in which the priorities have a small range. A bucket queue has the form of an array of buckets: an array data structure, indexed by the priorities, whose cells contain collections of items with the same priority as each other. With this data structure, insertion of elements and changes of their priority take constant time. Searching for and removing the minimum-priority element takes time proportional to the number of buckets or, by maintaining a pointer to the most recently found bucket, in time proportional to the difference in priorities between successive operations.

K-D heap

A K-D heap is a data structure in computer science which implements a multidimensional priority queue without requiring additional space. It is a generalization of the Heap. It allows for efficient insertion, query of the minimum element, and deletion of the minimum element in any of the k dimensions, and therefore includes the double-ended heap as a special case.

This is a comparison of the performance of notable data structures, as measured by the complexity of their logical operations. For a more comprehensive listing of data structures, see List of data structures.

References

  1. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "Chapter 20: Fibonacci Heaps". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 476–497. ISBN   0-262-03293-7. Third edition, p. 518.
  2. Skiena, Steven (2010). The Algorithm Design Manual (2nd ed.). Springer Science+Business Media. ISBN   978-1-849-96720-4.
  3. P. van Emde Boas. Preserving order in a forest in less than logarithmic time. In Proceedings of the 16th Annual Symposium on Foundations of Computer Science, pages 75-84. IEEE Computer Society, 1975.
  4. Michael L. Fredman and Dan E. Willard. Surpassing the information theoretic bound with fusion trees. Journal of Computer and System Sciences, 48(3):533-551, 1994
  5. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
  6. Sleator, Daniel Dominic; Tarjan, Robert Endre (February 1986). "Self-Adjusting Heaps". SIAM Journal on Computing. 15 (1): 52–69. CiteSeerX 10.1.1.93.6678. doi:10.1137/0215004. ISSN 0097-5397.
  7. Tarjan, Robert (1983). "3.3. Leftist heaps". Data Structures and Network Algorithms. pp. 38–42. doi:10.1137/1.9781611970265. ISBN 978-0-89871-187-5.
  8. Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153. CiteSeerX   10.1.1.353.7888 . doi:10.1016/0196-6774(91)90027-v. Archived from the original (PDF) on 2016-02-05. Retrieved 2016-01-28.
  9. "Binomial Heap | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-09-30.
  10. Brodal, Gerth Stølting; Okasaki, Chris (November 1996), "Optimal purely functional priority queues", Journal of Functional Programming, 6 (6): 839–857, doi:10.1017/s095679680000201x
  11. Okasaki, Chris (1998). "10.2. Structural Abstraction". Purely Functional Data Structures (1st ed.). pp. 158–162. ISBN   9780521631242.
  12. Takaoka, Tadao (1999), Theory of 2–3 Heaps (PDF), p. 12
  13. Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory (PDF), Lecture Notes in Computer Science, vol. 1851, Springer-Verlag, pp. 63–77, arXiv: 1110.4428 , CiteSeerX   10.1.1.748.7812 , doi:10.1007/3-540-44985-X_5, ISBN   3-540-67690-2
  14. Fredman, Michael Lawrence (July 1999). "On the Efficiency of Pairing Heaps and Related Data Structures" (PDF). Journal of the Association for Computing Machinery . 46 (4): 473–501. doi:10.1145/320211.320214.
  15. Pettie, Seth (2005). Towards a Final Analysis of Pairing Heaps (PDF). FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. pp. 174–183. CiteSeerX   10.1.1.549.471 . doi:10.1109/SFCS.2005.75. ISBN   0-7695-2468-0.
  16. Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (November 2011). "Rank-pairing heaps" (PDF). SIAM J. Computing. 40 (6): 1463–1485. doi:10.1137/100785351.
  17. Fredman, Michael Lawrence; Tarjan, Robert E. (July 1987). "Fibonacci heaps and their uses in improved network optimization algorithms" (PDF). Journal of the Association for Computing Machinery . 34 (3): 596–615. CiteSeerX   10.1.1.309.8927 . doi:10.1145/28869.28874.
  18. Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. pp. 1177–1184. CiteSeerX   10.1.1.233.1740 . doi:10.1145/2213977.2214082. ISBN   978-1-4503-1245-5.
  19. Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues" (PDF), Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 52–58
  20. Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341. ISBN   0-471-46983-1.
  21. Thorup, Mikkel (2007). "Equivalence between priority queues and sorting". Journal of the ACM . 54 (6): 28. doi:10.1145/1314690.1314692. S2CID   11494634.
  22. "Archived copy" (PDF). Archived (PDF) from the original on 2011-07-20. Retrieved 2011-02-10.{{cite web}}: CS1 maint: archived copy as title (link)
  23. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. p. 634. ISBN   0-262-03384-4. "In order to implement Prim's algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A."
  24. "Prim's Algorithm". Geek for Geeks. 18 November 2012. Archived from the original on 9 September 2014. Retrieved 12 September 2014.
  25. Sundell, Håkan; Tsigas, Philippas (2005). "Fast and lock-free concurrent priority queues for multi-thread systems". Journal of Parallel and Distributed Computing. 65 (5): 609–627. CiteSeerX 10.1.1.67.1310. doi:10.1109/IPDPS.2003.1213189. S2CID 20995116.
  26. Lindén, Jonsson (2013), "A Skiplist-Based Concurrent Priority Queue with Minimal Memory Contention", Technical Report 2018-003 (in German)
  27. Blelloch, Guy E.; Ferizovic, Daniel; Sun, Yihan (2016), "Just Join for Parallel Ordered Sets", Symposium on Parallel Algorithms and Architectures, Proc. of 28th ACM Symp. Parallel Algorithms and Architectures (SPAA 2016), ACM, pp. 253–264, arXiv: 1602.02120 , doi:10.1145/2935764.2935768, ISBN   978-1-4503-4210-0, S2CID   2897793
  28. Blelloch, Guy E.; Ferizovic, Daniel; Sun, Yihan (2018), "PAM: parallel augmented maps", Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, ACM, pp. 290–304
  29. Sanders, Peter; Mehlhorn, Kurt; Dietzfelbinger, Martin; Dementiev, Roman (2019). Sequential and Parallel Algorithms and Data Structures - The Basic Toolbox. Springer International Publishing. pp. 226–229. doi:10.1007/978-3-030-25209-0. ISBN 978-3-030-25208-3. S2CID 201692657.

Further reading