Contraction hierarchies

In computer science, the method of contraction hierarchies is a speed-up technique for finding the shortest path in a graph. The most intuitive applications are car-navigation systems: a user wants to drive from $s$ to $t$ using the quickest possible route. The metric optimized here is the travel time. Intersections are represented by vertices, the road sections connecting them by edges. The edge weights represent the time it takes to drive along this segment of the road. A path from $s$ to $t$ is a sequence of edges (road sections); the shortest path is the one with the minimal sum of edge weights among all possible paths. The shortest path in a graph can be computed using Dijkstra's algorithm but, given that road networks consist of tens of millions of vertices, this is impractical. [1] Contraction hierarchies is a speed-up method optimized to exploit properties of graphs representing road networks. [2] The speed-up is achieved by creating shortcuts in a preprocessing phase which are then used during a shortest-path query to skip over "unimportant" vertices. [2] This is based on the observation that road networks are highly hierarchical. Some intersections, for example highway junctions, are "more important" and higher up in the hierarchy than, for example, a junction leading into a dead end. Shortcuts can be used to save the precomputed distance between two important junctions such that the algorithm doesn't have to consider the full path between these junctions at query time. Contraction hierarchies do not know which roads humans consider "important" (e.g. highways), but they are provided with the graph as input and are able to assign importance to vertices using heuristics.

Contraction hierarchies are applied not only to speed up algorithms in car-navigation systems but also in web-based route planners, traffic simulation, and logistics optimization. [3] [1] [4] Implementations of the algorithm are publicly available as open source software. [5] [6] [7] [8] [9]

Algorithm

The contraction hierarchies (CH) algorithm is a two-phase approach to the shortest path problem consisting of a preprocessing phase and a query phase. As road networks change rather infrequently, more time (seconds to hours) can be spent once to precompute some calculations before queries are to be answered. Using this precomputed data, many queries can be answered taking very little time (microseconds) each. [1] [3] CHs rely on shortcuts to achieve this speedup. A shortcut connects two vertices $u$ and $v$ that are not adjacent in the original graph. Its edge weight is the sum of the edge weights on the shortest path from $u$ to $v$.

Consider two large cities connected by a highway. Between these two cities, there is a multitude of junctions leading to small villages and suburbs. Most drivers want to get from one city to the other – maybe as part of a larger route – and not take one of the exits on the way. In the graph representing this road layout, each intersection is represented by a node and edges are created between neighboring intersections. To calculate the distance between these two cities, the algorithm has to traverse all the edges along the way, adding up their length. Precomputing this distance once and storing it in an additional edge created between the two large cities will save calculations each time this highway has to be evaluated in a query. This additional edge is called a "shortcut" and has no counterpart in the real world. The contraction hierarchies algorithm has no knowledge about road types but is able to determine which shortcuts have to be created using the graph alone as input.

Figure: Shortcut in a shortest path. To find a path from $s$ to $t$ the algorithm can skip over the grey vertices and use the dashed shortcut instead. This reduces the number of vertices the algorithm has to look at. The edge weight of the shortcut from $u$ to $v$ is the sum of the edge weights of the shortest path from $u$ to $v$.
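
The idea can be made concrete with a small sketch. The graph below is a hypothetical, hand-made example (vertex names and travel times are invented for illustration): a single shortcut edge stores the precomputed highway distance between the two cities, so a query between them no longer needs to traverse the intermediate junctions.

```python
# Hypothetical road graph: city_a - j1 - j2 - j3 - city_b along a highway,
# where j1..j3 are junctions leading to villages. Weights are travel times
# in minutes (undirected edges are stored in both directions).
graph = {
    "city_a": {"j1": 10},
    "j1": {"city_a": 10, "j2": 12, "village_1": 5},
    "j2": {"j1": 12, "j3": 11, "village_2": 4},
    "j3": {"j2": 11, "city_b": 9, "village_3": 6},
    "city_b": {"j3": 9},
    "village_1": {"j1": 5},
    "village_2": {"j2": 4},
    "village_3": {"j3": 6},
}

# Precompute the highway distance city_a -> city_b once (10 + 12 + 11 + 9 = 42)
# and store it as an additional "shortcut" edge. A query between the two cities
# can then use this single edge instead of scanning the three junctions.
shortcut_weight = 10 + 12 + 11 + 9
graph["city_a"]["city_b"] = shortcut_weight
graph["city_b"]["city_a"] = shortcut_weight
```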

Preprocessing phase

The CH algorithm relies on shortcuts created in the preprocessing phase to reduce the search space – that is, the number of vertices CH has to look at at query time. To achieve this, iterative vertex contractions are performed. When contracting a vertex $v$, it is temporarily removed from the graph, and a shortcut is created between each pair $(u, w)$ of neighboring vertices if the shortest path from $u$ to $w$ contains $v$. [2] The process of determining whether the shortest path between $u$ and $w$ contains $v$ is called a witness search. It can be performed for example by computing a path from $u$ to $w$ using a forward search that uses only not yet contracted nodes. [3]

Figure: Iterated contractions on a line graph. The original graph is the line $(a,b,c,d,e,f)$ (solid). Dashed edges represent shortcuts, grey arrows show which two edges are combined to form the respective shortcut. Vertices have been drawn to represent the node order in which the vertices are being contracted, bottom-to-top. Contracting vertex $c$ introduces a shortcut between $b$ and $d$ with $\mathrm{dist}(b,d)=\mathrm{dist}(b,c)+\mathrm{dist}(c,d)$. Contractions of the vertices $e$ and $d$ introduce one shortcut each. Contractions of $a$, $b$ and $f$ do not introduce any shortcuts and are therefore not shown.
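
The following Python sketch illustrates the contraction of a single vertex together with a witness search, under simplifying assumptions: the graph is undirected and stored as a dictionary of adjacency dictionaries (as in the sketch above), and the witness search is an unbounded Dijkstra search rather than the limited searches used in practice.

```python
import heapq

def witness_distance(graph, source, target, excluded, limit):
    """Dijkstra from source towards target that ignores the vertex 'excluded'
    and gives up once the popped distance exceeds 'limit'."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")) or d > limit:
            continue
        if u == target:
            return d
        for v, w in graph[u].items():
            if v == excluded:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist.get(target, float("inf"))

def contract(graph, v, shortcuts):
    """Contract vertex v: for every pair (u, x) of neighbours whose shortest
    u-x path runs through v, add a shortcut u-x; then remove v."""
    neighbours = list(graph[v].items())
    for i, (u, w_uv) in enumerate(neighbours):
        for x, w_vx in neighbours[i + 1:]:
            via_v = w_uv + w_vx
            # Witness search: is there a u-x path avoiding v that is at most
            # as long as the path through v? If not, a shortcut is needed.
            if witness_distance(graph, u, x, v, via_v) > via_v:
                graph[u][x] = via_v             # the witness search already rules
                graph[x][u] = via_v             # out a shorter existing u-x edge
                shortcuts[(u, x)] = (v, via_v)  # middle vertex and weight,
                shortcuts[(x, u)] = (v, via_v)  # kept for path unpacking
    for u, _ in neighbours:
        del graph[u][v]
    del graph[v]
```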

Node order

The vertices of the input graph have to be contracted in a way that minimizes the number of edges added to the graph by contractions. As finding an optimal node ordering is NP-complete, [10] heuristics are used. [2]

Bottom-up and top-down heuristics exist. On one hand, the computationally cheaper bottom-up heuristics decide the order in which to contract the vertices in a greedy fashion; this means the order is not known in advance but rather the next node is selected for contraction after the previous contraction has been completed. Top-down heuristics on the other hand precompute the whole node ordering before the first node is contracted. This yields better results but needs more preprocessing time. [2]

In bottom-up heuristics, a combination of factors is used to select the next vertex for contraction. As the number of shortcuts is the primary factor that determines preprocessing and query runtime, we want to keep it as small as possible. The most important term by which to select the next node for contraction is therefore the net number of edges added when contracting a node $x$. This is defined as $A(x) - D(x)$, where $A(x)$ is the number of shortcuts that would be created if $x$ were to be contracted and $D(x)$ is the number of edges incident to $x$. Using this criterion alone, a linear path would result in a linear hierarchy (many levels) and no created shortcuts. By considering the number of nearby vertices that are already contracted, a uniform contraction and a flat hierarchy (fewer levels) is achieved. This can, for example, be done by maintaining a counter for each node that is incremented each time a neighboring vertex is contracted. Nodes with lower counters are then preferred to nodes with higher counters. [11]
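
As an illustration, the following sketch (building on the contract and witness_distance helpers from the sketch above, and therefore sharing their simplifying assumptions) selects the next vertex greedily by edge difference, breaking ties towards vertices with fewer already-contracted neighbours. Real implementations use a priority queue with lazy updates and additional terms instead of this quadratic scan.

```python
import copy
from collections import defaultdict

def edge_difference(graph, v):
    """Shortcuts that contracting v would add, minus edges incident to v."""
    neighbours = list(graph[v].items())
    added = 0
    for i, (u, w_uv) in enumerate(neighbours):
        for x, w_vx in neighbours[i + 1:]:
            via_v = w_uv + w_vx
            if witness_distance(graph, u, x, v, via_v) > via_v:
                added += 1
    return added - len(neighbours)

def preprocess(original):
    """Contract all vertices greedily; return the node order, the shortcut
    table (middle vertex and weight per shortcut) and the augmented graph
    (original edges plus shortcuts) used by the query phase."""
    work = copy.deepcopy(original)
    shortcuts = {}
    order = {}
    contracted_neighbours = defaultdict(int)   # counter per remaining vertex
    rank = 0
    while work:
        v = min(work, key=lambda x: (edge_difference(work, x),
                                     contracted_neighbours[x]))
        for u in work[v]:
            contracted_neighbours[u] += 1
        order[v] = rank
        rank += 1
        contract(work, v, shortcuts)
    augmented = copy.deepcopy(original)
    for (u, x), (_, weight) in shortcuts.items():
        augmented[u][x] = min(augmented[u].get(x, float("inf")), weight)
    return order, shortcuts, augmented
```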

Top-down heuristics, on the other hand, yield better results but need more preprocessing time. They classify vertices that are part of many shortest paths as more important than those that are only needed for a few shortest paths. This can be approximated using nested dissections. [2] To compute a nested dissection, one recursively separates a graph into two parts, which are themselves then separated into two parts and so on. That is, one finds a subset of nodes $S \subseteq V$ which, when removed from the graph, separates it into two disjoint pieces $V_1$ and $V_2$ of approximately equal size such that $V = S \cup V_1 \cup V_2$. One places all nodes of $S$ last in the node ordering and then recursively computes the nested dissection for $V_1$ and $V_2$, [12] the intuition being that all queries from one half of the graph to the other half need to pass through the small separator, and therefore the nodes in this separator are of high importance. Nested dissections can be calculated efficiently on road networks because of their small separators. [13]
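
A sketch of the resulting top-down ordering, assuming a balanced-separator routine is available (the find_separator helper below is hypothetical, standing in for a graph-partitioning library): separator vertices are placed after, and are therefore more important than, the two sides they separate.

```python
def nested_dissection_order(graph, vertices=None, find_separator=None):
    """Return a contraction order, least important vertex first.
    `find_separator(graph, vertices)` is assumed to return a small separator
    and the two roughly equal-sized vertex sets it disconnects."""
    if vertices is None:
        vertices = set(graph)
    if len(vertices) <= 2:
        return sorted(vertices)
    separator, side_a, side_b = find_separator(graph, vertices)
    return (nested_dissection_order(graph, side_a, find_separator)
            + nested_dissection_order(graph, side_b, find_separator)
            + sorted(separator))
```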

Query phase

In the query phase, a bidirectional search is performed starting from the starting node $s$ and the target node $t$ on the original graph augmented by the shortcuts created in the preprocessing phase. [2] The most important vertex on the shortest path between $s$ and $t$ will be either $s$ or $t$ themselves or more important than both $s$ and $t$. Therefore, the vertex $v$ minimizing $\mathrm{dist}(s,v) + \mathrm{dist}(v,t)$ among the vertices reached by both searches is on the shortest path in the original graph, and $\mathrm{dist}(s,t) = \mathrm{dist}(s,v) + \mathrm{dist}(v,t)$ holds. [2] This, in combination with how shortcuts are created, means that both forward and backward search only need to relax edges leading to more important nodes (upwards) in the hierarchy, which keeps the search space small. [3] In all up-(down-up)-down paths, the inner (down-up) segment can be skipped, because a shortcut has been created in the preprocessing stage.

Figure: Search space of CH. When computing the shortest path from $s$ to $t$, forward (orange) and backward (blue) search only need to follow edges going upwards in the hierarchy. The found path is marked in red and uses one shortcut (dashed).
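
A minimal sketch of the query phase, assuming the order and augmented graph produced by the preprocessing sketch above (undirected graph, so the backward search is simply an upward search from the target). For brevity it runs both searches to completion instead of using the usual stopping criterion.

```python
import heapq

def upward_search(augmented, order, source):
    """Dijkstra that only relaxes edges leading to more important vertices."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in augmented[u].items():
            if order[v] < order[u]:      # only follow edges going upwards
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def query(augmented, order, s, t):
    """Shortest-path distance: best meeting vertex of both upward searches."""
    forward = upward_search(augmented, order, s)
    backward = upward_search(augmented, order, t)
    return min((forward[v] + backward[v] for v in forward if v in backward),
               default=float("inf"))
```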

Path retrieval

A CH query, as described above, yields the time or distance from $s$ to $t$ but not the actual path. To obtain the list of edges (roads) on the shortest path, the shortcuts taken have to be unpacked. Each shortcut is the concatenation of two edges: either two edges of the original graph, two shortcuts, or one original edge and one shortcut. Storing the middle vertex of each shortcut during contraction enables linear-time recursive unpacking of the shortest route. [2] [3]
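
A sketch of this recursive unpacking, assuming the shortcuts table from the preprocessing sketch (mapping each shortcut to its middle vertex and weight) and a path given as a sequence of vertices in the augmented graph, for example recovered from predecessor pointers of the two upward searches.

```python
def unpack_edge(shortcuts, u, v):
    """Expand the augmented-graph edge (u, v) into a list of original vertices."""
    if (u, v) not in shortcuts:
        return [u, v]                     # an edge of the original graph
    middle, _ = shortcuts[(u, v)]
    return unpack_edge(shortcuts, u, middle) + unpack_edge(shortcuts, middle, v)[1:]

def unpack_path(shortcuts, path):
    """Expand a whole path; 'path' is a vertex sequence in the augmented graph."""
    full = [path[0]]
    for u, v in zip(path, path[1:]):
        full += unpack_edge(shortcuts, u, v)[1:]
    return full
```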

Customized contraction hierarchies

If the edge weights are changed more often than the network topology, CH can be extended to a three-phase approach by including a customization phase between the preprocessing and query phases. This can be used, for example, to switch between shortest distance and shortest time, or to include current traffic information as well as user preferences like avoiding certain types of roads (ferries, highways, ...). In the preprocessing phase, most of the runtime is spent on computing the order in which the nodes are contracted. [3] This sequence of contraction operations can be saved during the preprocessing phase for when it is later needed in the customization phase. Each time the metric is customized, the contractions can then be efficiently applied in the stored order using the custom metric. [2] Additionally, depending on the new edge weights, it may be necessary to recompute some shortcuts. [3] For this to work, the contraction order has to be computed using metric-independent nested dissections. [1]
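
A strongly simplified sketch of the customization idea, under the assumptions of the earlier sketches (undirected graph, dictionary representation): the augmented topology and the metric-independent order stay fixed, and new edge weights are propagated bottom-up, tightening each shortcut via the lower triangles of its endpoints. This illustrates the principle rather than the full customizable contraction hierarchies algorithm.

```python
def customize(topology, order, new_weights):
    """Recompute edge weights for a fixed augmented topology.
    'topology' maps each vertex to the set of its neighbours (original edges
    plus shortcuts); 'new_weights' maps original vertex pairs (u, v), in both
    directions, to the new metric. Shortcuts start at infinity."""
    weight = {}
    for u in topology:
        for v in topology[u]:
            weight[(u, v)] = new_weights.get((u, v), float("inf"))
    # Process vertices from least to most important: when v is handled, the
    # weights of its edges towards more important vertices have already
    # received all updates from vertices below v.
    for v in sorted(topology, key=lambda x: order[x]):
        upward = [u for u in topology[v] if order[u] > order[v]]
        for i, u in enumerate(upward):
            for w in upward[i + 1:]:
                if w in topology[u]:
                    through_v = weight[(v, u)] + weight[(v, w)]
                    if through_v < weight[(u, w)]:
                        weight[(u, w)] = through_v
                        weight[(w, u)] = through_v
    return weight
```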

Extensions and applications

CHs as described above search for a shortest path from one starting node to one target node. This is called one-to-one shortest path and is used for example in car-navigation systems. Other applications include matching GPS traces to road segments and speeding up traffic simulators which have to consider the likely routes taken by all drivers in a network. In route prediction one tries to estimate where a vehicle is likely headed by calculating how well its current and past positions agree with a shortest path from its starting point to any possible target. This can be efficiently done using CHs. [2]

In one-to-many scenarios, a starting node $s$ and a set of target nodes $T$ are given and the distance $\mathrm{dist}(s,t)$ for all $t \in T$ has to be computed. The most prominent application of one-to-many queries is the point-of-interest search. Typical examples include finding the closest gas station, restaurant or post office using actual travel time instead of geographical distance as the metric. [2]

In the many-to-many shortest path scenario, a set of starting nodes $S$ and a set of target nodes $T$ are given and the distances $\mathrm{dist}(s,t)$ for all pairs $(s,t) \in S \times T$ have to be computed. This is used for example in logistics applications. [2] CHs can be extended to many-to-many queries in the following manner. First, perform a backward upward search from each $t \in T$. For each vertex $u$ scanned during this search, store the distance $\mathrm{dist}(u,t)$ in a bucket $B(u)$. Then, run a forward upward search from each $s \in S$, checking for each non-empty bucket whether the route over the corresponding vertex improves any best distance; that is, whether $\mathrm{dist}(s,u) + \mathrm{dist}(u,t) < \mu(s,t)$ for some entry $\mathrm{dist}(u,t) \in B(u)$, where $\mu(s,t)$ denotes the best distance found so far. [2] [3]
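
A sketch of this bucket approach, reusing the upward_search helper and the augmented graph and order from the query sketch (and therefore its undirected, simplified setting):

```python
from collections import defaultdict

def many_to_many(augmented, order, sources, targets):
    """Distances for all pairs in sources x targets via vertex buckets."""
    buckets = defaultdict(list)           # vertex u -> list of (t, dist(u, t))
    for t in targets:
        for u, d in upward_search(augmented, order, t).items():
            buckets[u].append((t, d))
    best = {(s, t): float("inf") for s in sources for t in targets}
    for s in sources:
        for u, d_su in upward_search(augmented, order, s).items():
            for t, d_ut in buckets[u]:
                if d_su + d_ut < best[(s, t)]:
                    best[(s, t)] = d_su + d_ut
    return best
```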

Some applications even require one-to-all computations, i.e., finding the distances from a source vertex $s$ to all other vertices in the graph. As Dijkstra's algorithm visits each edge exactly once and therefore runs in essentially linear time, it is theoretically optimal. Dijkstra's algorithm, however, is hard to parallelize and is not cache-optimal because of its bad locality. CHs can be used for a more cache-optimal implementation. For this, a forward upward search from $s$ is performed, followed by a downward scan over all nodes in the shortcut-enriched graph. The latter operation scans through memory in a linear fashion, as the nodes are processed in decreasing order of importance and can therefore be placed in memory accordingly. [14] Note that this is possible because the order in which the nodes are processed in the second phase is independent of the source node $s$. [2]
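
A sketch of this one-to-all computation in the same simplified setting (reusing upward_search): after the upward search from s, a single sweep over the vertices in decreasing order of importance relaxes the downward edges; when a vertex is reached, the distances of all more important neighbours are already final.

```python
def one_to_all(augmented, order, s):
    """Distances from s to every vertex: upward search plus one downward sweep."""
    dist = {v: float("inf") for v in augmented}
    dist.update(upward_search(augmented, order, s))
    # Sweep vertices from most to least important and relax downward edges.
    for v in sorted(augmented, key=lambda x: order[x], reverse=True):
        for u, w in augmented[v].items():
            if order[u] > order[v] and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```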

In production, car-navigation systems should be able to compute fastest travel routes using predicted traffic information and display alternative routes. Both can be done using CHs. [2] The former is called routing with time-dependent networks where the travel time of a given edge is no longer constant but rather a function of the time of day when entering the edge. Alternative routes need to be smooth-looking, significantly different from the shortest path but not significantly longer. [2]

CHs can be extended to optimize multiple metrics at the same time; this is called multi-criteria route planning. For example, one could minimize both travel cost and time. Another example are electric vehicles for which the available battery charge constrains the valid routes as the battery may not run empty. [2]

Theory

A number of bounds have been established on the preprocessing and query performance of contraction hierarchies. In the following, let $n$ be the number of vertices in the graph, $m$ the number of edges, $h$ the highway dimension, $D$ the graph diameter, $\mathrm{td}$ the tree-depth and $\mathrm{tw}$ the tree-width.

The first analysis of contraction hierarchy performance relies in part on a quantity known as the highway dimension $h$. While the definition of this quantity is technical, intuitively a graph has a small highway dimension if for every radius $r > 0$ there is a sparse set of vertices $S_r$ such that every shortest path of length greater than $r$ includes a vertex from $S_r$. Calculating the exact value of the highway dimension is NP-hard [15] [16] and most likely W[1]-hard, [17] but for grids it is known that the highway dimension is $\Theta(\sqrt{n})$. [18]

An alternative analysis was presented in the Customizable Contraction Hierarchies line of work. Query running times can be bounded in terms of the tree-depth $\mathrm{td}$ of the graph; since the tree-depth can itself be bounded in terms of the tree-width $\mathrm{tw}$, a bound in terms of the tree-width also follows. The main source is [19] but the consequences for the worst-case running times are better detailed in [20].

Preprocessing Performance

Preprocessing time complexity of contraction hierarchies
Algorithm | Year | Time complexity
Randomized preprocessing [21] | 2015 |

Query Performance

Query time complexity of contraction hierarchies
Algorithm / analysis technique | Year | Time complexity | Notes
Bounded growth graphs [22] | 2018 | |
Customizable contraction hierarchies [19] [20] | 2013-2018 | | bounds in terms of the tree-depth $\mathrm{td}$ and the tree-width $\mathrm{tw}$
Randomized preprocessing [21] | 2015 | | exact, no O-notation; works with high probability
Modified SHARC [18] | 2010 | | polynomial preprocessing
Modified SHARC [18] | 2010 | | superpolynomial preprocessing

Related Research Articles

Travelling salesman problem (NP-hard problem in combinatorial optimization)

The travelling salesman problem, also known as the travelling salesperson problem (TSP), asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.

Shortest path problem (computational problem of graph theory)

In graph theory, the shortest path problem is the problem of finding a path between two vertices in a graph such that the sum of the weights of its constituent edges is minimized.

Dijkstra's algorithm (algorithm for finding shortest paths)

Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.

Bellman–Ford algorithm (algorithm for finding the shortest paths in graphs)

The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel, but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm.

In computer science, the Floyd–Warshall algorithm is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights. A single execution of the algorithm will find the lengths of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation, or widest paths between all pairs of vertices in a weighted graph.

Maximum flow problem (computational problem in graph theory)

In optimization theory, maximum flow problems involve finding a feasible flow through a flow network that obtains the maximum possible flow rate.

In computer science, a topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge (u,v) from vertex u to vertex v, u comes before v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. Precisely, a topological sort is a graph traversal in which each node v is visited only after all its dependencies are visited. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG). Any DAG has at least one topological ordering, and algorithms are known for constructing a topological ordering of any DAG in linear time. Topological sorting has many applications, especially in ranking problems such as feedback arc set. Topological sorting is possible even when the DAG has disconnected components.

Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in an edge-weighted directed graph. It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist. It works by using the Bellman–Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph. It is named after Donald B. Johnson, who first published the technique in 1977.

Centrality (degree of connectedness within a graph)

In graph theory and network analysis, indicators of centrality assign numbers or rankings to nodes within a graph corresponding to their network position. Applications include identifying the most influential person(s) in a social network, key infrastructure nodes in the Internet or urban networks, super-spreaders of disease, and brain networks. Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin.

Pathfinding (plotting by a computer application)

Pathfinding or pathing is the search, by a computer application, for the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on Dijkstra's algorithm for finding the shortest path on a weighted graph.

In graph theory, reachability refers to the ability to get from one vertex to another within a graph. A vertex $s$ can reach a vertex $t$ if there exists a sequence of adjacent vertices (i.e. a path) which starts with $s$ and ends with $t$.

In computer science, the Hopcroft–Karp algorithm is an algorithm that takes a bipartite graph as input and produces a maximum-cardinality matching as output, that is, a set of as many edges as possible with the property that no two edges share an endpoint. It runs in $O(|E|\sqrt{|V|})$ time in the worst case, where $E$ is the set of edges in the graph, $V$ is the set of vertices of the graph, and it is assumed that $|E| = \Omega(|V|)$. In the case of dense graphs the time bound becomes $O(|V|^{2.5})$, and for sparse random graphs it runs in time $O(|E| \log |V|)$ with high probability.

Shortest-path tree

In mathematics and computer science, a shortest-path tree rooted at a vertex v of a connected, undirected graph G is a spanning tree T of G, such that the path distance from root v to any other vertex u in T is the shortest path distance from v to u in G.

In graph theory, the planar separator theorem is a form of isoperimetric inequality for planar graphs, that states that any planar graph can be split into smaller pieces by removing a small number of vertices. Specifically, the removal of $O(\sqrt{n})$ vertices from an $n$-vertex graph can partition the graph into disjoint subgraphs each of which has at most $2n/3$ vertices.

In theoretical computer science and network routing, Suurballe's algorithm is an algorithm for finding two disjoint paths in a nonnegatively-weighted directed graph, so that both paths connect the same pair of vertices and have minimum total length. The algorithm was conceived by John W. Suurballe and published in 1974. The main idea of Suurballe's algorithm is to use Dijkstra's algorithm to find one path, to modify the weights of the graph edges, and then to run Dijkstra's algorithm a second time. The output of the algorithm is formed by combining these two paths, discarding edges that are traversed in opposite directions by the paths, and using the remaining edges to form the two paths to return as the output. The modification to the weights is similar to the weight modification in Johnson's algorithm, and preserves the non-negativity of the weights while allowing the second instance of Dijkstra's algorithm to find the correct second path.

Stoer–Wagner algorithm (recursive algorithm in graph theory)

In graph theory, the Stoer–Wagner algorithm is a recursive algorithm to solve the minimum cut problem in undirected weighted graphs with non-negative weights. It was proposed by Mechthild Stoer and Frank Wagner in 1995. The essential idea of this algorithm is to shrink the graph by merging the most intensive vertices, until the graph only contains two combined vertex sets. At each phase, the algorithm finds the minimum $s$-$t$ cut for two vertices $s$ and $t$ chosen at its will. Then the algorithm shrinks the edge between $s$ and $t$ to search for non-$s$-$t$ cuts. The minimum cut found in all phases will be the minimum weighted cut of the graph.

In applied mathematics, transit node routing can be used to speed up shortest-path routing by pre-computing connections between common access nodes to a sub-network relevant to long-distance travel.

A central problem in algorithmic graph theory is the shortest path problem. One of the generalizations of the shortest path problem is known as the single-source-shortest-paths (SSSP) problem, which consists of finding the shortest paths from a source vertex to all other vertices in the graph. There are classical sequential algorithms which solve this problem, such as Dijkstra's algorithm. In this article, however, we present two parallel algorithms solving this problem.

The highway dimension is a graph parameter modelling transportation networks, such as road networks or public transportation networks. It was first formally defined by Abraham et al. based on the observation by Bast et al. that any road network has a sparse set of "transit nodes", such that driving from a point A to a sufficiently far away point B along the shortest route will always pass through one of these transit nodes. It has also been proposed that the highway dimension captures the properties of public transportation networks well, given that longer routes using busses, trains, or airplanes will typically be serviced by larger transit hubs. This relates to the spoke–hub distribution paradigm in transport topology optimization.

Brandes' algorithm (algorithm for finding important nodes in a graph)

In network theory, Brandes' algorithm is an algorithm for calculating the betweenness centrality of vertices in a graph. The algorithm was first published in 2001 by Ulrik Brandes. Betweenness centrality, along with other measures of centrality, is an important measure in many real-world networks, such as social networks and computer networks.

References

  1. Dibbelt, Julian; Strasser, Ben; Wagner, Dorothea (5 April 2016). "Customizable Contraction Hierarchies". Journal of Experimental Algorithmics. 21 (1): 1–49. arXiv: 1402.0402 . doi:10.1145/2886843. S2CID   5247950.
  2. Bast, Hannah; Delling, Daniel; Goldberg, Andrew V.; Müller-Hannemann, Matthias; Pajor, Thomas; Sanders, Peter; Wagner, Dorothea; Werneck, Renato F. (2016). "Route Planning in Transportation Networks". Algorithm Engineering. Lecture Notes in Computer Science. Vol. 9220. pp. 19–80. arXiv: 1504.05140 . doi:10.1007/978-3-319-49487-6_2. ISBN   978-3-319-49486-9. S2CID   14384915.
  3. Geisberger, Robert; Sanders, Peter; Schultes, Dominik; Vetter, Christian (2012). "Exact Routing in Large Road Networks Using Contraction Hierarchies". Transportation Science. 46 (3): 388–404. doi:10.1287/trsc.1110.0401.
  4. Delling, Daniel; Sanders, Peter; Schultes, Dominik; Wagner, Dorothea (2009). "Engineering Route Planning Algorithms". Algorithmics of Large and Complex Networks. Lecture Notes in Computer Science. Vol. 5515. pp. 117–139. doi:10.1007/978-3-642-02094-0_7. ISBN   978-3-642-02093-3.
  5. "OSRM – Open Source Routing Machine".
  6. "Wiki – OpenTripPlanner".
  7. "Web – GraphHopper".
  8. "GitHub – Tempus". GitHub . 9 September 2021.
  9. "GitHub – RoutingKit". GitHub . 24 January 2022.
  10. Bauer, Reinhard; Delling, Daniel; Sanders, Peter; Schieferdecker, Dennis; Schultes, Dominik; Wagner, Dorothea (2010-03-01). "Combining hierarchical and goal-directed speed-up techniques for dijkstra's algorithm". Journal of Experimental Algorithmics. 15: 2.1. doi:10.1145/1671970.1671976. ISSN   1084-6654. S2CID   1661292.
  11. Geisberger, Robert; Sanders, Peter; Schultes, Dominik; Delling, Daniel (2008). "Contraction Hierarchies: Faster and Simpler Hierarchical Routing in Road Networks". In McGeoch, Catherine C. (ed.). Experimental Algorithms. Lecture Notes in Computer Science. Vol. 5038. Springer Berlin Heidelberg. pp. 319–333. doi:10.1007/978-3-540-68552-4_24. ISBN   9783540685524. S2CID   777101.
  12. Bauer, Reinhard; Columbus, Tobias; Rutter, Ignaz; Wagner, Dorothea (2016-09-13). "Search-space size in contraction hierarchies". Theoretical Computer Science. 645: 112–127. doi: 10.1016/j.tcs.2016.07.003 . ISSN   0304-3975.
  13. Delling, Daniel; Goldberg, Andrew V.; Razenshteyn, Ilya; Werneck, Renato F. (May 2011). "Graph Partitioning with Natural Cuts". 2011 IEEE International Parallel & Distributed Processing Symposium. pp. 1135–1146. CiteSeerX   10.1.1.385.1580 . doi:10.1109/ipdps.2011.108. ISBN   978-1-61284-372-8. S2CID   6884123.
  14. Delling, Daniel; Goldberg, Andrew V.; Nowatzyk, Andreas; Werneck, Renato F. (2011). "PHAST: Hardware-Accelerated Shortest Path Trees". 2011 IEEE International Parallel & Distributed Processing Symposium. pp. 921–931. doi:10.1109/ipdps.2011.89. ISBN   978-1-61284-372-8. S2CID   1419921.
  15. Feldmann, Andreas Emil; Fung, Wai Shing; Könemann, Jochen; Post, Ian (2018-01-01). "A $(1+\varepsilon)$-Embedding of Low Highway Dimension Graphs into Bounded Treewidth Graphs". SIAM Journal on Computing. 47 (4): 1667–1704. arXiv: 1502.04588 . doi:10.1137/16M1067196. ISSN   0097-5397. S2CID   11339698.
  16. Blum, Johannes (2019). "Hierarchy of Transportation Network Parameters and Hardness Results". In Jansen, Bart M. P.; Telle, Jan Arne (eds.). 14th International Symposium on Parameterized and Exact Computation (IPEC 2019). Leibniz International Proceedings in Informatics. Vol. 148. Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. pp. 4:1–4:15. doi: 10.4230/LIPIcs.IPEC.2019.4 . ISBN   978-3-95977-129-0. S2CID   166228480.
  17. Blum, Johannes; Disser, Yann; Feldmann, Andreas Emil; Gupta, Siddharth; Zych-Pawlewicz, Anna (2022). "On Sparse Hitting Sets: From Fair Vertex Cover to Highway Dimension". In Dell, Holger; Nederlof, Jesper (eds.). 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics. Vol. 249. Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik. pp. 5:1–5:23. doi: 10.4230/LIPIcs.IPEC.2022.5 . ISBN   978-3-95977-260-0.
  18. Abraham, Ittai; Fiat, Amos; Goldberg, Andrew (2010). Highway dimension, shortest paths, and provably efficient algorithms (PDF). Proceedings of the 2010 annual ACM-SIAM symposium on discrete algorithms. doi: 10.1137/1.9781611973075.64 .
  19. Dibbelt, Julian; Strasser, Ben; Wagner, Dorothea (2016). "Customizable Contraction Hierarchies". ACM Journal of Experimental Algorithmics. 21: 1–49. arXiv: 1402.0402 . doi:10.1145/2886843. S2CID   5247950.
  20. Hamann, Michael; Strasser, Ben (2018). "Graph Bisection with Pareto Optimization". ACM Journal of Experimental Algorithmics. 23: 1–34. arXiv: 1504.03812 . doi:10.1145/3173045. S2CID   3395784.
  21. Funke, Stefan; Storandt, Sabine (2015). "Provable Efficiency of Contraction Hierarchies with Randomized Preprocessing". Algorithms and Computation. Lecture Notes in Computer Science. Vol. 9472. pp. 479–490. doi:10.1007/978-3-662-48971-0_41. ISBN   978-3-662-48971-0.
  22. Blum, Johannes; Funke, Stefan; Storandt, Sabine (2018). Sublinear Search Spaces for Shortest Path Planning in Grid and Road Networks (PDF). AAAI.

Open source implementations