Parameterized approximation algorithm

A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in time that is polynomial in the input size and an arbitrary function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability.

In traditional approximation algorithms, the goal is to find solutions that are at most a certain factor α away from the optimal solution, known as an α-approximation, in polynomial time. On the other hand, parameterized algorithms are designed to find exact solutions to problems, but with the constraint that the running time of the algorithm is polynomial in the input size and a function of a specific parameter k. The parameter describes some property of the input and is small in typical applications. The problem is said to be fixed-parameter tractable (FPT) if there is an algorithm that can find the optimum solution in f(k)·n^{O(1)} time, where f is a function independent of the input size n.

A parameterized approximation algorithm aims to find a balance between these two approaches by finding approximate solutions in FPT time: the algorithm computes an α-approximation in f(k)·n^{O(1)} time, where f is a function independent of the input size n. This approach aims to overcome the limitations of both traditional approaches by having stronger guarantees on the solution quality compared to traditional approximations, while still having efficient running times as in FPT algorithms. An overview of the research area studying parameterized approximation algorithms can be found in the survey of Marx [1] and the more recent survey by Feldmann et al. [2]
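In symbols, the three regimes can be contrasted as follows for a minimization problem, writing n for the input size, k for the parameter, OPT for the optimum value, and ALG for the value of the returned solution:

```latex
\begin{align*}
\text{polynomial-time } \alpha\text{-approximation:}\quad & \mathrm{ALG} \le \alpha \cdot \mathrm{OPT} && \text{in time } n^{O(1)},\\
\text{FPT (exact) algorithm:}\quad & \mathrm{ALG} = \mathrm{OPT} && \text{in time } f(k)\cdot n^{O(1)},\\
\text{parameterized } \alpha\text{-approximation:}\quad & \mathrm{ALG} \le \alpha \cdot \mathrm{OPT} && \text{in time } f(k)\cdot n^{O(1)}.
\end{align*}
```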

Obtainable approximation ratios

The full potential of parameterized approximation algorithms is utilized when a given optimization problem is shown to admit an α-approximation algorithm running in f(k)·n^{O(1)} time, while in contrast the problem neither has a polynomial-time α-approximation algorithm (under some complexity assumption, e.g., P ≠ NP), nor an FPT algorithm for the given parameter k (i.e., it is at least W[1]-hard).

For example, some problems that are APX-hard and W[1]-hard admit a parameterized approximation scheme (PAS), i.e., for any ε > 0 a (1+ε)-approximation can be computed in f(k,ε)·n^{g(ε)} time for some functions f and g. This then circumvents the lower bounds in terms of polynomial-time approximation and fixed-parameter tractability. A PAS is similar in spirit to a polynomial-time approximation scheme (PTAS), but additionally exploits a given parameter k. Since the degree of the polynomial in the runtime of a PAS depends on a function g(ε), the value of ε is assumed to be arbitrary but constant in order for the PAS to run in FPT time. If this assumption is unsatisfying, ε is treated as a parameter as well to obtain an efficient parameterized approximation scheme (EPAS), which for any ε > 0 computes a (1+ε)-approximation in f(k,ε)·n^{O(1)} time for some function f. This is similar in spirit to an efficient polynomial-time approximation scheme (EPTAS).
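The four scheme variants differ only in where ε may enter the running time; each computes a (1+ε)-approximation:

```latex
\begin{align*}
\text{PTAS:}\quad & n^{g(\varepsilon)} & \text{EPTAS:}\quad & f(\varepsilon)\cdot n^{O(1)}\\
\text{PAS:}\quad & f(k,\varepsilon)\cdot n^{g(\varepsilon)} & \text{EPAS:}\quad & f(k,\varepsilon)\cdot n^{O(1)}
\end{align*}
```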

k-cut

The k-cut problem has no polynomial-time (2−ε)-approximation algorithm for any ε > 0, assuming P ≠ NP and the Small Set Expansion Hypothesis. [3] It is also W[1]-hard parameterized by the number k of required components. [4] However, an EPAS exists, which computes a (1+ε)-approximation in (k/ε)^{O(k)}·n^{O(1)} time. [5]

Steiner Tree

The Steiner tree problem is FPT parameterized by the number of terminals. [6] However, for the "dual" parameter p, the number of non-terminals (Steiner vertices) contained in the optimum solution, the problem is W[2]-hard (due to a folklore reduction from the Dominating Set problem). Steiner Tree is also known to be APX-hard. [7] However, there is an EPAS computing a (1+ε)-approximation in f(p,ε)·n^{O(1)} time. [8]
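The FPT algorithm behind the first result is the classic Dreyfus–Wagner dynamic program, which runs in 3^k·n^{O(1)} time for k terminals. A minimal Python sketch (the function name and the graph encoding are illustrative, not taken from the cited paper):

```python
INF = float('inf')

def steiner_tree_cost(n, edges, terminals):
    """Dreyfus-Wagner dynamic program: weight of a minimum Steiner tree
    connecting `terminals` in an undirected graph on vertices 0..n-1,
    given as a list of weighted edges (u, v, w)."""
    # All-pairs shortest paths via Floyd-Warshall.
    d = [[INF] * n for _ in range(n)]
    for v in range(n):
        d[v][v] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]

    t = len(terminals)
    # dp[S][v] = minimum weight of a tree connecting all terminals in S plus v.
    dp = [[INF] * n for _ in range(1 << t)]
    for i, term in enumerate(terminals):
        for v in range(n):
            dp[1 << i][v] = d[term][v]
    for S in range(1, 1 << t):
        if S & (S - 1) == 0:
            continue  # singleton sets already initialized
        # Merge two subtrees meeting at vertex v.
        for v in range(n):
            sub = (S - 1) & S
            while sub:
                dp[S][v] = min(dp[S][v], dp[sub][v] + dp[S ^ sub][v])
                sub = (sub - 1) & S
        # Extend trees along shortest paths to a new attachment vertex.
        for v in range(n):
            for u in range(n):
                dp[S][v] = min(dp[S][v], dp[S][u] + d[u][v])
    return min(dp[(1 << t) - 1])
```

The subset enumeration over all (S, sub) pairs is where the 3^k factor comes from; the shortest-path relaxations contribute only polynomial overhead.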

Strongly-connected Steiner subgraph

It is known that the Strongly Connected Steiner Subgraph problem is W[1]-hard parameterized by the number k of terminals, [9] and also does not admit an O(log^{2−ε} n)-approximation in polynomial time for any ε > 0 (under standard complexity assumptions). [10] However, a 2-approximation can be computed in f(k)·n^{O(1)} time. [11] Furthermore, this is best possible, since no (2−ε)-approximation can be computed in f(k)·n^{O(1)} time for any function f, under Gap-ETH. [12]

k-median and k-means

For the well-studied metric clustering problems of k-Median and k-Means parameterized by the number k of centers, it is known that no (1 + 2/e − ε)-approximation for k-Median and no (1 + 8/e − ε)-approximation for k-Means can be computed in f(k)·n^{O(1)} time for any function f, under Gap-ETH. [13] Matching parameterized approximation algorithms exist, [13] but it is not known whether matching approximations can be computed in polynomial time.

Clustering is often considered in settings of low-dimensional data, and thus a practically relevant parameterization is by the dimension of the underlying metric. In d-dimensional Euclidean space, the k-Median and k-Means problems admit an EPAS parameterized by the dimension d, [14] [15] and also an EPAS parameterized by k. [16] [17] The former was generalized to an EPAS for the parameterization by the doubling dimension. [18] For the loosely related highway dimension parameter, only an approximation scheme with XP runtime is known to date. [19]

k-center

For the metric k-Center problem a 2-approximation can be computed in polynomial time. However, when parameterizing by either the number k of centers, [20] the doubling dimension (in fact, the dimension of a Manhattan metric), [21] or the highway dimension, [20] no parameterized (2−ε)-approximation algorithm exists, under standard complexity assumptions. Furthermore, the k-Center problem is W[1]-hard even on planar graphs when simultaneously parameterizing it by the number k of centers, the doubling dimension, the highway dimension, and the pathwidth. [22] However, when combining k with the doubling dimension an EPAS exists, [22] and the same is true when combining k with the highway dimension. [23] For the more general version with vertex capacities, an EPAS exists for the parameterization by k and the doubling dimension, but not when using k and the highway dimension as the parameter. [24] Regarding the pathwidth, k-Center admits an EPAS even for the more general treewidth parameter, and also for cliquewidth. [25]
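One classic way to obtain the polynomial-time 2-approximation mentioned above is Gonzalez's farthest-first traversal (an illustration, not an algorithm cited in this article): repeatedly open a center at the point farthest from all centers chosen so far.

```python
def k_center_2approx(points, k, dist):
    """Gonzalez's farthest-first traversal: a 2-approximation for metric
    k-Center. `dist` must be a metric on the given points."""
    centers = [points[0]]  # start from an arbitrary point
    while len(centers) < k:
        # Open the next center at the point farthest from its nearest center.
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    # Covering radius achieved by the chosen centers.
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius
```

The triangle inequality is what bounds the returned radius by twice the optimum, which is consistent with the hardness results above: beating factor 2 is hard even in FPT time for the listed parameters.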

Densest subgraph

An optimization variant of the k-Clique problem is the Densest k-Subgraph problem (a 2-ary constraint satisfaction problem), where the task is to find a subgraph on k vertices with a maximum number of edges. It is not hard to obtain an O(k)-approximation by just picking a matching of size k/2 in the given input graph, since the maximum number of edges on k vertices is always at most k(k−1)/2. This is also asymptotically optimal, since under Gap-ETH no k^{1−o(1)}-approximation can be computed in FPT time parameterized by k. [26]
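The matching step described above can be sketched as follows (names illustrative; this covers the case where the input graph contains a matching of size k/2, in which the chosen vertices already induce at least k/2 edges while no k vertices induce more than k(k−1)/2):

```python
def dks_matching_approx(vertices, edges, k):
    """Greedily pick up to k//2 pairwise-disjoint edges, then pad the
    endpoint set with arbitrary vertices up to size k."""
    chosen, used = [], set()
    for u, v in edges:
        if len(chosen) == k // 2:
            break
        if u not in used and v not in used:  # edge disjoint from picks so far
            chosen.append((u, v))
            used.update((u, v))
    S = set(used)
    for v in vertices:  # pad to exactly k vertices
        if len(S) == k:
            break
        S.add(v)
    return S
```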

Dominating set

For the Dominating Set problem it is W[1]-hard to compute any g(k)-approximation in f(k)·n^{O(1)} time, for any functions f and g. [27]

Approximate kernelization

Kernelization is a technique used in fixed-parameter tractability to pre-process an instance of an NP-hard problem in order to remove "easy parts" and reveal the NP-hard core of the instance. A kernelization algorithm takes an instance I and a parameter k, and returns a new instance I′ with parameter k′ such that the size of I′ and k′ are bounded as a function of the input parameter k, and the algorithm runs in polynomial time. An α-approximate kernelization algorithm is a variation of this technique that is used in parameterized approximation algorithms. It returns a kernel I′ such that any β-approximation in I′ can be converted into an α·β-approximation to the input instance I in polynomial time. This notion was introduced by Lokshtanov et al., [28] but there are other related notions in the literature, such as Turing kernels [29] and α-fidelity kernelization. [30]
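For intuition, a classic exact kernelization (a textbook example, not taken from the cited works) is the Buss rule for Vertex Cover: a vertex of degree greater than k must belong to every cover of size at most k, and once no such vertex remains, a YES-instance can have at most k² edges. A hedged Python sketch:

```python
from collections import defaultdict

def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover. Returns a reduced instance
    (kernel_edges, reduced_k, forced_vertices), or None if (G, k) is
    recognizably a NO-instance."""
    edges = {tuple(sorted(e)) for e in edges}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        # Rule: a vertex of degree > k is in every cover of size <= k.
        high = [v for v, d in deg.items() if d > k]
        if high:
            v = high[0]
            forced.add(v)
            edges = {e for e in edges if v not in e}
            k -= 1
            changed = True
    if k < 0:
        return None
    # With maximum degree <= k, a YES-instance has at most k*k edges.
    if len(edges) > k * k:
        return None
    return edges, k, forced
```

The kernel size is bounded purely in terms of k, which is the defining property; an α-approximate kernel relaxes this by only preserving solution quality up to the factor α.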

As for regular (non-approximate) kernels, a problem admits an α-approximate kernelization algorithm if and only if it has a parameterized α-approximation algorithm. The proof of this fact is very similar to the one for regular kernels. [28] However, the guaranteed approximate kernel might be of exponential size (or worse) in the input parameter. Hence it becomes interesting to find problems that admit polynomial-sized approximate kernels. Furthermore, a polynomial-sized approximate kernelization scheme (PSAKS) is an α-approximate kernelization algorithm that computes a polynomial-sized kernel and for which α can be set to 1+ε for any ε > 0.

For example, while the Connected Vertex Cover problem is FPT parameterized by the solution size, it does not admit a (regular) polynomial-sized kernel (unless NP ⊆ coNP/poly), but a PSAKS exists. [28] Similarly, the Steiner Tree problem is FPT parameterized by the number of terminals, does not admit a polynomial-sized kernel (unless NP ⊆ coNP/poly), but a PSAKS exists. [28] When parameterizing Steiner Tree by the number of non-terminals in the optimum solution, the problem is W[2]-hard (and thus admits no exact kernel at all, unless FPT = W[2]), but still admits a PSAKS. [8]


References

  1. Marx, Daniel (2008). "Parameterized Complexity and Approximation Algorithms". The Computer Journal. 51 (1): 60–78. doi:10.1093/comjnl/bxm048.
  2. Feldmann, Andreas Emil; Karthik C. S.; Lee, Euiwoong; Manurangsi, Pasin (2020). "A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms". Algorithms. 13 (6): 146. doi:10.3390/a13060146. ISSN 1999-4893. This article incorporates text from this source, which is available under the CC BY 4.0 license.
  3. Manurangsi, Pasin (2018). "Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis". Algorithms. 11 (1): 10. doi: 10.3390/a11010010 . ISSN   1999-4893.
  4. Downey, Rodney G.; Estivill-Castro, Vladimir; Fellows, Michael; Prieto, Elena; Rosamund, Frances A. (2003-04-01). "Cutting Up Is Hard To Do: The Parameterised Complexity of k-Cut and Related Problems". Electronic Notes in Theoretical Computer Science. CATS'03, Computing: the Australasian Theory Symposium. 78: 209–222. doi:10.1016/S1571-0661(04)81014-4. ISSN 1571-0661.
  5. Lokshtanov, Daniel; Saurabh, Saket; Surianarayanan, Vaishali (2022-04-25). "A Parameterized Approximation Scheme for Min k-Cut". SIAM Journal on Computing: FOCS20–205. arXiv:2005.00134. doi:10.1137/20M1383197. ISSN 0097-5397.
  6. Dreyfus, S. E.; Wagner, R. A. (1971). "The steiner problem in graphs". Networks. 1 (3): 195–207. doi:10.1002/net.3230010302.
  7. Chlebík, Miroslav; Chlebíková, Janka (2008-10-31). "The Steiner tree problem on graphs: Inapproximability results". Theoretical Computer Science. Algorithmic Aspects of Global Computing. 406 (3): 207–214. doi:10.1016/j.tcs.2008.06.046. ISSN   0304-3975.
  8. Dvořák, Pavel; Feldmann, Andreas E.; Knop, Dušan; Masařík, Tomáš; Toufar, Tomáš; Veselý, Pavel (2021-01-01). "Parameterized Approximation Schemes for Steiner Trees with Small Number of Steiner Vertices". SIAM Journal on Discrete Mathematics. 35 (1): 546–574. doi:10.1137/18M1209489. ISSN 0895-4801. S2CID 3581913.
  9. Guo, Jiong; Niedermeier, Rolf; Suchý, Ondřej (2011-01-01). "Parameterized Complexity of Arc-Weighted Directed Steiner Problems". SIAM Journal on Discrete Mathematics. 25 (2): 583–599. doi:10.1137/100794560. ISSN   0895-4801.
  10. Halperin, Eran; Krauthgamer, Robert (2003-06-09). "Polylogarithmic inapproximability". Proceedings of the thirty-fifth annual ACM symposium on Theory of computing. STOC '03. New York, NY, USA: Association for Computing Machinery. pp. 585–594. doi:10.1145/780542.780628. ISBN   978-1-58113-674-6. S2CID   8554166.
  11. Chitnis, Rajesh; Hajiaghayi, MohammadTaghi; Kortsarz, Guy (2013). Gutin, Gregory; Szeider, Stefan (eds.). "Fixed-Parameter and Approximation Algorithms: A New Look". Parameterized and Exact Computation. Lecture Notes in Computer Science. Cham: Springer International Publishing. 8246: 110–122. arXiv: 1308.3520 . doi:10.1007/978-3-319-03898-8_11. ISBN   978-3-319-03898-8. S2CID   6796132.
  12. Chitnis, Rajesh; Feldmann, Andreas Emil; Manurangsi, Pasin (2021-04-19). "Parameterized Approximation Algorithms for Bidirected Steiner Network Problems". ACM Transactions on Algorithms. 17 (2): 12:1–12:68. doi:10.1145/3447584. ISSN   1549-6325. S2CID   235372580.
  13. Cohen-Addad, Vincent; Gupta, Anupam; Kumar, Amit; Lee, Euiwoong; Li, Jason (2019). Baier, Christel; Chatzigiannakis, Ioannis; Flocchini, Paola; Leonardi, Stefano (eds.). "Tight FPT Approximations for k-Median and k-Means". 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. 132: 42:1–42:14. doi:10.4230/LIPIcs.ICALP.2019.42. ISBN 978-3-95977-109-2. S2CID 139103417.
  14. Kolliopoulos, Stavros G.; Rao, Satish (1999), Nešetřil, Jaroslav (ed.), "A Nearly Linear-Time Approximation Scheme for the Euclidean k-median Problem", Algorithms - ESA '99, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 1643, pp. 378–389, doi:10.1007/3-540-48481-7_33, ISBN 978-3-540-66251-8.
  15. Cohen-Addad, Vincent (2018-01-01), "A Fast Approximation Scheme for Low-Dimensional k-Means", Proceedings of the 2018 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Proceedings, Society for Industrial and Applied Mathematics, pp. 430–440, arXiv:1708.07381, doi:10.1137/1.9781611975031.29, ISBN 978-1-61197-503-1, S2CID 30474859.
  16. Feldman, Dan; Monemizadeh, Morteza; Sohler, Christian (2007-06-06). "A PTAS for k-means clustering based on weak coresets". Proceedings of the twenty-third annual symposium on Computational geometry - SCG '07. SCG '07. New York, NY, USA: Association for Computing Machinery. pp. 11–18. doi:10.1145/1247069.1247072. ISBN   978-1-59593-705-6. S2CID   5694112.
  17. Feldman, Dan; Langberg, Michael (2011-06-06). "A unified framework for approximating and clustering data". Proceedings of the forty-third annual ACM symposium on Theory of computing. STOC '11. New York, NY, USA: Association for Computing Machinery. pp. 569–578. doi:10.1145/1993636.1993712. ISBN   978-1-4503-0691-1. S2CID   2677556.
  18. Cohen-Addad, Vincent; Feldmann, Andreas Emil; Saulpic, David (2021-10-31). "Near-linear Time Approximation Schemes for Clustering in Doubling Metrics". Journal of the ACM. 68 (6): 44:1–44:34. arXiv: 1812.08664 . doi:10.1145/3477541. ISSN   0004-5411. S2CID   240476191.
  19. Feldmann, Andreas Emil; Saulpic, David (2021-12-01). "Polynomial time approximation schemes for clustering in low highway dimension graphs". Journal of Computer and System Sciences. 122: 72–93. doi:10.1016/j.jcss.2021.06.002. ISSN   0022-0000.
  20. Feldmann, Andreas Emil (2019-03-01). "Fixed-Parameter Approximations for k-Center Problems in Low Highway Dimension Graphs". Algorithmica. 81 (3): 1031–1052. arXiv:1605.02530. doi:10.1007/s00453-018-0455-0. ISSN 1432-0541. S2CID 46886829.
  21. Feder, Tomás; Greene, Daniel (1988-01-01). "Optimal algorithms for approximate clustering". Proceedings of the twentieth annual ACM symposium on Theory of computing - STOC '88. STOC '88. New York, NY, USA: Association for Computing Machinery. pp. 434–444. doi:10.1145/62212.62255. ISBN   978-0-89791-264-8. S2CID   658151.
  22. Feldmann, Andreas Emil; Marx, Dániel (2020-07-01). "The Parameterized Hardness of the k-Center Problem in Transportation Networks". Algorithmica. 82 (7): 1989–2005. doi:10.1007/s00453-020-00683-w. ISSN 1432-0541. S2CID 3532236.
  23. Becker, Amariah; Klein, Philip N.; Saulpic, David (2018). Azar, Yossi; Bast, Hannah; Herman, Grzegorz (eds.). "Polynomial-Time Approximation Schemes for k-center, k-median, and Capacitated Vehicle Routing in Bounded Highway Dimension". 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. 112: 8:1–8:15. doi:10.4230/LIPIcs.ESA.2018.8. ISBN   978-3-95977-081-1.
  24. Feldmann, Andreas Emil; Vu, Tung Anh (2022). Bekos, Michael A.; Kaufmann, Michael (eds.). "Generalized k-Center: Distinguishing Doubling and Highway Dimension". Graph-Theoretic Concepts in Computer Science. Lecture Notes in Computer Science. Cham: Springer International Publishing. 13453: 215–229. arXiv:2209.00675. doi:10.1007/978-3-031-15914-5_16. ISBN 978-3-031-15914-5.
  25. Katsikarelis, Ioannis; Lampis, Michael; Paschos, Vangelis Th. (2019-07-15). "Structural parameters, tight bounds, and approximation for (k,r)-center". Discrete Applied Mathematics. Combinatorial Optimization: between Practice and Theory. 264: 90–117. arXiv: 1704.08868 . doi:10.1016/j.dam.2018.11.002. ISSN   0166-218X.
  26. Dinur, Irit; Manurangsi, Pasin (2018). Karlin, Anna R. (ed.). "ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network". 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. 94: 36:1–36:20. doi:10.4230/LIPIcs.ITCS.2018.36. ISBN   978-3-95977-060-6. S2CID   4681120.
  27. S., Karthik C.; Laekhanukit, Bundit; Manurangsi, Pasin (2018-06-20). "On the parameterized complexity of approximating dominating set". Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. STOC 2018. New York, NY, USA: Association for Computing Machinery. pp. 1283–1296. arXiv: 1711.11029 . doi:10.1145/3188745.3188896. ISBN   978-1-4503-5559-9. S2CID   3170316.
  28. Lokshtanov, Daniel; Panolan, Fahad; Ramanujan, M. S.; Saurabh, Saket (2017-06-19). "Lossy kernelization". Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. STOC 2017. New York, NY, USA: Association for Computing Machinery. pp. 224–237. doi:10.1145/3055399.3055456. ISBN 978-1-4503-4528-6. S2CID 14599219.
  29. Hermelin, Danny; Kratsch, Stefan; Sołtys, Karolina; Wahlström, Magnus; Wu, Xi (2015-03-01). "A Completeness Theory for Polynomial (Turing) Kernelization". Algorithmica. 71 (3): 702–730. doi:10.1007/s00453-014-9910-8. ISSN   1432-0541. S2CID   253973283.
  30. Fellows, Michael R.; Kulik, Ariel; Rosamond, Frances; Shachnai, Hadas (2018-05-01). "Parameterized approximation via fidelity preserving transformations". Journal of Computer and System Sciences. 93: 30–40. doi:10.1016/j.jcss.2017.11.001. ISSN   0022-0000.