K-server problem

Unsolved problem in computer science:

Is there a k-competitive algorithm for solving the k-server problem in an arbitrary metric space?


The k-server problem is a problem of theoretical computer science in the category of online algorithms, one of two abstract problems on metric spaces that are central to the theory of competitive analysis (the other being metrical task systems). In this problem, an online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also in the form of points in the space. As each request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance its servers move small relative to the total distance the servers would have moved by an optimal adversary who knows the entire sequence of requests in advance.
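
Formally, an online algorithm is c-competitive if its total movement cost on every request sequence is at most c times the optimal offline cost, plus an additive constant. The following minimal sketch of the online setting is our own illustration (the names `serve` and `greedy` are ours, not a standard API); the greedy policy it includes is the one dissected in the example below.

```python
# A minimal sketch of the online k-server setting (our illustration).

def serve(policy, d, servers, requests):
    """Run an online policy. d(a, b) is the metric; servers are start positions."""
    servers = list(servers)
    total = 0
    for r in requests:
        i = policy(servers, r, d)   # the policy sees only the past, never future requests
        total += d(servers[i], r)   # pay the distance the chosen server travels
        servers[i] = r              # that server now sits on the request point
    return total

def greedy(servers, r, d):
    """The natural policy: always move the server closest to the request.
    (As the example below shows, this is not competitive.)"""
    return min(range(len(servers)), key=lambda j: d(servers[j], r))
```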

The problem was first posed by Mark Manasse, Lyle A. McGeoch and Daniel Sleator (1988). [1] The most prominent open question concerning the k-server problem is the so-called k-server conjecture, also posed by Manasse et al. This conjecture states that there is an algorithm for solving the k-server problem in an arbitrary metric space and for any number k of servers that has competitive ratio exactly k. Manasse et al. were able to prove their conjecture for k = 2, and for arbitrary k in metric spaces with exactly k+1 points. Chrobak and Larmore (1991) proved the conjecture for tree metrics. The special case of metrics in which all distances are equal is called the paging problem because it models the problem of page replacement algorithms in memory caches, and it was already known to have a k-competitive algorithm (Sleator and Tarjan 1985). Fiat et al. (1990) first proved that there exists an algorithm with a finite competitive ratio for any constant k and any metric space, and Koutsoupias and Papadimitriou (1995) proved that the Work Function Algorithm (WFA) has competitive ratio 2k - 1. However, despite the efforts of many other researchers, reducing the competitive ratio to k or providing an improved lower bound remained open as of 2014. The most commonly believed scenario is that the Work Function Algorithm is k-competitive. In this direction, Bartal and Koutsoupias (2000) showed that this is true in some special cases: when the metric space is a line, a weighted star, or any metric of k+2 points.
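
The Work Function Algorithm is easy to state, if expensive to run. Writing w_t(X) for the cheapest way to serve the first t requests starting from the initial configuration and ending with the servers on the point set X, the algorithm serves request r from configuration A by moving the server x that minimizes w_t(A - x + r) + d(x, r). The sketch below is our own brute-force rendering of this definition for a small finite metric space: it enumerates all k-subsets of points (so it runs in exponential time and assumes servers occupy distinct points), and the function names are ours, not from the literature.

```python
from itertools import combinations, permutations

def match_cost(X, Y, d):
    """Cheapest way to move servers from configuration X onto configuration Y
    (brute-force minimum-cost perfect matching between the two point sets)."""
    X = list(X)
    return min(sum(d[x][y] for x, y in zip(X, p)) for p in permutations(Y))

def wfa(points, d, start, requests):
    """Work Function Algorithm on a small finite metric space.

    points   -- all points of the metric space
    d        -- dict of dicts of symmetric distances, with d[p][p] == 0
    start    -- initial configuration: k distinct points
    requests -- sequence of requested points
    """
    k = len(start)
    configs = [frozenset(c) for c in combinations(points, k)]
    # w[X] = w_t(X): cheapest way to serve the requests seen so far,
    # starting from `start` and ending with the servers on X.
    w = {X: match_cost(start, X, d) for X in configs}
    current, online_cost = frozenset(start), 0
    for r in requests:
        # Textbook update: w_t(X) = min over Y containing r of w_{t-1}(Y) + match_cost(Y, X).
        w = {X: min(w[Y] + match_cost(Y, X, d) for Y in configs if r in Y)
             for X in configs}
        if r not in current:
            # WFA rule: move the server x minimizing w_t(current - x + r) + d(x, r).
            x = min(current, key=lambda s: w[(current - {s}) | {r}] + d[s][r])
            online_cost += d[x][r]
            current = (current - {x}) | {r}
    return online_cost
```

By the Koutsoupias–Papadimitriou result, the cost this returns is at most 2k - 1 times the optimal offline cost, up to an additive constant.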

The k-server conjecture also has a version for randomized algorithms, which asks whether there exists a randomized algorithm with competitive ratio O(log k) in any arbitrary metric space (with at least k + 1 points). [2] In 2011, a randomized algorithm with competitive bound Õ(log²k log³n) was found, where n is the number of points in the metric space. [3] [4] In 2017, a randomized algorithm with competitive bound O(log⁶ k) was announced, [5] but was later retracted. [6] In 2022 it was shown that the randomized version of the conjecture is false. [2] [7] [8]

Example

To make the problem more concrete, imagine sending customer support technicians to customers when they have trouble with their equipment. In our example problem there are two technicians, Mary and Noah, serving three customers, in San Francisco, California; Washington, DC; and Baltimore, Maryland. As a k-server problem, the servers are the technicians, so k = 2 and this is a 2-server problem. Washington and Baltimore are 35 miles (56 km) apart, while San Francisco is 3,000 miles (4,800 km) away from both, and initially Mary and Noah are both in San Francisco.

Consider an algorithm for assigning servers to requests that always assigns the closest server to the request, and suppose that each weekday morning the customer in Washington needs assistance while each weekday afternoon the customer in Baltimore needs assistance, and that the customer in San Francisco never needs assistance. Then, our algorithm will assign one of the servers (say Mary) to the Washington area, after which she will always be the closest server and will be assigned to every subsequent request. Thus, every day our algorithm incurs the cost of traveling between Washington and Baltimore and back, 70 miles (110 km). After a year of this request pattern, the algorithm will have incurred 20,500 miles (33,000 km) of travel: 3,000 to send Mary to the East Coast, and 17,500 for the trips between Washington and Baltimore. On the other hand, an optimal adversary who knows the future request schedule could have sent Mary and Noah to Washington and Baltimore respectively, paying 6,000 miles (9,700 km) of travel once but then avoiding any future travel costs. The competitive ratio of our algorithm on this input is 20,500/6,000, or approximately 3.4, and by adjusting the parameters of this example the competitive ratio of this algorithm can be made arbitrarily large.
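
This arithmetic is easy to check by simulation. The short script below is our own and assumes a 250-weekday year; it replays the request pattern against the closest-server policy and compares it with the offline strategy of stationing one technician in each East Coast city. It prints 20,465 rather than 20,500, because the very first afternoon leg is a single 35-mile trip rather than a full 70-mile round trip, a detail the round figure above glosses over; the ratio still comes out to roughly 3.4.

```python
# Our check of the example's arithmetic (distances in miles).
SF, DC, BAL = "San Francisco", "Washington", "Baltimore"
table = {(DC, BAL): 35, (SF, DC): 3000, (SF, BAL): 3000}

def d(a, b):
    return 0 if a == b else table.get((a, b), table.get((b, a)))

requests = [DC, BAL] * 250   # assumed: 250 weekdays; DC each morning, Baltimore each afternoon
servers = [SF, SF]           # Mary and Noah both start in San Francisco

greedy_cost = 0
for r in requests:
    i = min(range(2), key=lambda j: d(servers[j], r))  # always move the closest server (ties go to Mary)
    greedy_cost += d(servers[i], r)
    servers[i] = r

offline_cost = d(SF, DC) + d(SF, BAL)  # send one technician to each East Coast city, once
print(greedy_cost, offline_cost, round(greedy_cost / offline_cost, 2))
# prints: 20465 6000 3.41
```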

Thus we see that always assigning the closest server can be far from optimal. On the other hand, it seems foolish for an algorithm that does not know future requests to send both of its technicians away from San Francisco, as the next request could be in that city and it would have to send someone back immediately. So it seems that it is difficult or impossible for a k-server algorithm to perform well relative to its adversary. However, for the 2-server problem, there exists an algorithm that always has a total travel distance of at most twice the adversary's distance. The k-server conjecture states that similar solutions exist for problems with any larger number of technicians.

Notes

  1. Manasse, Mark; McGeoch, Lyle; Sleator, Daniel (1988). "Competitive algorithms for on-line problems". Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing (STOC '88). New York, NY, USA: Association for Computing Machinery. pp. 322–333. doi:10.1145/62212.62243. ISBN 978-0-89791-264-8. S2CID 13356897.
  2. Bubeck, Sébastien; Coester, Christian; Rabani, Yuval (June 20–23, 2023). "The Randomized k-Server Conjecture Is False!". Proceedings of the 55th Annual ACM Symposium on Theory of Computing (STOC '23). Orlando, FL, USA: ACM. p. 14. arXiv:2211.05753. doi:10.1145/3564246.3585132.
  3. Bansal, Nikhil; Buchbinder, Niv; Madry, Aleksander; Naor, Joseph (2015). "A polylogarithmic-competitive algorithm for the k-server problem" (PDF). Journal of the ACM, 62 (5): A40:1–A40:49. arXiv:1110.1580. doi:10.1145/2783434. MR 3424197. S2CID 15668961.
  4. "Another Annoying Open Problem". 19 November 2011.
  5. Lee, James R. (2017). "Fusible HSTs and the Randomized k-Server Conjecture". arXiv:1711.01789.
  6. "Erratum: Fusible HSTs and the randomized k-server conjecture".
  7. Goldberg, Madison (2023-11-20). "Researchers Refute a Widespread Belief About Online Algorithms". Quanta Magazine. Retrieved 2023-11-26.
  8. The video presentation of the paper "The Randomized k-Server Conjecture Is False!" at STOC 2023 is available on YouTube.

Related Research Articles

Daniel Dominic Kaplan Sleator is a Professor of Computer Science at Carnegie Mellon University, Pittsburgh, United States. In 1999, he won the ACM Paris Kanellakis Award for the splay tree data structure.


In combinatorial mathematics, the Steiner tree problem, or minimum Steiner tree problem, named after Jakob Steiner, is an umbrella term for a class of problems in combinatorial optimization. While Steiner tree problems may be formulated in a number of settings, they all require an optimal interconnect for a given set of objects and a predefined objective function. One well-known variant, which is often used synonymously with the term Steiner tree problem, is the Steiner tree problem in graphs. Given an undirected graph with non-negative edge weights and a subset of vertices, usually referred to as terminals, the Steiner tree problem in graphs requires a tree of minimum total edge weight that contains all terminals (but may include additional vertices). Further well-known variants are the Euclidean Steiner tree problem and the rectilinear minimum Steiner tree problem.

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.


The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory.

In computational complexity theory, the unique games conjecture is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time, but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.

The Christofides algorithm or Christofides–Serdyukov algorithm is an algorithm for finding approximate solutions to the travelling salesman problem, on instances where the distances form a metric space. It is an approximation algorithm that guarantees that its solutions will be within a factor of 3/2 of the optimal solution length, and is named after Nicos Christofides and Anatoliy I. Serdyukov, who discovered it independently in 1976.

In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently. It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.

Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.

Competitive analysis is a method invented for analyzing online algorithms, in which the performance of an online algorithm is compared to the performance of an optimal offline algorithm that can view the sequence of requests in advance. An algorithm is competitive if its competitive ratio—the ratio between its performance and the offline algorithm's performance—is bounded. Unlike traditional worst-case analysis, where the performance of an algorithm is measured only for "hard" inputs, competitive analysis requires that an algorithm perform well both on hard and easy inputs, where "hard" and "easy" are defined by the performance of the optimal offline algorithm.

Task systems are mathematical objects used to model the set of possible configurations of an online algorithm. They were introduced by Borodin, Linial and Saks (1992) to model a variety of online problems. A task system determines a set of states and costs to change states. Task systems receive as input a sequence of requests, each of which assigns processing costs to the states. The objective of an online algorithm for task systems is to create a schedule that minimizes the overall cost incurred due to processing the tasks with respect to the states and due to the cost of changing states.

Byzantine fault tolerant protocols are algorithms that are robust to arbitrary types of failures in distributed algorithms. The Byzantine agreement protocol is an essential part of this task. A constant-time quantum version of the Byzantine protocol also exists.

Algorithmic game theory (AGT) is an area in the intersection of game theory and computer science, with the objective of understanding and designing algorithms in strategic environments.

In computer science, the ski rental problem is a name given to a class of problems in which there is a choice between continuing to pay a repeating cost or paying a one-time cost which eliminates or reduces the repeating cost.

Michael Ezra Saks is an American mathematician. He is currently the Department Chair of the Mathematics Department at Rutgers University (2017–) and from 2006 until 2010 was director of the Mathematics Graduate Program at Rutgers University. Saks received his Ph.D. from the Massachusetts Institute of Technology in 1980 after completing his dissertation titled Duality Properties of Finite Set Systems under his advisor Daniel J. Kleitman.

The List Update or the List Access problem is a simple model used in the study of competitive analysis of online algorithms. Given a set of items in a list where the cost of accessing an item is proportional to its distance from the head of the list (e.g., a linked list), and a request sequence of accesses, the problem is to come up with a strategy of reordering the list so that the total cost of accesses is minimized. The reordering can be done at any time but incurs a cost. The standard model includes two reordering actions: free exchanges, which move the accessed item closer to the front of the list at no cost, and paid exchanges, which swap any two adjacent items at a cost of one.

Amos Fiat is an Israeli computer scientist, a professor of computer science at Tel Aviv University. He is known for his work in cryptography, online algorithms, and algorithmic game theory.


Anna R. Karlin is an American computer scientist, the Microsoft Professor of Computer Science & Engineering at the University of Washington.


In the mathematical theory of metric spaces, ε-nets, ε-packings, ε-coverings, uniformly discrete sets, relatively dense sets, and Delone sets are several closely related definitions of well-spaced sets of points, and the packing radius and covering radius of these sets measure how well-spaced they are. These sets have applications in coding theory, approximation algorithms, and the theory of quasicrystals.

A prior-independent mechanism (PIM) is a mechanism in which the designer knows that the agents' valuations are drawn from some probability distribution, but does not know the distribution.

In theoretical computer science and metric geometry, the GNRS conjecture connects the theory of graph minors, the stretch factor of embeddings, and the approximation ratio of multi-commodity flow problems. It is named after Anupam Gupta, Ilan Newman, Yuri Rabinovich, and Alistair Sinclair, who formulated it in 2004.
