Ski rental problem

In computer science, the ski rental problem is a name given to a class of problems in which there is a choice between continuing to pay a repeating cost and paying a one-time cost that eliminates or reduces the repeating cost. [1]

The problem

Many online problems contain a sub-problem called the rent-or-buy problem. Given a choice between an expensive one-time (up-front) cost and a less expensive repeating cost, with no knowledge of how the future will play out, at what point is it better to pay the up-front cost and avoid the continued repeating cost?

Consider a person who decides to go skiing, but for an undecided number of days. Renting skis costs $1 per day, whereas buying a pair of skis costs $10. If the person knows in advance how many days they want to ski, then the break-even point is 10 days: for fewer than 10 days renting is preferable, whereas for more than 10 days buying is preferable. However, with no advance knowledge of how long one will be skiing, the decision of when to buy is unclear. A good algorithm will minimize the ratio of the cost incurred when the number of days is not known in advance to the cost that would have been incurred had the number of days been known. [2] Ski rental [3] [4] is one example of this class of problem.
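
As a minimal illustration (a Python sketch using the hypothetical $1-per-day rental and $10 purchase prices from the example above), the cost with advance knowledge is simply the cheaper of renting throughout or buying immediately, and it stops growing at the 10-day break-even point:

    RENT_PER_DAY = 1    # dollars per day of rented skis
    BUY_PRICE = 10      # one-time cost of buying skis

    # Cost when the number of ski days is known in advance:
    # rent the whole time, or buy on day one, whichever is cheaper.
    def cost_with_advance_knowledge(days):
        return min(days * RENT_PER_DAY, BUY_PRICE)

    print([cost_with_advance_knowledge(d) for d in (5, 9, 10, 11, 20)])
    # [5, 9, 10, 10, 10]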

The break-even algorithm

The break-even algorithm instructs one to rent for 9 days and buy skis on the morning of day 10 if one is still up for skiing. [5] If one has to stop skiing during the first 9 days, it costs the same as what one would pay if one had known the number of days one would go skiing. If one ends up skiing for 10 or more days, one's cost is $19, which is 90% more than what one would have paid if one had known the number of days in advance. This is the worst case for the break-even algorithm.

The break-even algorithm is known to be the best deterministic algorithm for this problem.
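
As a rough check of the worst case above (a Python sketch with the same hypothetical prices, not taken from the cited sources), the break-even strategy can be compared against the cost with advance knowledge by brute force over possible numbers of ski days:

    RENT_PER_DAY, BUY_PRICE = 1, 10

    def break_even_cost(days):
        # Rent for the first 9 days; buy on the morning of day 10 if still skiing.
        if days < BUY_PRICE:
            return days * RENT_PER_DAY
        return (BUY_PRICE - 1) * RENT_PER_DAY + BUY_PRICE

    def cost_with_advance_knowledge(days):
        return min(days * RENT_PER_DAY, BUY_PRICE)

    worst = max(break_even_cost(d) / cost_with_advance_knowledge(d)
                for d in range(1, 1000))
    print(worst)   # 1.9 -- the 90% worst-case overpayment described above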

The randomized algorithm

A person can flip a fair coin. If it comes up heads, she rents for 7 days and buys skis on the morning of day 8; otherwise, she rents for 9 days and buys skis on the morning of day 10. This is an instance of a randomized algorithm. The expected cost is at most 80% more than what the person would pay if she had known the number of days she would go skiing, regardless of how many days she skis. In particular, if the person skis for 10 days, her expected cost is (1/2)(7 + 10) + (1/2)(9 + 10) = 18 dollars, only an 80% excess instead of 90%.

A randomized algorithm can be understood as a composition of different algorithms, each of which occurs with a given probability. We define the expected competitive ratio on a given instance i as:

$\bar{c}_i = \sum_{j} p_j \, c_{i,j}$, where $c_{i,j}$ is the competitive ratio for instance i, given branch j, and $p_j$ is the probability that branch j is executed.

Consequently, the competitive ratio of a randomized algorithm is given by the worst value of $\bar{c}_i$ over all given instances. In the case of the coin-flipping ski rental, we note that the randomized algorithm has 2 possible branches: if the coin comes up heads, we buy on day 8; otherwise, we buy on day 10. We may call the branches j = 1 and j = 2, respectively. With the $1 rental and $10 purchase prices above,

$c_{i,1} = c_{i,2} = 1$, for $i \le 7$,

$c_{i,1} = \frac{7 + 10}{i} = \frac{17}{i}$ and $c_{i,2} = 1$, for $8 \le i \le 9$,

$c_{i,1} = \frac{17}{10}$ and $c_{i,2} = \frac{9 + 10}{10} = \frac{19}{10}$, for $i \ge 10$, and

$\bar{c}_i = \frac{1}{2} \cdot \frac{17}{10} + \frac{1}{2} \cdot \frac{19}{10} = 1.8$, for $i \ge 10$.

Therefore, the competitive ratio of the randomized ski-rental coin flipping algorithm is 1.8.
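
The figures above can be verified with a short Python sketch (again assuming the hypothetical $1-per-day rental and $10 purchase prices): it computes the expected cost of the coin-flipping strategy for every number of ski days and takes the worst ratio against the cost with advance knowledge.

    RENT_PER_DAY, BUY_PRICE = 1, 10

    def strategy_cost(days, buy_day):
        # Rent until buy_day, then buy on the morning of buy_day if still skiing.
        if days < buy_day:
            return days * RENT_PER_DAY
        return (buy_day - 1) * RENT_PER_DAY + BUY_PRICE

    def expected_coin_flip_cost(days):
        # Heads: buy on day 8; tails: buy on day 10 (probability 1/2 each).
        return 0.5 * strategy_cost(days, 8) + 0.5 * strategy_cost(days, 10)

    print(expected_coin_flip_cost(10))    # 18.0 dollars for exactly ten ski days
    print(max(expected_coin_flip_cost(d) / min(d * RENT_PER_DAY, BUY_PRICE)
              for d in range(1, 1000)))   # 1.8 -- the competitive ratio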

The best randomized algorithm against an oblivious adversary is to choose some day i at random according to the distribution p below, rent for i-1 days, and buy skis on the morning of day i if one is still up for skiing. Karlin et al. [3] first presented this algorithm with the distribution

$p_i = \begin{cases} \left(\frac{b-1}{b}\right)^{b-i} \frac{1}{b\left(1 - \left(1 - \frac{1}{b}\right)^{b}\right)} & \text{if } i \le b, \\ 0 & \text{if } i > b, \end{cases}$

where buying skis costs b dollars and renting costs $1 per day. Its expected cost is at most $\frac{e}{e-1} \approx 1.58$ times what one would pay if one had known the number of days one would go skiing. No randomized algorithm can do better.
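
The distribution above can also be checked numerically. The following Python sketch (with the purchase price b expressed in units of the daily rent, and b = 10 as in the running example) builds the distribution and evaluates the worst expected competitive ratio it yields; for finite b this equals 1/(1 - (1 - 1/b)^b), which tends to e/(e-1) ≈ 1.58 as b grows.

    b = 10   # buying costs b; renting costs 1 per day

    # p[i-1]: probability of buying on the morning of day i, for 1 <= i <= b.
    weights = [((b - 1) / b) ** (b - i) for i in range(1, b + 1)]
    normalizer = b * (1 - (1 - 1 / b) ** b)   # equals sum(weights)
    p = [w / normalizer for w in weights]

    def cost(days, buy_day):
        # Rent until buy_day, then buy if still skiing.
        return days if days < buy_day else (buy_day - 1) + b

    def expected_ratio(days):
        expected_cost = sum(p[i - 1] * cost(days, i) for i in range(1, b + 1))
        return expected_cost / min(days, b)

    print(max(expected_ratio(d) for d in range(1, 1000)))   # about 1.535 for b = 10
    print(1 / (1 - (1 - 1 / b) ** b))                       # the same bound; -> e/(e-1) as b grows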

Applications

See also

References

  1. Blum, Avrim. "COS 521: Advanced Algorithm Design, Lecture 24: Online Algorithms" (PDF). Princeton University. Retrieved 2022-07-16.
  2. Seiden, Steven S. "A guessing game and randomized online algorithms." Annual ACM Symposium on Theory of Computing, 2000. http://portal.acm.org/citation.cfm?id=335385
  3. Karlin, A. R.; Manasse, M. S.; McGeoch, L. A.; Owicki, S. "Competitive randomized algorithms for non-uniform problems." In Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, 22–24 January 1990, pp. 301–309. Also in Algorithmica, 11(6): 542–571, 1994. http://courses.csail.mit.edu/6.895/fall03/handouts/papers/karlin.pdf
  4. Mathieu, Claire. Brown University. Lecture note: http://www.cs.brown.edu/~claire/Talks/skirental.pdf
  5. Karlin, A. R.; Manasse, M.; Rudolph, L.; Sleator, D. "Competitive snoopy caching." Algorithmica, 3(1): 79–119, 1988.
  6. Dooly, Daniel R.; Goldman, Sally A.; Scott, Stephen D. (1998). "TCP Dynamic Acknowledgment Delay: Theory and Practice (Extended Abstract)". In Vitter, Jeffrey Scott (ed.). Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, May 23–26, 1998. ACM. pp. 389–398. doi:10.1145/276698.276792.
  7. Karlin, Anna R.; Kenyon, Claire; Randall, Dana. "Dynamic TCP acknowledgement and other stories about e/(e-1)." Thirty-Third Annual ACM Symposium on Theory of Computing (STOC), 2001; also in Algorithmica. http://www.cs.brown.edu/people/claire/Publis/ACKpaper.ps