Random-sampling mechanism

A random-sampling mechanism (RSM) is a truthful mechanism that uses sampling in order to achieve an approximately-optimal profit in prior-free mechanisms and prior-independent mechanisms.

Suppose we want to sell some items in an auction and achieve maximum profit. The crucial difficulty is that we do not know how much each buyer is willing to pay for an item. If we know, at least, that the valuations of the buyers are random variables with some known probability distribution, then we can use a Bayesian-optimal mechanism. But often we do not know the distribution. In this case, random-sampling mechanisms provide an alternative solution.

RSM in large markets

Market-halving scheme

When the market is large, the following general scheme can be used: [1] :341–344

  1. The buyers are asked to reveal their valuations.
  2. The buyers are split into two sub-markets, M_L ("left") and M_R ("right"), using simple random sampling: each buyer is assigned to one of the two sides by a toss of a fair coin.
  3. In each sub-market M_s, an empirical distribution function F_s is calculated.
  4. The Bayesian-optimal mechanism (Myerson's mechanism) is applied in sub-market M_L with distribution F_R, and in M_R with F_L.

This scheme is called "Random-Sampling Empirical Myerson" (RSEM).

The declaration of each buyer has no effect on the price he has to pay; the price is determined by the buyers in the other sub-market. Hence, it is a dominant strategy for the buyers to reveal their true valuation. In other words, this is a truthful mechanism.

Intuitively, by the law of large numbers, if the market is sufficiently large then the empirical distributions are sufficiently similar to the real distributions, so we expect the RSEM to attain near-optimal profit. However, this is not necessarily true in all cases. It has been proved to be true in some special cases.

The simplest case is a digital goods auction. There, step 4 is simple and consists only of calculating the optimal price in each sub-market. The optimal price in M_L is applied to M_R and vice versa. Hence, the mechanism is called "Random-Sampling Optimal Price" (RSOP). This case is simple because it always produces feasible allocations, i.e., it is always possible to apply the price calculated in one side to the other side. This is not necessarily the case with physical goods.
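For concreteness, the RSOP mechanism for a digital-goods auction can be sketched in a few lines of Python. This is a toy illustration, not code from the cited papers; the function names are chosen here, and an empty side of the split yields a price of 0 (a corner case the sketch does not handle specially):

```python
import random

def optimal_single_price(bids):
    """Best fixed price for a list of bids: maximizes price * (number of bids >= price).
    Only the bid values themselves need to be considered as candidate prices."""
    best_price, best_profit = 0.0, 0.0
    for p in bids:
        profit = p * sum(1 for b in bids if b >= p)
        if profit > best_profit:
            best_price, best_profit = p, profit
    return best_price

def rsop(bids, rng=random):
    """Random-Sampling Optimal Price for a digital-goods auction.
    Returns {bidder index: price paid}. Each winner pays the optimal price
    computed from the *other* half, so no bidder can influence her own price,
    which is what makes the mechanism truthful."""
    left, right = [], []
    for i in range(len(bids)):
        (left if rng.random() < 0.5 else right).append(i)
    p_left = optimal_single_price([bids[i] for i in left])    # price learned on the left
    p_right = optimal_single_price([bids[i] for i in right])  # price learned on the right
    payments = {}
    for i in right:            # the left side's price is offered to the right side
        if bids[i] >= p_left:
            payments[i] = p_left
    for i in left:             # the right side's price is offered to the left side
        if bids[i] >= p_right:
            payments[i] = p_right
    return payments
```

For example, `rsop([4.0] * 8, random.Random(0))` charges every bidder the price 4.0 learned on the opposite half of the split.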

Even in a digital goods auction, RSOP does not necessarily converge to the optimal profit. It converges only under the bounded-valuations assumption: for each buyer, the valuation of the item is between 1 and h, where h is some constant. The convergence rate of RSOP to optimality depends on h. The convergence rate also depends on the number of possible "offers" considered by the mechanism. [2]

To understand what an "offer" is, consider a digital goods auction in which the valuations of the buyers, in dollars, are known to be bounded in [1, h]. If the mechanism uses only whole-dollar prices, then there are only h possible offers.

In general, the optimization problem may involve much more than a single price. For example, we may want to sell several different digital goods, each of which may have a different price. So instead of a "price", we speak of an "offer". We assume that there is a global set G of possible offers. For every offer g in G and agent i, g(i) is the amount that agent i pays when presented with the offer g. In the digital-goods example, G is the set of possible prices: for every possible price p, there is a function g_p such that g_p(i) is either 0 (if v_i < p) or p (if v_i ≥ p).

For every set S of agents, the profit of the mechanism from presenting the offer g to the agents in S is:

    g(S) = Σ_{i in S} g(i)

and the optimal profit of the mechanism is:

    OPT(S) = max_{g in G} g(S)
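These definitions can be made concrete with a short sketch (an illustration only; the function names are chosen here). `price_offer(p)` builds the digital-goods offer g_p, `profit` computes g(S), and `optimal_profit` computes OPT(S):

```python
def price_offer(p):
    """The digital-goods offer g_p: agent i pays p iff her value v_i >= p."""
    def g(v_i):
        return p if v_i >= p else 0
    return g

def profit(g, values, S):
    """g(S): the profit from presenting offer g to the agents in S."""
    return sum(g(values[i]) for i in S)

def optimal_profit(offers, values, S):
    """OPT(S): the maximum profit over all offers in the global set."""
    return max(profit(g, values, S) for g in offers)
```

For example, with values [3, 7, 2] and candidate prices 1, 2, 3, 7, the offer g_3 earns 3 + 3 = 6 from the two agents valuing the good at 3 and 7, while the optimal offer is g_7, earning 7.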

The RSM calculates, for each sub-market M_s, an optimal offer g_s = argmax_{g in G} g(M_s).

The offer g_L is applied to the buyers in M_R, i.e.: each buyer in M_R for whom g_L(i) > 0 receives the offered allocation and pays g_L(i); each buyer in M_R for whom g_L(i) = 0 receives nothing and pays nothing. Similarly, the offer g_R is applied to the buyers in M_L.

Profit-oracle scheme

Profit oracle is another RSM scheme that can be used in large markets. [3] It is useful when we do not have direct access to the agents' valuations (e.g., for privacy reasons); all we can do is run an auction and observe its expected profit. In a single-item auction with n bidders, where each bidder's value is one of at most K possible values (selected at random with unknown probabilities), the maximum-revenue auction can be learned using a number of calls to the profit oracle that is polynomial in n and K.
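The profit-oracle setting can be illustrated with a toy sketch (the names and the single-bidder posted-price setup are simplifying assumptions made here, not Elkind's construction): the learner never sees individual valuations, only the average profit of posting a given price, and spends one oracle call per candidate price:

```python
def make_profit_oracle(samples):
    """Toy profit oracle for a single-bidder posted-price auction: given a
    reserve price, it returns the average profit over a batch of sampled
    valuations. This models 'run the auction and observe its expected profit'
    without granting direct access to any individual valuation."""
    def oracle(reserve):
        return sum(reserve for v in samples if v >= reserve) / len(samples)
    return oracle

def learn_best_reserve(oracle, candidates):
    """Learn the revenue-maximizing price using one oracle call per candidate."""
    return max(candidates, key=oracle)
```

For example, if valuations are 1 or 3 with equal frequency, posting price 1 earns 1.0 on average, posting 3 earns 1.5, and the learner selects 3.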

RSM in small markets

RSMs were also studied in a worst-case scenario in which the market is small. In such cases, we want to get an absolute, multiplicative approximation factor, that does not depend on the size of the market.

Market-halving, digital goods

The first research in this setting was for a digital goods auction with single-parameter utility. [4]

For the Random-Sampling Optimal-Price mechanism, increasingly tight approximation guarantees have been proved: the profit of RSOP is at least 1/7600 of the optimal single-price profit, [5] at least 1/15 of it, [6] and at least 1/4.68 of it. [7]

Single-sample, physical goods

When the agents' valuations satisfy a technical regularity condition (called monotone hazard rate), it is possible to attain a constant-factor approximation of the maximum-profit auction using the following mechanism: [8]

  1. Sample a single agent uniformly at random; this agent does not participate in the auction.
  2. Run the auction on the remaining agents, using the sampled agent's valuation as a reserve price.

The profit of this mechanism is at least a fraction (n − 1)/(4n) of the optimum, where n is the number of agents. This is 1/8 when there are two agents, and it grows towards 1/4 as the number of agents grows. This scheme can be generalized to handle constraints on the subsets of agents that can win simultaneously (e.g., when there is only a finite number of items). It can also handle agents with different attributes (e.g., young vs. old bidders).
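The single-sample mechanism can be sketched for a single-item auction as follows (an illustrative simplification, not the full mechanism of [8]; the function name is chosen here). One randomly sampled agent sets the reserve price; the rest compete in a Vickrey (second-price) auction with that reserve:

```python
import random

def single_sample_auction(bids, rng=random):
    """Single-sample mechanism (sketch): one agent, picked uniformly at random,
    is excluded and her bid becomes the reserve price; the remaining agents
    compete in a second-price auction with that reserve.
    Returns (winner index, price), or (None, 0) if no one meets the reserve."""
    reserve_agent = rng.randrange(len(bids))
    reserve = bids[reserve_agent]
    rest = [i for i in range(len(bids)) if i != reserve_agent]
    rest.sort(key=lambda i: bids[i], reverse=True)  # highest remaining bid first
    if not rest or bids[rest[0]] < reserve:
        return None, 0                               # reserve not met: no sale
    runner_up = bids[rest[1]] if len(rest) > 1 else 0
    return rest[0], max(reserve, runner_up)          # pay the larger of reserve and second bid
```

If the sampled agent bids 3 while the others bid 10 and 8, the highest remaining bidder wins and pays max(3, 8) = 8; if the sampled agent happens to be the highest bidder, the item may go unsold.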

Sample complexity

The sample complexity of a random-sampling mechanism is the number of agents it needs to sample in order to attain a reasonable approximation of the optimal welfare.

The results in [8] imply several bounds on the sample complexity of revenue maximization in single-item auctions. [9]

The situation becomes more complicated when the agents are not i.i.d. (each agent's value is drawn from a different regular distribution) and the goods have limited supply. When the agents come from different distributions, the sample complexity of (1 − ε)-approximation of the optimal expected revenue in single-item auctions is polynomial in n and 1/ε. [9]

Morgenstern and Roughgarden [11] discuss arbitrary auctions with single-parameter utility agents (not only single-item auctions), and arbitrary auction mechanisms (not only specific auctions). Based on known results about sample complexity, they show that the number of samples required to approximate the maximum-revenue auction from a given class of auctions grows with the pseudo-VC dimension of the class and with the inverse of the desired approximation accuracy.

In particular, they consider a class of simple auctions called t-level auctions: auctions with t reserve prices (a Vickrey auction with a single reserve price is a 1-level auction). They prove that the pseudo-VC dimension of this class is O(nt log nt), which immediately translates to a bound on its generalization error and sample complexity. They also prove bounds on the representation error of this class of auctions.

Envy

A disadvantage of the random-sampling mechanism is that it is not envy-free. E.g., if the optimal prices in the two sub-markets M_L and M_R are different, then buyers in each sub-market are offered a different price; in other words, there is price discrimination. This is inevitable in the following sense: no single-price strategyproof auction can approximate the optimal profit. [12]


References

  1. Vazirani, Vijay V.; Nisan, Noam; Roughgarden, Tim; Tardos, Éva (2007). Algorithmic Game Theory (PDF). Cambridge, UK: Cambridge University Press. ISBN 0-521-87282-0.
  2. Balcan, Maria-Florina; Blum, Avrim; Hartline, Jason D.; Mansour, Yishay (2008). "Reducing mechanism design to algorithm design via machine learning". Journal of Computer and System Sciences. 74 (8): 1245. doi:10.1016/j.jcss.2007.08.002.
  3. Elkind, Edith (2007). Designing and Learning Optimal Finite Support Auctions. SODA.
  4. Goldberg, Andrew V.; Hartline, Jason D. (2001). "Competitive Auctions for Multiple Digital Goods". Algorithms — ESA 2001. Lecture Notes in Computer Science. 2161. p. 416. CiteSeerX 10.1.1.8.5115. doi:10.1007/3-540-44676-1_35. ISBN 978-3-540-42493-2.
  5. Goldberg, Andrew V.; Hartline, Jason D.; Karlin, Anna R.; Saks, Michael; Wright, Andrew (2006). "Competitive auctions". Games and Economic Behavior. 55 (2): 242. doi:10.1016/j.geb.2006.02.003.
  6. Feige, Uriel; Flaxman, Abraham; Hartline, Jason D.; Kleinberg, Robert (2005). "On the Competitive Ratio of the Random Sampling Auction". Internet and Network Economics. Lecture Notes in Computer Science. 3828. p. 878. CiteSeerX 10.1.1.136.2094. doi:10.1007/11600930_89. ISBN 978-3-540-30900-0.
  7. Alaei, Saeed; Malekian, Azarakhsh; Srinivasan, Aravind (2009). "On random sampling auctions for digital goods". Proceedings of the tenth ACM conference on Electronic commerce - EC '09. p. 187. CiteSeerX 10.1.1.758.3195. doi:10.1145/1566374.1566402. ISBN 9781605584584.
  8. Dhangwatnotai, Peerapong; Roughgarden, Tim; Yan, Qiqi (2015). "Revenue maximization with a single sample". Games and Economic Behavior. 91: 318–333. doi:10.1016/j.geb.2014.03.011.
  9. Cole, Richard; Roughgarden, Tim (2014). "The sample complexity of revenue maximization". Proceedings of the 46th Annual ACM Symposium on Theory of Computing - STOC '14. p. 243. arXiv:1502.00963. doi:10.1145/2591796.2591867. ISBN 9781450327107.
  10. Hartline, Jason D.; Roughgarden, Tim (2009). "Simple versus optimal mechanisms". Proceedings of the tenth ACM conference on Electronic commerce - EC '09. p. 225. doi:10.1145/1566374.1566407. ISBN 9781605584584.
  11. Morgenstern, Jamie; Roughgarden, Tim (2015). On the Pseudo-Dimension of Nearly Optimal Auctions. NIPS. arXiv:1506.03684. Bibcode:2015arXiv150603684M.
  12. Goldberg, Andrew V.; Hartline, Jason D. (2003). "Competitiveness via consensus". Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms. SODA '03. Retrieved 7 January 2016.