Multiparty communication complexity

In theoretical computer science, multiparty communication complexity is the study of communication complexity in the setting where there are more than 2 players.

In the traditional two-party communication game, introduced by Yao (1979),[1] two players, P1 and P2, attempt to compute a Boolean function f(x1, x2).

Player P1 knows the value of x2, P2 knows the value of x1, but Pi does not know the value of xi, for i = 1, 2.

In other words, the players know the other's variables, but not their own. The minimum number of bits that must be communicated by the players to compute f is the communication complexity of f, denoted by κ(f).

The multiparty communication game, defined in 1983,[2] is a powerful generalization of the 2-party case: here the players know all the others' inputs, except their own. Because of this property, this model is sometimes called the "number on the forehead" model: if the players were seated around a round table, each wearing their own input on the forehead, then every player would see all the others' inputs, except their own.

The formal definition is as follows: k players, P1, P2, ..., Pk, intend to compute a Boolean function f(x1, x2, ..., xn): {0,1}^n → {0,1}.

On the set S = {x1, x2, ..., xn} of variables there is a fixed partition A of k classes, A1, A2, ..., Ak, and player Pi knows every variable, except those in Ai, for i = 1, 2, ..., k. The players have unlimited computational power, and they communicate with the help of a blackboard, viewed by all players.

The aim is to compute f(x1, x2, ..., xn), such that at the end of the computation, every player knows this value. The cost of the computation is the number of bits written onto the blackboard for the given input x and partition A = (A1, A2, ..., Ak). The cost of a multiparty protocol is the maximum number of bits communicated for any x from the set {0,1}^n and the given partition A. The k-party communication complexity, κ^A(f), of a function f, with respect to partition A, is the minimum of the costs of those k-party protocols which compute f. The k-party symmetric communication complexity of f is defined as

κ_k(f) = max_A κ^A(f),

where the maximum is taken over all k-partitions A of the set S = {x1, x2, ..., xn}.
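
To make the model concrete, here is a small Python sketch of how the players' views arise from a partition (the function and variable names are ours, for illustration only): player Pi sees every input bit except those in its own class Ai.

    # Toy illustration of the "number on the forehead" model: k players,
    # each missing exactly the variables of its own class Ai, so each
    # view contains every input bit except those "on the player's forehead".

    def nof_views(x, partition):
        """x: dict mapping variable name -> bit; partition: list of k sets
        of variable names (the classes A1, ..., Ak)."""
        return [{v: b for v, b in x.items() if v not in cls}
                for cls in partition]

    # Example: n = 4 variables, k = 2 players -- the classical 2-party game.
    x = {"x1": 1, "x2": 0, "x3": 1, "x4": 1}
    partition = [{"x1", "x2"}, {"x3", "x4"}]   # A1, A2
    views = nof_views(x, partition)
    print(views[0])   # P1 sees only x3, x4
    print(views[1])   # P2 sees only x1, x2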

Upper and lower bounds

For a general upper bound, both for two and more players, let us suppose that A1 is one of the smallest classes of the partition A1, A2, ..., Ak. Then P1 can compute any Boolean function of S with |A1| + 1 bits of communication: P2 writes down the |A1| bits of A1 on the blackboard; P1 reads them, and computes and announces the value f(x). Since the smallest class of any k-partition contains at most ⌊n/k⌋ variables, the following can be written:

κ_k(f) ≤ ⌊n/k⌋ + 1.
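
A minimal Python sketch of this generic protocol (hypothetical helper names; we assume the smallest class plays the role of A1) shows where the |A1| + 1 bits go: |A1| bits to publish A1, plus one bit for the announced value.

    # Sketch of the generic (|A1| + 1)-bit protocol described above.

    def trivial_protocol(x, partition, f):
        """x: dict variable -> bit; partition: list of classes A1, ..., Ak;
        f: maps the full assignment to a bit. Returns (value, bits used)."""
        blackboard = []
        A1 = min(partition, key=len)          # a smallest class
        for v in sorted(A1):                  # P2 writes A1's bits; P2 knows
            blackboard.append(x[v])           # them, since P2 misses only A2
        value = f(x)                          # P1 now knows the whole input
        blackboard.append(value)              # P1 announces f(x): 1 more bit
        return value, len(blackboard)

    # Parity of 4 variables split between 2 players: |A1| + 1 = 3 bits.
    x = {"x1": 1, "x2": 0, "x3": 1, "x4": 1}
    partition = [{"x1", "x2"}, {"x3", "x4"}]
    print(trivial_protocol(x, partition, lambda x: sum(x.values()) % 2))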

The Generalized Inner Product function (GIP)[3] is defined as follows: let a1, a2, ..., ak be n-bit vectors, and let A be the n × k matrix whose columns are the vectors a1, a2, ..., ak. Then GIP(a1, a2, ..., ak) is the number of the all-1 rows of matrix A, taken modulo 2. In other words, if the vectors a1, ..., ak correspond to the characteristic vectors of k subsets of an n-element base set, then GIP corresponds to the parity of the size of the intersection of these subsets.
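
The definition translates directly into code; the following sketch (ours, for illustration) counts the all-1 rows of the n × k matrix modulo 2.

    # GIP(a1, ..., ak): the number of all-1 rows of the n x k matrix with
    # columns a1, ..., ak, taken modulo 2.

    def gip(vectors):
        """vectors: list of k equal-length 0/1 lists (the columns of A)."""
        rows = zip(*vectors)                        # the n rows of A
        return sum(all(row) for row in rows) % 2    # all-1 rows, mod 2

    # k = 3, n = 4: the first and last rows are all-1, so GIP = 2 mod 2 = 0.
    a1 = [1, 0, 1, 1]
    a2 = [1, 1, 0, 1]
    a3 = [1, 1, 1, 1]
    print(gip([a1, a2, a3]))   # 0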

It was shown[3] that

κ_k(GIP) ≥ c · n/4^k

with a constant c > 0.

An upper bound on the multiparty communication complexity of GIP shows[4] that

κ_k(GIP) ≤ c · kn/2^k

with a constant c > 0.

For a general Boolean function f, one can bound the multiparty communication complexity of f in terms of its L1 norm,[5] that is, the sum of the absolute values of its Fourier coefficients.[6]
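
For concreteness, here is a brute-force sketch of this L1 (spectral) norm, assuming the standard ±1 Fourier convention; the convention and all names are our choices for illustration, not fixed by the cited papers.

    # Brute-force L1 spectral norm: the sum of |f-hat(S)| over all subsets S,
    # where f-hat(S) = E_x[ f(x) * prod_{i in S} (-1)^(x_i) ] and f maps
    # {0,1}^n to {-1, +1}. Exponential in n; for illustration only.

    from itertools import product

    def l1_norm(f, n):
        points = list(product((0, 1), repeat=n))
        total = 0.0
        for S in points:                  # S as a 0/1 indicator of a subset
            coeff = sum(f(x) * (-1) ** sum(x[i] for i in range(n) if S[i])
                        for x in points) / len(points)
            total += abs(coeff)
        return total

    # Parity has a single nonzero Fourier coefficient, so its L1 norm is 1.
    parity = lambda x: (-1) ** sum(x)
    print(l1_norm(parity, 2))   # 1.0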

Multiparty communication complexity and pseudorandom generators

A construction of a pseudorandom generator for logspace was based on the BNS lower bound for the GIP function.[3]

References

  1. Yao, Andrew Chi-Chih (1979), "Some complexity questions related to distributive computing", Proceedings of the 11th ACM Symposium on Theory of Computing (STOC '79), pp. 209–213, doi:10.1145/800135.804414, S2CID 999287.
  2. Chandra, Ashok K.; Furst, Merrick L.; Lipton, Richard J. (1983), "Multi-party protocols", Proceedings of the 15th ACM Symposium on Theory of Computing (STOC '83), pp. 94–99, doi:10.1145/800061.808737, ISBN 978-0897910996, S2CID 18180950.
  3. Babai, László; Nisan, Noam; Szegedy, Márió (1992), "Multiparty protocols, pseudorandom generators for logspace, and time-space trade-offs", Journal of Computer and System Sciences, 45 (2): 204–232, doi:10.1016/0022-0000(92)90047-M, MR 1186884.
  4. Grolmusz, Vince (1994), "The BNS lower bound for multi-party protocols is nearly optimal", Information and Computation, 112 (1): 51–54, doi:10.1006/inco.1994.1051, MR 1277711.
  5. Bruck, Jehoshua; Smolensky, Roman (1992), "Polynomial threshold functions, AC0 functions, and spectral norms", SIAM Journal on Computing, 21 (1): 33–42, doi:10.1137/0221003, MR 1148813.
  6. Grolmusz, V. (1999), "Harmonic analysis, real approximation, and the communication complexity of Boolean functions", Algorithmica, 23 (4): 341–353, CiteSeerX 10.1.1.53.6729, doi:10.1007/PL00009265, MR 1673395, S2CID 26779824.
