Tag Archives: computational complexity

computer science expository math

Testing Equality in Networks

Yesterday, I went to an interesting talk by Klim Efremenko about testing equality in networks. The talk was based on his joint paper with Noga Alon and Benny Sudakov. The basic problem is as follows. Suppose there is a network with \(k\) nodes, and each node \(v\) has an input in the form of an \(n\)-bit string \(M_v \in \{0, 1\}^n\). All of the nodes in the network want to verify that all of their strings are equal, i.e., that \(M_v = M_u\) for all nodes \(v\) and \(u\), and they may only communicate with their neighbors in the network. How many bits must be communicated in total?

For concreteness, suppose the network is a triangle. That is, there are three nodes–played, say, by Alice, Bob, and Carol–and each pair of nodes can communicate. Let’s first consider some upper bounds on the number of bits sent to check that all inputs are the same. The simplest thing to do would be for each player to send their input to the other two players. After this step, everyone knows everyone else’s input, so each player can check that all of the inputs are the same. The total communication required for this protocol is \(6 n\), as each player sends their entire input to the two other players.

It is not hard to see that this procedure is wasteful. It is enough that one player–say Alice–knows whether all of the inputs are the same, and she can tell the other players as much. Thus, we can make a more efficient procedure. Bob and Carol both send their inputs to Alice, and Alice checks if both inputs are the same as her input. She sends a single bit back to Bob and Carol depending on if all inputs are equal. The total communication required for this procedure is \(2 n + 2\). So we’ve improved the communication cost by a factor of \(3\)–not too shabby. But can we do better?
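To make the procedure concrete, here is a minimal Python sketch of this star-shaped protocol. The function name and the explicit bit accounting are mine, purely for illustration:

```python
def star_equality(alice: str, bob: str, carol: str):
    """Return (are all inputs equal?, total bits communicated)."""
    n = len(alice)
    bits = 0
    # Bob and Carol each send their full n-bit input to Alice: 2n bits.
    bits += 2 * n
    equal = (alice == bob) and (alice == carol)
    # Alice sends one bit back to each of Bob and Carol: 2 bits.
    bits += 2
    return equal, bits

# Total communication is 2n + 2, as claimed.
result, cost = star_equality("1011", "1011", "1011")
```

Running this on equal 4-bit inputs reports a cost of \(2 \cdot 4 + 2 = 10\) bits.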

Let’s talk a little bit about lower bounds. To this end, consider the case where there are only two players, Alice and Bob. As in the previous paragraph, they can check that their inputs are equal using \(n + 1\) bits of communication. To see that \(n\) bits are in fact required, we can use the “fooling set” technique from communication complexity. Suppose to the contrary that Alice and Bob can check equality with \(b < n\) bits of communication. For any \(x \in \{0, 1\}^n\), consider the case where both Alice and Bob have input \(x\). Let \(m_x\) be the “transcript” of Alice and Bob’s conversation when they both have \(x\) as their input. By assumption, the transcript contains at most \(b < n\) bits, so there are at most \(2^b < 2^n\) distinct transcripts. Thus, by the pigeonhole principle, there are two values \(x \neq y\) that give the same transcript: \(m_x = m_y\). Now suppose Alice is given input \(x\) and Bob has input \(y\). In this case, when Alice and Bob communicate, they will generate the same transcript \(m_x (= m_y)\). Since the communication transcript determines whether Alice and Bob conclude that they have the same input, they will both be convinced their inputs are the same even though \(x \neq y\), contradicting the correctness of the protocol! Therefore, we must have \(b \geq n\)–Alice and Bob need to communicate at least \(n\) bits to check equality.
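The pigeonhole step can be watched in action on a toy example. The sketch below (my own illustration) takes a would-be equality protocol whose transcripts are too short and hunts for a fooling pair by collecting transcript collisions:

```python
from itertools import product

def find_fooling_pair(transcript, n):
    """transcript(x) = the conversation when BOTH players hold x.
    Returns x != y with identical transcripts; such a pair must exist
    whenever the transcript carries fewer than n bits, by pigeonhole."""
    seen = {}
    for bits in product("01", repeat=n):
        x = "".join(bits)
        t = transcript(x)
        if t in seen:
            return seen[t], x   # collision: a fooling pair
        seen[t] = x
    return None

# A broken 2-bit "protocol" for n = 3: Alice announces the parity of her
# input, Bob replies with his; they accept iff the parities match.
parity_transcript = lambda x: (x.count("1") % 2, x.count("1") % 2)

x, y = find_fooling_pair(parity_transcript, 3)
# x != y, yet both all-equal runs look identical, so the protocol also
# accepts the mixed instance where Alice holds x and Bob holds y.
```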

To obtain lower bounds for three players, we can use the same ideas as the two player lower bound of \(n\) bits. Assume for a moment that Bob and Carol know they have the same input. How much must Alice communicate with Bob/Carol to verify that her input is the same as theirs? By the argument in the previous paragraph, Alice must exchange a total of \(n\) bits with Bob/Carol. Similarly, Bob must exchange \(n\) bits with Alice/Carol, and Carol must exchange \(n\) bits with Alice/Bob. So the total number of bits exchanged must be at least \(3 n / 2\). The factor of \(1/2\) occurs because we count each bit exchanged between, say Alice and Bob, twice: once when we consider the communication between Alice and Bob/Carol and once when we consider the communication between Bob and Alice/Carol.

As it stands we have a gap in the communication necessary to solve the problem: we have an upper bound of \(2 n + 2\) bits, and a lower bound of \(3 n / 2\) bits. Which of these bounds is correct? Or is the true answer somewhere in between? Up to this point, the techniques we’ve used to understand the problem are fairly routine (at least to those who have studied some communication complexity). In what follows–the contribution of Klim, Noga, and Benny–we will see that it is possible to match the lower bound of \(3 n / 2\) bits using a very clever encoding of the inputs.

The main tool that the authors employ is the existence of a certain family of graphs, which I will refer to as Ruzsa-Szemeredi graphs. Here is the key lemma:

Lemma (Ruzsa-Szemeredi, 1978). For every \(m\), there exists a tri-partite graph \(H\) on \(3 m\) vertices which contains \(m^2 / 2^{O(\sqrt{\log m})}\) triangles such that no two triangles share a common edge.

The lemma says that there is a graph with a lot of triangles such that each triangle is determined by any one of its edges. To see how this lemma is helpful (or even relevant) to the problem of testing equality, consider the following modification of the original problem. Instead of being given a string in \(\{0, 1\}^n\), each player is given a triangle in the graph \(H\) from the lemma. Each triangle consists of \(3\) vertices, but the condition of the lemma ensures that each triangle is determined by any two of the three vertices–i.e., a single edge of the triangle. Thus, the three players can verify that they all have the same triangle by sharing information in the following way: Alice sends Bob the first vertex in her triangle; Bob sends Carol the second vertex in his triangle, and Carol sends Alice the third vertex in her triangle.

Miraculously, this procedure reveals enough information for Alice, Bob, and Carol to determine if they were all given the same triangle! Suppose, for example, that Alice and Bob have the same triangle \(T\), but Carol has a triangle \(T' \neq T\). By the condition of the lemma, \(T'\) and \(T\) can share at most one vertex, so they must differ in at least two vertices. In particular, it must be the case that the second or third (or both) vertices of \(T\) and \(T'\) differ. If the second vertices of \(T\) and \(T'\) differ, then Carol will see that her triangle is not the same as Bob’s triangle (since Bob sends Carol his second vertex). If the third vertices of \(T\) and \(T'\) differ, then Alice will see that her triangle differs from Carol’s, as Carol sends Alice her triangle’s third vertex.

Now consider the case where Alice, Bob, and Carol are all given different triangles \(T_a = \{u_a, v_a, w_a\}\), \(T_b = \{u_b, v_b, w_b\}\), and \(T_c = \{u_c, v_c, w_c\}\). Suppose that all three players accept the outcome of the communication protocol. This means that \(u_a = u_b\) (since Alice sends \(u_a\) to Bob), and similarly \(v_b = v_c\) and \(w_c = w_a\). In particular this implies that \(H\) contains the edges
\{u_a, v_b\} \in T_b, \{v_b, w_a\} \in T_c, \{w_a, u_a\} \in T_a.
Together these three edges form a triangle \(T'\) which is present in the graph \(H\). However, observe that \(T'\) shares an edge \(\{w_a, u_a\}\) with \(T_a\), contradicting the property of \(H\) guaranteed by the lemma. Therefore, Alice, Bob, and Carol cannot all accept if they are each given different triangles! Thus they can determine if they were all given the same triangle by each sending a single vertex of their triangle; each vertex name requires \(\log m\) bits (up to an additive constant), for roughly \(3 \log m\) bits of total communication.
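The protocol itself is short enough to write in code. The Python sketch below is my own illustration: for simplicity it uses the trivial family of vertex-disjoint triangles \(T_i = (u_i, v_i, w_i)\) rather than a genuine Ruzsa-Szemeredi graph (this family has only \(m\) triangles, far too few to encode strings efficiently, but no two of its triangles share an edge, so the protocol behaves identically):

```python
def triangle_protocol(Ta, Tb, Tc):
    """Each argument is a triangle (first, second, third vertex).
    Returns True iff all three players accept."""
    # Alice -> Bob: her first vertex; Bob -> Carol: his second vertex;
    # Carol -> Alice: her third vertex.
    bob_accepts = (Ta[0] == Tb[0])
    carol_accepts = (Tb[1] == Tc[1])
    alice_accepts = (Tc[2] == Ta[2])
    return bob_accepts and carol_accepts and alice_accepts

m = 5
# Toy edge-disjoint family: the i-th triangle uses fresh vertices u_i, v_i, w_i.
triangles = [(("u", i), ("v", i), ("w", i)) for i in range(m)]

# Completeness: identical triangles are always accepted.
assert all(triangle_protocol(T, T, T) for T in triangles)
# Soundness (for this family): any disagreement is caught by someone.
assert not any(triangle_protocol(triangles[i], triangles[j], triangles[k])
               for i in range(m) for j in range(m) for k in range(m)
               if not (i == j == k))
```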

To solve the original problem–where each player is given an \(n\)-bit string–we encode each string as a triangle in a suitable graph \(H\). In order to make this work, we need to choose \(m\) (the number of vertices in \(H\)) large enough that there is one triangle for each possible string. Since there are \(2^n\) possible strings, using the lemma, we need to take \(m\) sufficiently large that
m^2 / 2^{O(\sqrt{\log m})} \geq 2^n.
Taking the logarithm of both sides, we find that
\log m - O(\sqrt{\log m}) \geq \frac 1 2 n
is large enough. Thus, to send the identity of a single vertex of the triangle which encodes a player’s input, they must send \(\log m\) bits, which is roughly \(\frac 1 2 n\). Therefore, the total communication in this protocol is roughly \(\frac 3 2 n\), which matches the lower bound. We summarize this result in the following theorem.

Theorem. Suppose \(3\) players each hold \(n\)-bit strings. Then
\frac 3 2 n + O(\sqrt{n})
bits of communication are necessary and sufficient to test if all three strings are equal.

The paper goes on to generalize this result, but it seems that all of the main ideas are already present in the \(3\) player case.

computer science expository math musings

John Nash, Cryptography, and Computational Complexity

Recently, the brilliant mathematician John Nash and his wife were killed in a car crash. While Nash was probably most famous for his pioneering work on game theory and his portrayal in Ron Howard’s popular film A Beautiful Mind, he also worked in the field of cryptography. Several years ago, the NSA declassified several letters from 1955 between Nash and the NSA wherein Nash describes some ideas for an encryption/decryption scheme. While the NSA was not interested in the particular scheme devised by Nash, it seems that Nash foresaw the importance of computational complexity in the field of cryptography. In the letters, Nash states:

Consider the enciphering process with a finite “key,” operating on binary messages. Specifically, we can assume the process [is] described by a function
y_i = F(\alpha_1, \alpha_2, \ldots, \alpha_r; x_i, x_{i-1}, \ldots, x_{i-n})
where the \(\alpha\)’s, \(x\)’s and \(y\)’s are mod 2 and if \(x_i\) is changed with the other \(x\)’s and \(\alpha\)’s left fixed then \(y_i\) is changed. The \(\alpha\)’s denote the “key” containing \(r\) bits of information. \(n\) is the maximum span of the “memory” of the process…

…We see immediately that in principle the enemy needs very little information to break down the process. Essentially, as soon as \(r\) bits of enciphered message have been transmitted the key is about determined. This is no security, for a practical key should not be too long. But this does not consider how easy or difficult it is for the enemy to make the computation determining the key. If this computation, although always possible in principle, were sufficiently long at best the process could still be secure in a practical sense.
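The enciphering model in the passage above is easy to make concrete. The Python sketch below is my own minimal instance of it, not Nash’s actual machine: the key bits \(\alpha_j\) act as taps selecting which of the \(n\) previous plaintext bits are XORed into the output, and since \(y_i = x_i \oplus (\text{feedback from the past})\), changing \(x_i\) always changes \(y_i\), exactly as Nash requires:

```python
def encipher(key, x):
    """y_i = x_i XOR (sum over j of key[j] * x_{i-1-j}), all mod 2.
    key is a list of r = n tap bits; x is a list of plaintext bits."""
    y = []
    for i, xi in enumerate(x):
        feedback = 0
        for j, kj in enumerate(key):
            if i - 1 - j >= 0:
                feedback ^= kj & x[i - 1 - j]
        y.append(xi ^ feedback)
    return y

def decipher(key, y):
    """Invert encipher: recover x_i = y_i XOR feedback, where the feedback
    depends only on plaintext bits already recovered."""
    x = []
    for i, yi in enumerate(y):
        feedback = 0
        for j, kj in enumerate(key):
            if i - 1 - j >= 0:
                feedback ^= kj & x[i - 1 - j]
        x.append(yi ^ feedback)
    return x

key = [1, 0, 1]                     # r = 3 key bits, memory span n = 3
x = [1, 0, 1, 1, 0, 1, 0, 0]
assert decipher(key, encipher(key, x)) == x   # roundtrip recovers the message
```

As Nash observes, once roughly \(r\) ciphertext bits are known the key is in principle determined; the only hope for security is that finding it is computationally expensive.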

Nash goes on to say that

…a logical way to classify the enciphering process is the way in which the computation length for the computation on the key increases with increasing length of the key… Now my general conjecture is as follows: For almost all sufficiently complex types of enciphering…the mean key computation length increases exponentially with the length of the key.

The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable.

To my knowledge, Nash’s letters are the earliest reference to using computational complexity to achieve practical cryptography. The idea is that while it is theoretically possible to decrypt an encrypted message without the key, doing so requires a prohibitive amount of computational resources. Interestingly, Nash’s letters predate an (in)famous 1956 letter from Kurt Gödel to John von Neumann which is widely credited as being the first reference to the “P vs NP problem.” However the essential idea of the P vs NP problem is nascent in Nash’s conjecture: there are problems whose solution can efficiently be verified, but finding such a solution is computationally intractable. Specifically, a message can easily be decrypted if one knows the key, but finding the key to decrypt the message is hopelessly difficult.

The P vs NP problem was only formalized 16 years after Nash’s letters by Stephen Cook in his seminal paper, The complexity of theorem-proving procedures. In fact, Nash’s conjecture is strictly stronger than the P vs NP problem–its formulation is more akin to the exponential time hypothesis, which was only formulated in 1999!

Concerning Nash’s conjecture, he was certainly aware of the difficulty of its proof:

The nature of this conjecture is such that I cannot prove it, even for a special type of cipher. Nor do I expect it to be proven. But this does not destroy its significance. The probability of the truth of the conjecture can be guessed at on the basis of experience with enciphering and deciphering.

Indeed, the P vs NP problem remains among the most notorious open problems in mathematics and theoretical computer science.

PDF of Nash’s Letters to the NSA.

computer science expository math

Shorter Certificates for Set Disjointness

Suppose two players, Alice and Bob, each hold equal sized subsets of \([n] = \{1, 2, \ldots, n\}\). A third party, Carole, wishes to convince Alice and Bob that their subsets are disjoint, i.e., their sets have no elements in common. How efficiently can Carole prove to Alice and Bob that their sets are disjoint?

To formalize the situation, suppose \(k\) is an integer with \(1 < k < n/2\). Alice holds a set \(A \subseteq [n]\) with \(|A| = k\) and similarly Bob holds \(B \subseteq [n]\) with \(|B| = k\). We assume that Carole sees the sets \(A\) and \(B\), but Alice and Bob have no information about each other’s sets. We allow Carole to send messages to Alice and Bob, and we allow Alice and Bob to communicate with each other. Our goal is to minimize the total amount of communication between Carole, Alice, and Bob. In particular, we will consider communication protocols of the following form:

  1. Carole produces a certificate or proof that \(A \cap B = \emptyset\) and sends this certificate to Alice and Bob.
  2. Alice and Bob individually verify that Carole’s certificate is valid with respect to their individual inputs. Alice and Bob then send each other (very short) messages saying whether or not they accept Carole’s proof. If they both accept the proof, they can be certain that their sets are disjoint.

In order for the proof system described above to be valid, it must satisfy the following two properties:

Completeness: If \(A \cap B = \emptyset\) then Carole can send a certificate that Alice and Bob both accept.

Soundness: If \(A \cap B \neq \emptyset\) then any certificate that Carole sends must be rejected by at least one of Alice and Bob.

Before giving a “clever” solution to this communication problem, we describe a naive solution. Since Carole sees \(A\) and \(B\), her proof of their disjointness could simply be to send Alice and Bob the pair \(C = (A, B)\). Then Alice verifies the validity of the certificate \(C\) by checking that the two sets in \(C\) are disjoint and that her input \(A\) is equal to the first term in \(C\); similarly Bob checks disjointness and that \(B\) is equal to the second term. Clearly, if \(A\) and \(B\) are disjoint, Alice and Bob will both accept \(C\), while if \(A\) and \(B\) intersect, Alice or Bob will reject every certificate that Carole could send: a certificate that passes the disjointness check must misrepresent at least one of the two sets.

Let us quickly analyze the efficiency of this protocol. The certificate that Carole sends consists of a pair of \(k\)-subsets of \([n]\). The naive encoding of simply listing the elements of \(A\) and \(B\) requires \(2 k \log n\) bits — each \(i \in A \cup B\) requires \(\log n\) bits, and there are \(2 k\) such indices in the list. In fact, even if Carole is much cleverer in her encoding of the sets \(A\) and \(B\), she cannot compress the proof significantly for information theoretic reasons. Indeed, there are
{n \choose k}{n-k \choose k}
distinct certificates that Carole must be able to send, hence her message must be of length at least
\log {n \choose k} \geq \log ((n/k)^k) = k \log n - k \log k.
Is it possible for Carole, Alice, and Bob to devise a more efficient proof system for disjointness?
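Here is a quick Python sketch (function names mine) of the naive certificate protocol together with a numerical check of the counting bound:

```python
import math

def carole_certificate(A, B):
    """Carole's naive certificate: the pair (A, B) itself."""
    return (frozenset(A), frozenset(B))

def alice_verifies(A, cert):
    CA, CB = cert
    # Alice checks the pair is disjoint and that the first term is her set.
    return frozenset(A) == CA and not (CA & CB)

def bob_verifies(B, cert):
    CA, CB = cert
    return frozenset(B) == CB and not (CA & CB)

n, k = 100, 10
A, B = set(range(10)), set(range(50, 60))
cert = carole_certificate(A, B)
assert alice_verifies(A, cert) and bob_verifies(B, cert)

# Carole's message must distinguish C(n,k) * C(n-k,k) disjoint pairs,
# so it needs at least log2 of that many bits.
lower = math.log2(math.comb(n, k) * math.comb(n - k, k))
naive = 2 * k * math.log2(n)   # cost of listing both sets explicitly
assert lower < naive
```

Note that if the sets intersect, the disjointness check forces any accepted certificate to differ from \((A, B)\), so Alice’s or Bob’s equality check fails, which is exactly the soundness argument above.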


computer science expository math musings

The Aura of Interactive Proofs

In his essay The Work of Art in the Age of Mechanical Reproduction, Walter Benjamin introduces the idea that original artwork has an aura — some ineffable something that the work’s creator imbues into the work, but which is lost in reproductions made by mechanical means. There is something unique about an original work. Let us imagine that Walter is able to read the aura of a work of art, or sense its absence. Thus, he has the seemingly magical ability to tell original art from mechanical forgery.

Andy Warhol is skeptical of Walter’s claims. Andy doesn’t believe in the aura. And even if the aura does exist, he sincerely doubts that Walter is able to detect its presence. In an attempt to unmask Walter for the fraud he undoubtedly is, Andy hatches a cunning plan to catch Walter in a lie. By hand, he paints an original painting, then using the most advanced technology available, makes a perfect replica of the original. Although the replica looks exactly like the original to the layman (and even Andy himself), according to Walter, there is something missing in the replica.

Andy’s original plan was to present Walter with the two seemingly identical paintings and simply ask which is the original. He soon realized, however, that this approach would be entirely unsatisfactory for his peace of mind. If Walter picked the true original, Andy still couldn’t be entirely convinced of Walter’s powers: maybe Walter just guessed and happened to guess correctly! How can Andy change his strategy in order to (1) be more likely to catch Walter in his lie (if in fact he is lying) and (2) be more convinced of Walter’s supernatural abilities if indeed he is telling the truth?
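Readers who know interactive proofs will recognize where this is going: Andy can repeat a randomized challenge many times. Here is a small Python sketch (my own framing of the story) of the repeated experiment:

```python
import random

def run_rounds(walter, k, rng):
    """Each round Andy secretly flips a coin and shows Walter either the
    original (0) or the replica (1); Walter must say which it is.
    Returns True iff Walter answers correctly in all k rounds."""
    for _ in range(k):
        shown = rng.randrange(2)
        if walter(shown) != shown:
            return False          # Walter named the wrong painting: caught!
    return True

honest = lambda shown: shown                    # really detects the aura
guess_rng = random.Random(2)
guesser = lambda shown: guess_rng.randrange(2)  # no powers, just guessing

assert run_rounds(honest, 20, random.Random(1))  # honest Walter always survives
# A guesser survives each independent round with probability 1/2, so he
# survives all 20 rounds with probability 2**-20: less than one in a million.
```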


computer science

PSIZE contains uncountably many languages

PSIZE is the set of languages recognized by boolean circuits of size polynomial in the input size. Here we give a very simple proof that PSIZE is uncountable.

This is in contrast to the countable set of languages recognized by Turing machines. Indeed, there are only countably many Turing machines, each of which recognizes only one language.

The reason that we expect PSIZE to recognize more languages than Turing machines is the non-uniformity in the definition of PSIZE: we can define a different circuit for inputs of each size. On the other hand, the same Turing machine deciding a language \(L\) must work on all input sizes. In particular, the description of a Turing machine is finite, while the description of a family of circuits can be infinite. We use precisely this property to demonstrate the uncountability of PSIZE.

Here, we will prove that, in fact, SIZE(2), the set of languages recognized by circuits with 2 gates, is uncountable. Let \(a \in [0, 1]\) be an arbitrary real number in the unit interval with binary expansion
a = 0.a_1 a_2 a_3 \ldots.
Define the language \(L_a\) by
L_a = \{x \in \{0,1\}^n : a_n = 1\}.
That is, \(L_a\) contains all strings of length \(n\) whenever \(a_n = 1\). Since there are uncountably many \(a \in [0, 1]\), there are uncountably many distinct languages \(L_a\). We claim that each \(L_a\) is recognized by a family of circuits of size 2. To see this, we explicitly construct the family: if \(a_n = 0\) then the \(n\)-th circuit rejects all strings of length \(n\); if \(a_n = 1\) the \(n\)-th circuit accepts all strings of length \(n\). It is easy to create these two circuits using only two gates apiece. For the first, take \(x_1 \wedge \neg x_1\), and for the second take \(x_1 \vee \neg x_1\), where the input is \(x = x_1 x_2 \ldots x_n\). Thus SIZE(2) is uncountable.
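The construction is simple enough to write down directly. Here is a Python sketch (my own illustration) of the two 2-gate circuits and the resulting family for a truncated expansion of \(a\):

```python
# The two 2-gate circuits from the proof, as functions of the first input
# bit (each is one NOT gate plus one AND/OR gate):
reject_all = lambda x: bool(x[0]) and not x[0]   # x1 AND (NOT x1): always False
accept_all = lambda x: bool(x[0]) or not x[0]    # x1 OR (NOT x1): always True

def circuit_family(a_bits):
    """One 2-gate circuit per input length: the n-th circuit accepts every
    length-n string iff a_n = 1."""
    return [accept_all if bit else reject_all for bit in a_bits]

family = circuit_family([1, 0, 1])   # a = 0.101..., truncated for illustration

def in_L_a(x):
    return family[len(x) - 1](x)

assert in_L_a([1]) and in_L_a([0])   # a_1 = 1: every length-1 string accepted
assert not in_L_a([0, 1])            # a_2 = 0: every length-2 string rejected
assert in_L_a([1, 1, 0])             # a_3 = 1: every length-3 string accepted
```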

As a consequence of the uncountability of SIZE(2), there must exist (many!) languages in PSIZE which are not recognized by any Turing machine. This is a rather neat result. It turns out that every language in P (the set of languages decided by Turing machines running in polynomial time) is in PSIZE. However, it is conjectured that there are languages decided by Turing machines which are not contained in PSIZE. In fact, there is strong evidence that even NP is not contained in PSIZE. So perhaps despite PSIZE’s power to recognize uncountably many languages, it is not so powerful as to be uninteresting.