Category Archives: computer science

computer science expository math

Ask the Experts

Each day, you must make a binary decision: whether to buy or sell a stock, whether to bring an umbrella to work, whether to take surface streets or the freeway on your commute, etc. Each time you make the wrong decision, you are penalized. Fortunately you have help in the form of \(n\) “expert” advisors, each of whom suggests a choice for you. However, some experts are more reliable than others, and you do not initially know which is the most reliable. How can you find the most reliable expert while incurring the fewest penalties? How few penalties can you incur compared to the best expert?

To make matters simple, first assume that the best expert never gives the wrong advice. How quickly can we find that expert?

Here is a simple procedure that allows us to find the best expert. Choose an expert \(a\) arbitrarily on the first day, and follow their advice. Continue following \(a\)’s advice each day until they make a mistake. After your chosen expert errs, choose a different expert and follow their advice until they err. Continue this process indefinitely. Since we’ve assumed that the best expert never errs, you will eventually follow their advice.

Observe that in the above procedure, each day we either get the correct advice, or we eliminate one of the errant experts. Thus, the total number of errors we make before finding the best expert is at most \(n - 1\). Is it possible to find the best expert with fewer penalties?

Consider the following “majority procedure.” Each day, look at all \(n\) experts’ advice and take the majority opinion (choosing arbitrarily if there is a tie). At the end of the day, fire all incorrect experts, and continue the next day with the remaining experts. To analyze this procedure, observe that each day, either the majority answer is correct (and you are not penalized) or you fire at least half of the experts. Thus, the number of penalties you incur is at most \(\log n\) before you find the best expert. That is pretty good!
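The majority procedure is easy to simulate. Here is a minimal Python sketch (the function name and input format are my own, and for simplicity ties are resolved in favor of the correct answer):

```python
def majority_procedure(advice, truth):
    """Follow the majority of surviving experts; fire everyone who errs.
    advice[t][i] is expert i's tip on day t, truth[t] the correct answer.
    Returns (penalties incurred, indices of surviving experts)."""
    alive = set(range(len(advice[0])))
    penalties = 0
    for tips, correct in zip(advice, truth):
        votes = sum(1 for i in alive if tips[i] == correct)
        if 2 * votes < len(alive):  # a strict majority was wrong today
            penalties += 1
        alive = {i for i in alive if tips[i] == correct}  # fire the errant
    return penalties, alive
```

Each penalty strictly more than halves the set of surviving experts, so with a perfect expert among the \(n\), the penalty count never exceeds \(\log_2 n\).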

Question 1 Is it possible to find the best expert while incurring fewer than \(\log n\) penalties?

To this point, things have been relatively easy because we assumed that the best expert never errs. Thus, as soon as an expert makes an incorrect prediction, we know they are not the best expert. In practice, however, it is unlikely that the best expert is infallible. What if we are only guaranteed that the best expert errs at most \(m\) times?

It is not hard to see that the majority procedure can be generalized to account for an expert that errs at most \(m\) times. The idea is to follow the majority opinion, but to allow an expert to err \(m\) times before firing them. We call the resulting procedure the “\(m + 1\) strikes majority procedure.” Each day, follow the majority opinion (again breaking ties arbitrarily). Every time an expert is wrong, they receive a strike. If an expert ever exceeds \(m\) strikes, they are immediately fired.

To analyze the \(m + 1\) strikes procedure, observe that if the majority opinion is ever wrong, at least half of the remaining experts receive strikes. Since the total number of strikes of all remaining experts cannot exceed \(m n\), after \(2 m + 1\) majority errors, we know that at least half of the experts have been fired. Thus, we find the best expert with penalty at most \((2 m + 1) \log n\). (I suspect that a more clever analysis would give something like \((m + 1) \log n\), thus matching the majority procedure for \(m = 0\).) Can we do better than the \(m + 1\) strikes majority procedure?

One objection that one might have with the \(m + 1\) strikes procedure is that an expert must err \(m + 1\) times before any action is taken against them. Another reasonable approach would be to have a “confidence” parameter for each expert. Each time the expert errs, their confidence is decreased. When taking a majority decision, each expert’s prediction is then weighted by our confidence in that expert. If we adopt this strategy, we must specify how the confidence is decreased with each error.

Littlestone and Warmuth proposed the following “weighted majority algorithm” to solve this problem. Initially, each expert \(i\) is assigned a weight \(w_i = 1\), where a larger weight indicates a greater confidence in expert \(i\). Each time expert \(i\) errs, \(w_i\) is multiplied by \(1/2\). Each day, we take as our prediction the weighted majority: each expert \(i\)’s prediction carries \(w_i\) votes, and we choose the option receiving the greatest sum of votes.

By way of analysis, consider what happens each time the majority answer is wrong. Let \(W_t\) denote the total weight of all experts after \(t\) penalties have been incurred. Observe that initially we have \(W_0 = n\). When we incur the \((t + 1)\)-st penalty, at least half of the \(W_t\) total votes were cast in favor of the wrong answer. Since the total errant weight is cut in half, we find that \(W_{t + 1} \leq (3/4) W_t\). Thus, \(W_t \leq (3/4)^t n\). On the other hand, we know that some expert \(i\) errs at most \(m\) times. Therefore, \(2^{-m} \leq w_i \leq W_t\) for all \(t\). Combining this with the previous expression gives
\[
2^{-m} \leq W_t \leq (3/4)^t n.
\]
Taking logs of both sides of this expression, we find that
\[
t \leq \frac{m + \log n}{\log(4/3)}.
\]
It seems incredible to me that this simple and elegant algorithm gives such a powerful result. Even if we initially have no idea which expert is the best, we can still perform nearly as well as the best expert as long as \(m \gg \log n\). (Note that \(1 / \log(4/3) \approx 2.4\).) Further, we do not even need to know \(m\) in advance to apply this procedure!
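The weighted majority algorithm itself is only a few lines of code. Here is a sketch (the function name and input format are my own; predictions and outcomes are binary):

```python
def weighted_majority(predictions, outcomes):
    """Littlestone-Warmuth weighted majority (sketch).
    predictions[t][i] is expert i's prediction on day t; outcomes[t] is
    the correct answer. Returns the number of penalties incurred."""
    n = len(predictions[0])
    w = [1.0] * n  # initial confidence w_i = 1 for every expert
    mistakes = 0
    for preds, outcome in zip(predictions, outcomes):
        # Tally the weighted vote for each of the two options.
        vote_one = sum(wi for wi, p in zip(w, preds) if p == 1)
        vote_zero = sum(w) - vote_one
        guess = 1 if vote_one >= vote_zero else 0
        if guess != outcome:
            mistakes += 1
        # Halve the weight of every expert who erred today.
        w = [wi / 2 if p != outcome else wi for wi, p in zip(w, preds)]
    return mistakes
```

By the analysis above, the return value is at most \((m + \log n)/\log(4/3)\), where \(m\) is the best expert’s error count.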

Question 2 Is it possible to improve upon the bound of the weighted majority algorithm?

computer science edutainment math teaching

Numberphile video on the Josephus Problem

Recently, the following Numberphile video on the Josephus Problem has been making the rounds on math-related social media. I watched the video, and I thought Daniel Erman did a remarkably good job at explaining how to solve a mathematical problem. Daniel’s approach is similar to the techniques described in Polya’s “How to Solve It.” Yet the particular story that Daniel tells also has an appealing narrative arc.

Daniel’s video adheres to the following principles, which I think are fairly universal in mathematical problem solving.

  1. Start with a concrete problem. If the problem has a nice story to go along with it, all the better. The Josephus Problem is a great example of a concrete mathematical question. Given a method by which the soldiers kill one another and the number of soldiers, where should Josephus stand to be the last living soldier?
  2. Formalize and generalize the problem. What is special about the number 41? The mechanism by which the soldiers kill one another works just as well for any number of soldiers, so consider the problem for \(n\) soldiers.
  3. Consider simpler versions of the general problem. Now that we have the general \(n\)-soldier Josephus problem, we can easily work out a few examples when \(n\) is small. To quote Polya, “If you can’t solve a problem, then there is an easier problem you can’t solve: find it.” This process of finding simpler and simpler related problems until you find one you can solve is to me the most important general problem solving method.
  4. Solve enough of the “simple” problems until you see a pattern. Solving the simpler problems gives one both data and intuition that will allow you to conjecture about a general solution.
  5. Generalize the pattern as much as you can so that it fits the examples you’ve solved. Even if the pattern doesn’t give a complete answer (for example, Daniel’s observation that if \(n\) is a power of \(2\), soldier \(1\) is the last living soldier), even a partial solution is likely valuable to understanding a complete solution.
  6. Prove your generalization of the pattern to obtain a solution to the general problem. Often, this doesn’t happen all at once. The Numberphile video happens to give a particularly elegant solution in a very short period of time. Don’t get discouraged when not everything falls into place the first time you try to solve the problem!
  7. Apply your general solution to the original concrete problem.
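For the Josephus problem itself, the steps above can be checked with a short script. Here is a sketch of the every-second-soldier variant from the video, comparing a direct simulation against the power-of-two pattern (function names are my own):

```python
from collections import deque

def josephus(n):
    """Simulate n soldiers in a circle; soldier 1 kills soldier 2,
    soldier 3 kills soldier 4, and so on. Returns the survivor."""
    circle = deque(range(1, n + 1))
    while len(circle) > 1:
        circle.rotate(-1)  # the soldier holding the sword steps to the back
        circle.popleft()   # the next soldier in the circle is eliminated
    return circle[0]

def josephus_closed(n):
    """Closed form: write n = 2**a + l with l < 2**a; survivor is 2l + 1."""
    l = n - (1 << (n.bit_length() - 1))
    return 2 * l + 1
```

For the original 41 soldiers, both give position 19 (since \(41 = 32 + 9\) and \(2 \cdot 9 + 1 = 19\)).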

In my own research, I follow the strategies above. In particular, Polya’s advice regarding finding and solving simpler problems (steps 3 and 4) is maybe the most valuable single piece of problem solving advice I know of. I think math could be characterized as the art of generalizing simple observations. Often, the simple observations arise by wasting a lot of paper trying to solve simple problems.

The narrative outlined in the steps above is also valuable from a pedagogic standpoint. By starting with a tangible (if slightly morbid) problem, the student/participant immediately has some intuition about the problem before beginning formal analysis. In my experience, one of the biggest challenges students face is connecting abstract statements and theorems to concrete problems. By introducing the concrete problem first and using this problem to motivate the formal development needed to solve the problem, students can begin using their imagination earlier in the problem solving process. This makes learning more interactive, memorable, and effective.

computer science expository math

Testing Equality in Networks

Yesterday, I went to an interesting talk by Klim Efremenko about testing equality in networks. The talk was based on his joint paper with Noga Alon and Benny Sudakov. The basic problem is as follows. Suppose there is a network with \(k\) nodes, and each node \(v\) has an input in the form of an \(n\)-bit string \(M_v \in \{0, 1\}^n\). All of the nodes in the network want to verify that all of their strings are equal, i.e., that \(M_v = M_u\) for all nodes \(v\) and \(u\), and they may only communicate with their neighbors in the network. How many bits must be communicated in total?

For concreteness, suppose the network is a triangle. That is, there are three nodes–say played by Alice, Bob, and Carol–and each pair of nodes can communicate. Let’s first consider some upper bounds on the number of bits sent to check that all inputs are the same. The simplest thing to do would be for each player to send their input to the other two players. After this step, everyone knows everyone else’s input, so each player can check that all of the inputs are the same. The total communication required for this protocol is \(6 n\), as each player sends their entire input to the two other players.

It is not hard to see that this procedure is wasteful. It is enough that one player–say Alice–knows whether all of the inputs are the same, and she can tell the other players as much. Thus, we can make a more efficient procedure. Bob and Carol both send their inputs to Alice, and Alice checks if both inputs are the same as her input. She sends a single bit back to Bob and Carol indicating whether all inputs are equal. The total communication required for this procedure is \(2 n + 2\). So we’ve improved the communication cost by a factor of \(3\)–not too shabby. But can we do better?

Let’s talk a little bit about lower bounds. To this end, consider the case where there are only two players, Alice and Bob. As in the previous paragraph, they can check that their inputs are equal using \(n + 1\) bits of communication. To see that \(n\) bits are in fact required, we can use the “fooling set” technique from communication complexity. Suppose to the contrary that Alice and Bob can check equality with \(b < n\) bits of communication. For any \(x \in \{0, 1\}^n\), consider the case where both Alice and Bob have input \(x\). Let \(m_x\) be the “transcript” of Alice and Bob’s conversation when they both have \(x\) as their input. By assumption, the transcript contains at most \(b < n\) bits. Therefore, there are at most \(2^b\) distinct transcripts. Thus, by the pigeonhole principle, there are two values \(x \neq y\) that give the same transcript: \(m_x = m_y\). Now suppose Alice is given input \(x\) and Bob has input \(y\). In this case, when Alice and Bob communicate, they will generate the same transcript \(m_x (= m_y)\). Since the communication transcript determines whether Alice and Bob think they have the same input, they will both be convinced their inputs are the same, contradicting the correctness of the protocol, since in fact \(x \neq y\)! Therefore, we must have \(b \geq n\): Alice and Bob need to communicate at least \(n\) bits to check equality.

To obtain lower bounds for three players, we can use the same ideas as the two player lower bound of \(n\) bits. Assume for a moment that Bob and Carol know they have the same input. How much must Alice communicate with Bob/Carol to verify that her input is the same as theirs? By the argument in the previous paragraph, Alice must exchange a total of \(n\) bits with Bob/Carol. Similarly, Bob must exchange \(n\) bits with Alice/Carol, and Carol must exchange \(n\) bits with Alice/Bob. So the total number of bits exchanged must be at least \(3 n / 2\). The factor of \(1/2\) occurs because we count each bit exchanged between, say Alice and Bob, twice: once when we consider the communication between Alice and Bob/Carol and once when we consider the communication between Bob and Alice/Carol.

As it stands we have a gap in the communication necessary to solve the problem: we have an upper bound of \(2 n + 2\) bits, and a lower bound of \(3 n / 2\) bits. Which of these bounds is correct? Or is the true answer somewhere in between? Up to this point, the techniques we’ve used to understand the problem are fairly routine (at least to those who have studied some communication complexity). In what follows–the contribution of Klim, Noga, and Benny–we will see that it is possible to match the lower bound of \(3 n / 2\) bits using a very clever encoding of the inputs.

The main tool that the authors employ is the existence of a certain family of graphs, which I will refer to as Ruzsa-Szemerédi graphs. Here is the key lemma:

Lemma (Ruzsa-Szemerédi, 1978). For every \(m\), there exists a tripartite graph \(H\) on \(3 m\) vertices which contains \(m^2 / 2^{O(\sqrt{\log m})}\) triangles such that no two triangles share a common edge.

The lemma says that there is a graph with a lot of triangles such that each triangle is determined by any one of its edges. To see how this lemma is helpful (or even relevant) to the problem of testing equality, consider the following modification of the original problem. Instead of being given a string in \(\{0, 1\}^n\), each player is given a triangle in the graph \(H\) from the lemma. Each triangle consists of \(3\) vertices, but the condition of the lemma ensures that each triangle is determined by any two of the three vertices–i.e., a single edge of the triangle. Thus, the three players can verify that they all have the same triangle by sharing information in the following way: Alice sends Bob the first vertex in her triangle; Bob sends Carol the second vertex in his triangle, and Carol sends Alice the third vertex in her triangle.

Miraculously, this procedure reveals enough information for Alice, Bob, and Carol to determine if they were all given the same triangle! Suppose, for example, that Alice and Bob have the same triangle \(T\), but Carol has a triangle \(T’ \neq T\). By the condition of the lemma, \(T’\) and \(T\) can share at most one vertex, so they must differ in at least two vertices. In particular, it must be the case that the second or third (or both) vertices of \(T\) and \(T’\) differ. If the second vertex of \(T\) and \(T’\) differ, then Carol will see that her triangle is not the same as Bob’s triangle (since Bob sends Carol his second vertex). If the third vertex of \(T\) and \(T’\) differ, then Alice will see that her triangle differs from Carol’s, as Carol sends Alice her triangle’s third vertex.

Now consider the case where Alice, Bob, and Carol are all given different triangles \(T_a = \{u_a, v_a, w_a\}\), \(T_b = \{u_b, v_b, w_b\}\), and \(T_c = \{u_c, v_c, w_c\}\). Suppose that all three players accept the outcome of the communication protocol. This means that \(u_a = u_b\) (since Alice sends \(u_a\) to Bob), and similarly \(v_b = v_c\) and \(w_c = w_a\). In particular this implies that \(H\) contains the edges
\[
\{u_a, v_b\} \in T_b, \quad \{v_b, w_a\} \in T_c, \quad \{w_a, u_a\} \in T_a.
\]
Together these three edges form a triangle \(T’\) which is present in the graph \(H\). However, observe that \(T’\) shares an edge \(\{w_a, u_a\}\) with \(T_a\), contradicting the property of \(H\) guaranteed by the lemma. Therefore, Alice, Bob, and Carol cannot all accept if they are each given different triangles! Thus they can determine if they were all given the same triangle by each sending a single vertex of their triangle, which requires \(\log m\) bits.
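The one-vertex-each protocol is easy to express in code. Here is a toy sketch (the function name is my own; triangles are \((u, v, w)\) tuples, and in the real protocol each vertex sent costs about \(\log m\) bits):

```python
def equality_via_triangles(ta, tb, tc):
    """One round of the three-player protocol. Alice, Bob, and Carol hold
    triangles ta, tb, tc from a graph whose triangles are pairwise
    edge-disjoint. Alice sends her first vertex to Bob, Bob his second
    to Carol, and Carol her third to Alice; each player accepts iff the
    received vertex matches their own triangle."""
    bob_accepts = (ta[0] == tb[0])    # Bob compares Alice's first vertex
    carol_accepts = (tb[1] == tc[1])  # Carol compares Bob's second vertex
    alice_accepts = (tc[2] == ta[2])  # Alice compares Carol's third vertex
    return bob_accepts and carol_accepts and alice_accepts
```

With a family of pairwise edge-disjoint triangles (for instance, vertex-disjoint triangles like \((i, i, i)\) across the three parts), the argument above shows all three players accept exactly when their triangles coincide.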

To solve the original problem–where each player is given an \(n\)-bit string–we encode each string as a triangle in a suitable graph \(H\). In order to make this work, we need to choose \(m\) (the number of vertices in \(H\)) large enough that there is one triangle for each possible string. Since there are \(2^n\) possible strings, using the lemma, we need to take \(m\) sufficiently large that
\[
m^2 / 2^{O(\sqrt{\log m})} \geq 2^n.
\]
Taking the logarithm of both sides, we find that
\[
\log m - O(\sqrt{\log m}) \geq \frac{1}{2} n
\]
is large enough. Thus, to send the identity of a single vertex of the triangle which encodes a player’s input, they must send \(\log m\) bits, which is roughly \(\frac 1 2 n\). Therefore, the total communication in this protocol is roughly \(\frac 3 2 n\), which matches the lower bounds. We summarize this result in the following theorem.

Theorem. Suppose \(3\) players each hold \(n\)-bit strings. Then
\[
\frac{3}{2} n + O(\sqrt{n})
\]
bits of communication are necessary and sufficient to test if all three strings are equal.

The paper goes on to generalize this result, but it seems that all of the main ideas are already present in the \(3\) player case.

computer science math music

Audio Fingerprinting

The first time I saw the Shazam app, I was floored. The app listens to a clip of music through your phone’s microphone, and after a few seconds it is able to identify the recording. True to its name, the software works like magic. Even with significant background noise (for instance, in a noisy bar) the app can recognize that Feels Like We Only Go Backwards is playing on the jukebox.

I recently came across a fantastic writeup by Will Drevo about how “audio fingerprinting” schemes are able to correctly identify a recording. Will wrote an open source Python library called dejavu for audio fingerprinting. His writeup gives a detailed overview of how the software works. Will’s explanation is a gem of expository writing about a technical subject. He gives an overview of the requisite knowledge of signal processing necessary to understand how the fingerprinting works, then describes the scheme itself in some detail. Essentially the software works in four steps:

  1. Build a spectrogram of a recording.
  2. Find “peaks” in the spectrogram.
  3. Identify patterns or “constellations” among the peaks which will serve as fingerprints for the recording.
  4. Apply a hash function to the fingerprints to compress them to a more reasonable size.
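The four steps can be caricatured in a few dozen lines of numpy. To be clear, this is not how dejavu is implemented; it is a toy sketch of the pipeline, with the window size, peak rule, and hash format all invented for illustration:

```python
import hashlib
import numpy as np

def fingerprint(signal, window=256, fan_out=5):
    """Toy audio fingerprint: spectrogram -> peaks -> pairs -> hashes."""
    # 1. Spectrogram: FFT magnitudes over non-overlapping windows.
    frames = [signal[i:i + window] for i in range(0, len(signal) - window, window)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))
    # 2. Peaks: bins louder than both frequency neighbors and the frame mean.
    peaks = []
    for t, row in enumerate(spec):
        for f in range(1, len(row) - 1):
            if row[f] > row[f - 1] and row[f] > row[f + 1] and row[f] > row.mean():
                peaks.append((t, f))
    # 3 & 4. Constellations: pair each peak with a few later peaks, then
    # hash each (freq, freq, time-delta) triple down to a short identifier.
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            triple = f"{f1}|{f2}|{t2 - t1}".encode()
            hashes.add(hashlib.sha1(triple).hexdigest()[:10])
    return hashes
```

Matching a noisy clip then amounts to counting how many of its hashes collide with each stored song’s hashes, which is what makes the scheme robust to background noise.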

Will’s explanation of how and why these steps work to give a robust way of recognizing audio recordings is vivid and accessible. It is a remarkable narrative about how signal processing, optimization, and data structures combine to create a truly miraculous product.

computer science expository math musings

John Nash, Cryptography, and Computational Complexity

Recently, the brilliant mathematician John Nash and his wife were killed in a car crash. While Nash was probably most famous for his pioneering work on game theory and his portrayal in Ron Howard’s popular film A Beautiful Mind, he also worked in the field of cryptography. Several years ago, the NSA declassified a series of letters from 1955 between Nash and the agency wherein Nash describes some ideas for an encryption/decryption scheme. While the NSA was not interested in the particular scheme devised by Nash, it seems that Nash foresaw the importance of computational complexity in the field of cryptography. In the letters, Nash states:

Consider the enciphering process with a finite “key,” operating on binary messages. Specifically, we can assume the process [is] described by a function
\[
y_i = F(\alpha_1, \alpha_2, \ldots, \alpha_r; x_i, x_{i-1}, \ldots, x_{i-n})
\]
where the \(\alpha\)’s, \(x\)’s and \(y\)’s are mod 2 and if \(x_i\) is changed with the other \(x\)’s and \(\alpha\)’s left fixed then \(y_i\) is changed. The \(\alpha\)’s denote the “key” containing \(r\) bits of information. \(n\) is the maximum span of the “memory” of the process…

…We see immediately that in principle the enemy needs very little information to break down the process. Essentially, as soon as \(r\) bits of enciphered message have been transmitted the key is about determined. This is no security, for a practical key should not be too long. But this does not consider how easy or difficult it is for the enemy to make the computation determining the key. If this computation, although always possible in principle, were sufficiently long at best the process could still be secure in a practical sense.

Nash goes on to say that

…a logical way to classify the enciphering process is the way in which the computation length for the computation on the key increases with increasing length of the key… Now my general conjecture is as follows: For almost all sufficiently complex types of enciphering…the mean key computation length increases exponentially with the length of the key.

The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable.

To my knowledge, Nash’s letter is the earliest reference to using computational complexity to achieve practical cryptography. The idea is that while it is theoretically possible to decrypt an encrypted message without the key, doing so requires a prohibitive amount of computational resources. Interestingly, Nash’s letter predates an (in)famous 1956 letter from Kurt Gödel to John von Neumann which is widely credited as being the first reference to the “P vs NP problem.” However the essential idea of the P vs NP problem is nascent in Nash’s conjecture: there are problems whose solution can efficiently be verified, but finding such a solution is computationally intractable. Specifically, a message can easily be decrypted if one knows the key, but finding the key to decrypt the message is hopelessly difficult.

The P vs NP problem was only formalized 16 years after Nash’s letters by Stephen Cook in his seminal paper, The complexity of theorem-proving procedures. In fact, Nash’s conjecture is strictly stronger than the P vs NP problem–its formulation is more akin to the exponential time hypothesis, which was only formulated in 1999!

Concerning Nash’s conjecture, he was certainly aware of the difficulty of its proof:

The nature of this conjecture is such that I cannot prove it, even for a special type of cipher. Nor do I expect it to be proven. But this does not destroy its significance. The probability of the truth of the conjecture can be guessed at on the basis of experience with enciphering and deciphering.

Indeed, the P vs NP problem remains among the most notorious open problems in mathematics and theoretical computer science.

PDF of Nash’s Letters to the NSA.

computer science math musings

Match Day, 2015

Yesterday, March 20th, was Match Day 2015, when graduating medical school students are matched with residency programs. The problem of matching residents to hospitals was one of the primary applications that motivated Gale and Shapley to formalize the stable marriage problem back in 1962. In fact, the National Residency Matching Program uses a variant of Gale and Shapley’s original algorithm which they describe in their seminal paper, College Admissions and the Stability of Marriage.
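For reference, the core of Gale and Shapley’s deferred acceptance algorithm fits in a dozen lines. Here is a sketch of the proposer-optimal variant (the NRMP runs a more elaborate version handling couples and hospital capacities; the names below are my own):

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Gale-Shapley deferred acceptance. Each prefs dict maps an agent to
    an ordered list (best first) of agents on the other side. Returns a
    stable matching as a dict acceptor -> proposer."""
    # rank[a][p]: position of proposer p on acceptor a's list (lower is better)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                # proposers currently unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]  # best acceptor p hasn't tried
        next_choice[p] += 1
        if a not in match:
            match[a] = p                       # a tentatively accepts p
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])              # a trades up; old partner freed
            match[a] = p
        else:
            free.append(p)                     # a rejects p outright
    return match
```

No matter the order in which free proposers are processed, the result is the same proposer-optimal stable matching.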

Azure Gilman just published a wonderful article (full disclosure: Azure interviewed me for the article) on this year’s Match Day festivities at Columbia University. One thing I find fascinating about Match Day is that it exposes a very tangible impact that algorithm design can have on our lives. Typically in the computer science setting, we discuss only the computational complexity of an abstract problem, or the efficiency of an algorithm designed to solve a problem. But Match Day gives a vivid example of how algorithm design can impact the very trajectories of people’s lives.

computer science expository math

Shorter Certificates for Set Disjointness

Suppose two players, Alice and Bob, each hold equal sized subsets of \([n] = \{1, 2, \ldots, n\}\). A third party, Carole, wishes to convince Alice and Bob that their subsets are disjoint, i.e., their sets have no elements in common. How efficiently can Carole prove to Alice and Bob that their sets are disjoint?

To formalize the situation, suppose \(k\) is an integer with \(1 < k < n/2\). Alice holds a set \(A \subseteq [n]\) with \(|A| = k\) and similarly Bob holds \(B \subseteq [n]\) with \(|B| = k\). We assume that Carole sees the sets \(A\) and \(B\), but Alice and Bob have no information about each other’s sets. We allow Carole to send messages to Alice and Bob, and we allow Alice and Bob to communicate with each other. Our goal is to minimize the total amount of communication between Carole, Alice, and Bob. In particular, we will consider communication protocols of the following form:

  1. Carole produces a certificate or proof that \(A \cap B = \emptyset\) and sends this certificate to Alice and Bob.
  2. Alice and Bob individually verify that Carole’s certificate is valid with respect to their individual inputs. Alice and Bob then send each other (very short) messages saying whether or not they accept Carole’s proof. If they both accept the proof, they can be certain that their sets are disjoint.

In order for the proof system described above to be valid, it must satisfy the following two properties:

Completeness: If \(A \cap B = \emptyset\) then Carole can send a certificate that Alice and Bob both accept.

Soundness: If \(A \cap B \neq \emptyset\) then any certificate that Carole sends must be rejected by at least one of Alice and Bob.

Before giving a “clever” solution to this communication problem, we describe a naive solution. Since Carole sees \(A\) and \(B\), her proof of their disjointness could simply be to send Alice and Bob the (disjoint) pair \(C = (A, B)\). Then Alice verifies the validity of the certificate \(C\) by checking that her input \(A\) is equal to the first term in \(C\), and similarly Bob checks that \(B\) is equal to the second term. Clearly, if \(A\) and \(B\) are disjoint, Alice and Bob will both accept \(C\), while if \(A\) and \(B\) intersect, Alice or Bob will reject every certificate that Carole could send.

Let us quickly analyze the efficiency of this protocol. The certificate that Carole sends consists of a pair of \(k\)-subsets of \([n]\). The naive encoding of simply listing the elements of \(A\) and \(B\) requires \(2 k \log n\) bits — each \(i \in A \cup B\) requires \(\log n\) bits, and there are \(2 k\) such indices in the list. In fact, even if Carole is much cleverer in her encoding of the sets \(A\) and \(B\), she cannot compress the proof significantly for information theoretic reasons. Indeed, there are
\[
{n \choose k}{n-k \choose k}
\]
distinct certificates that Carole must be able to send, hence her message must be of length at least
\[
\log {n \choose k} \geq \log ((n/k)^k) = k \log n - k \log k.
\]
Is it possible for Carole, Alice, and Bob to devise a more efficient proof system for disjointness?


computer science expository math musings

The Aura of Interactive Proofs

In his essay The Work of Art in the Age of Mechanical Reproduction, Walter Benjamin introduces the idea that original artwork has an aura — some ineffable something that the work’s creator imbues into the work, but which is lost in reproductions made by mechanical means. There is something unique about an original work. Let us imagine that Walter is able to read the aura of a work of art, or sense its absence. Thus, he has the seemingly magical ability to tell original art from mechanical forgery.

Andy Warhol is skeptical of Walter’s claims. Andy doesn’t believe in the aura. And even if the aura does exist, he sincerely doubts that Walter is able to detect its presence. In an attempt to unmask Walter for the fraud he undoubtedly is, Andy hatches a cunning plan to catch Walter in a lie. By hand, he paints an original painting, then using the most advanced technology available, makes a perfect replica of the original. Although the replica looks exactly like the original to the layman (and even Andy himself), according to Walter, there is something missing in the replica.

Andy’s original plan was to present Walter with the two seemingly identical paintings and simply ask which is the original. He soon realized, however, that this approach would be entirely unsatisfactory for his peace of mind. If Walter picked the true original, Andy still couldn’t be entirely convinced of Walter’s powers: maybe Walter just guessed and happened to guess correctly! How can Andy change his strategy in order to (1) be more likely to catch Walter in his lie (if in fact he is lying) and (2) be more convinced of Walter’s supernatural abilities if indeed he is telling the truth?
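One natural strategy is the standard trick behind interactive proofs: Andy secretly flips a coin, shows Walter either the original or the replica, asks which it is, and repeats the challenge many times. A fraud survives \(k\) rounds only with probability \(2^{-k}\). The simulation below is my own sketch of this idea (I am anticipating the resolution discussed in the full post):

```python
import random

def run_trials(walter_can_tell, k, rng=None):
    """Simulate k challenge rounds. Returns True iff Walter answers
    every round correctly. A genuine aura-reader always passes; a
    fraud must guess each round and is caught with prob 1 - 2**-k."""
    rng = rng or random.Random(0)
    for _ in range(k):
        shown_original = rng.random() < 0.5   # Andy's secret coin flip
        if walter_can_tell:
            answer = shown_original           # a real aura-reader never errs
        else:
            answer = rng.random() < 0.5       # a fraud guesses blindly
        if answer != shown_original:
            return False                      # caught in a lie
    return True
```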


computer science expository math

Factor Characterization of Matrix Rank

I am currently taking a course on communication complexity with Alexander Sherstov. Much of communication complexity involves matrix analysis, so yesterday we did a brief review of results from linear algebra. In the review, Sherstov gave the following definition for the rank of a matrix \(M \in \mathbf{F}^{n \times m}\):
\[
\mathrm{rank}(M) = \min\{k \mid M = A B,\ A \in \mathbf{F}^{n \times k},\ B \in \mathbf{F}^{k \times m}\}
\]
Since this “factor” definition of rank appears very different from the standard definition given in a linear algebra course, I thought I would prove that the two definitions are equivalent.

The “standard” definition of rank is
\[
\mathrm{rank}(M) = \mathrm{dim}\ \mathrm{span} \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m\}
\]
where \(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m\) are the columns of \(M\). Equivalently, \(\mathrm{rank}(M)\) is the number of linearly independent columns of \(M\).

An easy consequence of the standard definition of rank is that the rank of a product of matrices is at most the rank of the first matrix: \(\mathrm{rank}(A B) \leq \mathrm{rank}(A)\). Indeed, the columns of \(AB\) are linear combinations of the columns of \(A\), hence the span of the columns of \(AB\) is a subspace of the span of the columns of \(A\). Thus, if we can write \(M = A B\) with \(A \in \mathbf{F}^{n \times k}\) and \(B \in \mathbf{F}^{k \times m}\), we must have \(\mathrm{rank}(M) \leq k\).

It remains to show that if \(\mathrm{rank}(M) = k\) then we can factor \(M = A B\) with \(A \in \mathbf{F}^{n \times k}\) and \(B \in \mathbf{F}^{k \times m}\). To this end, assume without loss of generality that the first \(k\) columns of \(M\) are linearly independent. Write these columns as vectors
\(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\). Denote the remaining columns by \(\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_{m-k}\). Since these vectors lie in the span of the first \(k\) columns of \(M\), we can write the \(\mathbf{w}_j\) as linear combinations of the \(\mathbf{v}_i\):
\[
\mathbf{w}_j = \sum_{i = 1}^k b_{i, j} \mathbf{v}_i \quad\text{for}\quad j = 1, 2, \ldots, m - k.
\]
Now define the matrices
\[
A = (\mathbf{v}_1\ \mathbf{v}_2\ \cdots\ \mathbf{v}_k), \qquad B = (\mathbf{e}_1\ \mathbf{e}_2\ \cdots\ \mathbf{e}_k\ \mathbf{b}_1\ \mathbf{b}_2\ \cdots\ \mathbf{b}_{m - k})
\]
where \(\mathbf{e}_j\) is the \(j\)th standard basis vector in \(\mathbf{F}^k\) and
\[
\mathbf{b}_j = \begin{pmatrix} b_{1, j} \\ b_{2, j} \\ \vdots \\ b_{k, j} \end{pmatrix} \quad\text{for}\quad j = 1, 2, \ldots, m - k.
\]
It is straightforward to verify that we have \(M = A B\), which proves the equivalence of the two definitions of rank.
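The construction in the proof can also be checked numerically. Here is a small sketch over the reals with numpy (the example matrix, and the use of least squares to recover the coefficients \(b_{i,j}\), are my own choices):

```python
import numpy as np

# A 3x3 matrix of rank 2: the third column is the sum of the first two.
M = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
k = np.linalg.matrix_rank(M)

# A: the first k (linearly independent) columns of M.
A = M[:, :k]
# B: an identity block, then the coefficients expressing the remaining
# columns of M as linear combinations of the columns of A.
coeffs = np.linalg.lstsq(A, M[:, k:], rcond=None)[0]
B = np.hstack([np.eye(k), coeffs])

# M factors through F^k, witnessing rank(M) <= k.
assert np.allclose(A @ B, M)
```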

computer science expository math

Introduction to Communication Complexity

I just uploaded an essay “Introduction to Communication Complexity” to my website. The essay is a brief introduction to communication complexity, a field which I find fascinating. The essay is self-contained and completely elementary. It should be accessible to anyone who knows what the words “set,” “function,” and “tree” mean.