Each day, you must make a binary decision: whether to buy or sell a stock, whether to bring an umbrella to work, whether to take surface streets or the freeway on your commute, etc. Each time you make the wrong decision, you are penalized. Fortunately, you have help in the form of \(n\) “expert” advisors, each of whom suggests a choice for you. However, some experts are more reliable than others, and you do not initially know which is the most reliable. How can you find the most reliable expert while incurring the fewest penalties? How few penalties can you incur compared to the best expert?

To make matters simple, first assume that the best expert never gives the wrong advice. How quickly can we find that expert?

Here is a simple procedure that allows us to find the best expert. Choose an expert \(a\) arbitrarily on the first day, and follow their advice. Continue following \(a\)’s advice each day until they make a mistake. After your chosen expert errs, choose a different expert and follow their advice until they err. Continue this process indefinitely. Since we’ve assumed that the best expert never errs, you will eventually follow their advice.

Observe that in the above procedure, each day we either get the correct advice, or we eliminate one of the errant experts. Thus, the total number of errors we make before finding the best expert is at most \(n - 1\). Is it possible to find the best expert with fewer penalties?

Consider the following “majority procedure.” Each day, look at all \(n\) experts’ advice and take the majority opinion (choosing arbitrarily if there is a tie). At the end of the day, fire all incorrect experts, and continue the next day with the remaining experts. To analyze this procedure, observe that each day, either the majority answer is correct (and you are not penalized) or you fire at least half of the remaining experts. Thus, the number of penalties you incur before you find the best expert is at most \(\log n\). That is pretty good!
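As a quick sanity check, here is a small simulation of the majority procedure (my own illustration, assuming one infallible expert while the rest guess at random). The penalty count never exceeds \(\log_2 n\):

```python
import math
import random

def majority_procedure(n, days=1000, seed=0):
    """Follow the majority and fire every expert who errs.

    Expert 0 is assumed infallible; all other experts guess uniformly
    at random. Returns the number of penalties incurred."""
    rng = random.Random(seed)
    experts = set(range(n))
    penalties = 0
    for _ in range(days):
        truth = rng.randint(0, 1)
        advice = {i: truth if i == 0 else rng.randint(0, 1) for i in experts}
        votes = sum(advice[i] for i in experts)
        prediction = 1 if 2 * votes >= len(experts) else 0  # ties -> 1
        if prediction != truth:
            penalties += 1
        # fire all incorrect experts at the end of the day
        experts = {i for i in experts if advice[i] == truth}
        if experts == {0}:
            break
    return penalties

# each penalty fires at least half the remaining experts,
# so the number of penalties is at most log2(n)
for n in (4, 16, 64, 256):
    assert majority_procedure(n) <= math.log2(n)
```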

**Question 1** Is it possible to find the best expert while incurring fewer than \(\log n\) penalties?

To this point, things have been relatively easy because we assumed that the best expert never errs. Thus, as soon as an expert makes an incorrect prediction, we know they are not the best expert. In practice, however, it is unlikely that the best expert is infallible. What if we are only guaranteed that the best expert errs at most \(m\) times?

It is not hard to see that the majority procedure can be generalized to account for an expert that errs at most \(m\) times. The idea is to follow the majority opinion, but to allow an expert to err \(m\) times before firing them. We call the resulting procedure the “\(m + 1\) strikes majority procedure.” Each day, follow the majority opinion (again breaking ties arbitrarily). Every time an expert is wrong, they receive a strike. If an expert ever exceeds \(m\) strikes, they are immediately fired.

To analyze the \(m + 1\) strikes procedure, observe that if the majority opinion is ever wrong, at least half of the remaining experts receive strikes. Since the total number of strikes of all remaining experts cannot exceed \(m n\), after \(2 m + 1\) majority errors, we know that at least half of the experts have been fired. Thus, we find the best expert with penalty at most \((2 m + 1) \log n\). (I suspect that a more clever analysis would give something like \((m + 1) \log n\), thus matching the majority procedure for \(m = 0\).) Can we do better than the \(m + 1\) strikes majority procedure?
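The \(m + 1\) strikes procedure can be sketched in a few lines (a minimal illustration of my own, not optimized code):

```python
def strikes_majority(predictions, outcomes, m):
    """Follow the majority of surviving experts; an expert is fired
    upon their (m + 1)-st mistake.

    predictions[d][i] is expert i's 0/1 advice on day d. Returns the
    number of penalties (days the majority prediction was wrong)."""
    strikes = {i: 0 for i in range(len(predictions[0]))}
    penalties = 0
    for advice, truth in zip(predictions, outcomes):
        alive = list(strikes)
        votes = sum(1 for i in alive if advice[i] == 1)
        prediction = 1 if 2 * votes >= len(alive) else 0  # ties -> 1
        penalties += (prediction != truth)
        for i in alive:
            if advice[i] != truth:
                strikes[i] += 1
                if strikes[i] > m:  # (m + 1)-st strike: fired
                    del strikes[i]
    return penalties

# toy run: expert 0 always right, expert 1 always wrong, m = 1;
# the tie-breaking rule costs two penalties before expert 1 is fired
assert strikes_majority([[0, 1]] * 3, [0, 0, 0], 1) == 2
```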

One objection one might raise to the \(m + 1\) strikes procedure is that an expert must err \(m + 1\) times before any action is taken against them. Another reasonable approach would be to maintain a “confidence” parameter for each expert. Each time the expert errs, their confidence is decreased. When taking a majority decision, each expert’s prediction is then weighted by our confidence in that expert. If we adopt this strategy, we must specify how the confidence is decreased with each error.

Littlestone and Warmuth proposed the following “weighted majority algorithm” to solve this problem. Initially, each expert \(i\) is assigned a weight \(w_i = 1\), where a larger weight indicates a greater confidence in expert \(i\). Each time expert \(i\) errs, \(w_i\) is multiplied by \(1/2\). Each day, we take as our prediction the weighted majority: each expert \(i\)’s prediction carries \(w_i\) votes, and we choose the option receiving the greatest sum of votes.

By way of analysis, consider what happens each time the majority answer is wrong. Let \(W_t\) denote the total weight of all experts after \(t\) penalties have been incurred. Observe that initially we have \(W_0 = n\). When we incur the \((t + 1)\)-st penalty, at least half of the \(W_t\) total votes were cast in favor of the wrong answer. Since this errant weight is multiplied by \(1/2\), at least \(W_t / 4\) of the total weight is removed, so \(W_{t + 1} \leq (3/4) W_t\). Thus, \(W_t \leq (3/4)^t n\). On the other hand, we know that some expert \(i\) errs at most \(m\) times. Therefore, \(2^{-m} \leq w_i \leq W_t\) for all \(t\). Combining this with the previous expression gives

\[

2^{-m} \leq W_t \leq (3/4)^t n.

\]

Taking logs of both sides of this expression, we find that

\[

t \leq \frac{(m + \log n)}{\log(4/3)}.

\]

It seems incredible to me that this simple and elegant algorithm gives such a powerful result. Even if we initially have no idea as to which expert is the best, we can still perform nearly as well as the best expert as long as \(m \gg \log n\). (Note that \(1 / \log(4/3) \approx 2.4\).) Further, we do not even need to know what \(m\) is in advance to apply this procedure!
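Here is a short sketch of the weighted majority algorithm for the binary case (my own code; the final assertion checks the bound derived above on a randomly generated scenario in which expert 0 errs exactly \(m\) times):

```python
import math
import random

def weighted_majority(predictions, outcomes):
    """Littlestone-Warmuth weighted majority for binary predictions.

    predictions[d][i] is expert i's 0/1 advice on day d. Returns the
    number of penalties incurred."""
    w = [1.0] * len(predictions[0])
    penalties = 0
    for advice, truth in zip(predictions, outcomes):
        vote_for_1 = sum(wi for wi, a in zip(w, advice) if a == 1)
        prediction = 1 if 2 * vote_for_1 >= sum(w) else 0
        penalties += (prediction != truth)
        # halve the weight of every expert who erred today
        w = [wi / 2 if a != truth else wi for wi, a in zip(w, advice)]
    return penalties

rng = random.Random(1)
n, days, m = 10, 200, 5
outcomes = [rng.randint(0, 1) for _ in range(days)]
err_days = set(rng.sample(range(days), m))  # expert 0 errs on these days
predictions = [
    [1 - truth if i == 0 and d in err_days
     else truth if i == 0
     else rng.randint(0, 1)
     for i in range(n)]
    for d, truth in enumerate(outcomes)
]
t = weighted_majority(predictions, outcomes)
assert t <= (m + math.log2(n)) / math.log2(4 / 3)  # the bound above
```

The bound holds for any sequence of predictions and outcomes, so the assertion passes no matter how the adversarial experts behave.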

**Question 2** Is it possible to improve upon the bound of the weighted majority algorithm?

Daniel’s video adheres to the following principles, which I think are fairly universal in mathematical problem solving.

- Start with a concrete problem. If the problem has a nice story to go along with it, all the better. The Josephus Problem is a great example of a concrete mathematical question. Given a method by which the soldiers kill one another and the number of soldiers, where should Josephus stand to be the last living soldier?
- Formalize and generalize the problem. What is special about the number 41? The mechanism by which the soldiers kill one another works just as well for any number of soldiers, so consider the problem for \(n\) soldiers.
- Consider simpler versions of the general problem. Now that we have the general \(n\)-soldier Josephus problem, we can easily work out a few examples when \(n\) is small. To quote Polya, “If you can’t solve a problem, then there is an easier problem you can’t solve: find it.” This process of finding simpler and simpler related problems until you find one you can solve is to me the most important general problem solving method.
- Solve enough of the “simple” problems until you see a pattern. Solving the simpler problems gives one both data and intuition that will allow you to conjecture about a general solution.
- Generalize the pattern as much as you can so that it fits the examples you’ve solved. Even if the pattern doesn’t give a complete answer (for example, Daniel’s observation that if \(n\) is a power of \(2\), soldier \(1\) is the last living soldier), even a partial solution is likely valuable to understanding a complete solution.
- Prove your generalization of the pattern to obtain a solution to the general problem. Often, this doesn’t happen all at once. The Numberphile video happens to give a particularly elegant solution in a very short period of time. Don’t get discouraged when not everything falls into place the first time you try to solve the problem!
- Apply your general solution to the original concrete problem.
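The steps above can be made concrete on the Josephus problem itself. A brute-force simulation (my own sketch, assuming the rule from the video: soldier 1 starts with the sword and each soldier kills the next living soldier around the circle) generates the data for steps 3 and 4 and confirms both the power-of-2 observation and the answer for 41 soldiers:

```python
def josephus(n):
    """Position of the survivor among n soldiers in a circle."""
    alive = list(range(1, n + 1))
    i = 0  # index of the soldier currently holding the sword
    while len(alive) > 1:
        i = (i + 1) % len(alive)  # the next living soldier...
        del alive[i]              # ...is killed; the sword passes on
    return alive[0]

# step 4: tabulate small cases and look for the pattern
assert [josephus(n) for n in range(1, 10)] == [1, 1, 3, 1, 3, 5, 7, 1, 3]
# Daniel's observation: if n is a power of 2, soldier 1 survives
assert all(josephus(2 ** k) == 1 for k in range(8))
# the original concrete problem: with 41 soldiers, stand in position 19
assert josephus(41) == 19
```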

In my own research, I follow the strategies above. In particular, Polya’s advice regarding finding and solving simpler problems (steps 3 and 4) is maybe the most valuable single piece of problem solving advice I know of. I think math could be characterized as the art of generalizing simple observations. Often, the simple observations arise by wasting a lot of paper trying to solve simple problems.

The narrative outlined in the steps above is also valuable from a pedagogic standpoint. By starting with a tangible (if slightly morbid) problem, the student/participant immediately has some intuition about the problem before beginning formal analysis. In my experience, one of the biggest challenges students face is connecting abstract statements and theorems to concrete problems. By introducing the concrete problem first and using this problem to motivate the formal development needed to solve the problem, students can begin using their imagination earlier in the problem solving process. This makes learning more interactive, memorable, and effective.

For concreteness, suppose the network is a triangle. That is, there are three nodes, played by Alice, Bob, and Carol, and each pair of nodes can communicate. Let’s first consider some upper bounds on the number of bits sent to check that all inputs are the same. The simplest thing to do would be for each player to send their input to the other two players. After this step, everyone knows everyone else’s input, so each player can check that all of the inputs are the same. The total communication required for this protocol is \(6 n\), as each player sends their entire input to the two other players.

It is not hard to see that this procedure is wasteful. It is enough that one player–say Alice–knows whether all of the inputs are the same, and she can tell the other players as much. Thus, we can make a more efficient procedure. Bob and Carol both send their inputs to Alice, and Alice checks if both inputs are the same as her input. She sends a single bit back to Bob and Carol indicating whether all inputs are equal. The total communication required for this procedure is \(2 n + 2\). So we’ve improved the communication cost by a factor of \(3\)–not too shabby. But can we do better?

Let’s talk a little bit about lower bounds. To this end, consider the case where there are only two players, Alice and Bob. As in the previous paragraph, they can check that their inputs are equal using \(n + 1\) bits of communication. To see that \(n\) bits are in fact required, we can use the “fooling set” technique from communication complexity. Suppose to the contrary that Alice and Bob can check equality with \(b < n\) bits of communication. For any \(x \in \{0, 1\}^n\), consider the case where both Alice and Bob have input \(x\). Let \(m_x\) be the “transcript” of Alice and Bob’s conversation when they both have \(x\) as their input. By assumption, the transcript contains at most \(b < n\) bits. Therefore, there are at most \(2^b\) distinct transcripts. Thus, by the pigeonhole principle, there are two values \(x \neq y\) that give the same transcript: \(m_x = m_y\). Now suppose Alice is given input \(x\) and Bob has input \(y\). In this case, when Alice and Bob communicate, they will generate the same transcript \(m_x (= m_y)\). Since the communication transcript determines whether Alice and Bob think they have the same input, they will both be convinced their inputs are the same, contradicting the assumption that \(x \neq y\)! Therefore, we must have \(b \geq n\): Alice and Bob need to communicate at least \(n\) bits to check equality.

To obtain lower bounds for three players, we can use the same ideas as the two player lower bound of \(n\) bits. Assume for a moment that Bob and Carol know they have the same input. How much must Alice communicate with Bob/Carol to verify that her input is the same as theirs? By the argument in the previous paragraph, Alice must exchange a total of \(n\) bits with Bob/Carol. Similarly, Bob must exchange \(n\) bits with Alice/Carol, and Carol must exchange \(n\) bits with Alice/Bob. So the total number of bits exchanged must be at least \(3 n / 2\). The factor of \(1/2\) occurs because we count each bit exchanged between, say Alice and Bob, twice: once when we consider the communication between Alice and Bob/Carol and once when we consider the communication between Bob and Alice/Carol.

As it stands we have a gap in the communication necessary to solve the problem: we have an upper bound of \(2 n + 2\) bits, and a lower bound of \(3 n / 2\) bits. Which of these bounds is correct? Or is the true answer somewhere in between? Up to this point, the techniques we’ve used to understand the problem are fairly routine (at least to those who have studied some communication complexity). In what follows–the contribution of Klim, Noga, and Benny–we will see that it is possible to match the lower bound of \(3 n / 2\) bits using a very clever encoding of the inputs.

The main tool that the authors employ is the existence of a certain family of graphs, which I will refer to as Ruzsa-Szemeredi graphs. Here is the key lemma:

Lemma (Ruzsa-Szemeredi, 1978). For every \(m\), there exists a tri-partite graph \(H\) on \(3 m\) vertices which contains \(m^2 / 2^{O(\sqrt{\log m})}\) triangles such that no two triangles share a common edge.

The lemma says that there is a graph with a lot of triangles such that each triangle is determined by any one of its edges. To see how this lemma is helpful (or even relevant) to the problem of testing equality, consider the following modification of the original problem. Instead of being given a string in \(\{0, 1\}^n\), each player is given a triangle in the graph \(H\) from the lemma. Each triangle consists of \(3\) vertices, but the condition of the lemma ensures that each triangle is determined by any two of the three vertices–i.e., a single edge of the triangle. Thus, the three players can verify that they all have the same triangle by sharing information in the following way: Alice sends Bob the first vertex in her triangle; Bob sends Carol the second vertex in his triangle, and Carol sends Alice the third vertex in her triangle.

Miraculously, this procedure reveals enough information for Alice, Bob, and Carol to determine if they were all given the same triangle! Suppose, for example, that Alice and Bob have the same triangle \(T\), but Carol has a triangle \(T’ \neq T\). By the condition of the lemma, \(T’\) and \(T\) can share at most one vertex, so they must differ in at least two vertices. In particular, it must be the case that the second or third (or both) vertices of \(T\) and \(T’\) differ. If the second vertex of \(T\) and \(T’\) differ, then Carol will see that her triangle is not the same as Bob’s triangle (since Bob sends Carol his second vertex). If the third vertex of \(T\) and \(T’\) differ, then Alice will see that her triangle differs from Carol’s, as Carol sends Alice her triangle’s third vertex.

Now consider the case where Alice, Bob, and Carol are all given different triangles \(T_a = \{u_a, v_a, w_a\}\), \(T_b = \{u_b, v_b, w_b\}\), and \(T_c = \{u_c, v_c, w_c\}\). Suppose that all three players accept the outcome of the communication protocol. This means that \(u_a = u_b\) (since Alice sends \(u_a\) to Bob), and similarly \(v_b = v_c\) and \(w_c = w_a\). In particular this implies that \(H\) contains the edges

\[

\{u_a, v_b\} \in T_b, \{v_b, w_a\} \in T_c, \{w_a, u_a\} \in T_a.

\]

Together these three edges form a triangle \(T’\) which is present in the graph \(H\). However, observe that \(T’\) shares an edge \(\{w_a, u_a\}\) with \(T_a\), contradicting the property of \(H\) guaranteed by the lemma. Therefore, Alice, Bob, and Carol cannot all accept if they are each given different triangles! Thus they can determine if they were all given the same triangle by each sending a single vertex of their triangle, which requires \(\log m\) bits.
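To make the protocol concrete, here is a toy simulation (my own construction: three pairwise vertex-disjoint, hence edge-disjoint, triangles stand in for the Ruzsa-Szemeredi graph; in this tiny example, all three players accept exactly when the triangles coincide):

```python
from itertools import product

# Three pairwise vertex-disjoint triangles T_i = (a_i, b_i, c_i) in a
# tripartite graph with parts {a0,a1,a2}, {b0,b1,b2}, {c0,c1,c2}.
triangles = [(f"a{i}", f"b{i}", f"c{i}") for i in range(3)]

def protocol(Ta, Tb, Tc):
    """Alice sends her 1st vertex to Bob, Bob his 2nd to Carol, and
    Carol her 3rd to Alice; each player accepts if the vertex they
    receive matches their own triangle."""
    bob_accepts = Ta[0] == Tb[0]    # Alice -> Bob
    carol_accepts = Tb[1] == Tc[1]  # Bob -> Carol
    alice_accepts = Tc[2] == Ta[2]  # Carol -> Alice
    return alice_accepts and bob_accepts and carol_accepts

# In this toy graph, acceptance happens exactly when the triangles agree.
for Ta, Tb, Tc in product(triangles, repeat=3):
    assert protocol(Ta, Tb, Tc) == (Ta == Tb == Tc)
```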

To solve the original problem–where each player is given an \(n\)-bit string–we encode each string as a triangle in a suitable graph \(H\). In order to make this work, we need to choose \(m\) (where \(H\) has \(3 m\) vertices) large enough that there is one triangle for each possible string. Since there are \(2^n\) possible strings, using the lemma, we need to take \(m\) sufficiently large that

\[

m^2 / 2^{O(\sqrt{\log m})} \geq 2^n.

\]

Taking the logarithm of both sides, we find that

\[

\log m - O(\sqrt{\log m}) \geq \frac 1 2 n

\]

is large enough. Thus, to send the identity of a single vertex of the triangle encoding a player’s input, a player must send \(\log m\) bits, which is roughly \(\frac 1 2 n\). Therefore, the total communication in this protocol is roughly \(\frac 3 2 n\), which matches the lower bound. We summarize this result in the following theorem.

Theorem. Suppose \(3\) players each hold \(n\)-bit strings. Then

\[

\frac 3 2 n + O(\sqrt{n})

\]

bits of communication are necessary and sufficient to test if all three strings are equal.

The paper goes on to generalize this result, but it seems that all of the main ideas are already present in the \(3\) player case.

This post is a very brief introduction to some basic concepts in probability theory. We encounter uncertainty often in our everyday lives, for example, in the weather, games of chance (think of rolling dice or shuffling a deck of cards), financial markets, etc. Probability theory provides a language to quantify uncertainty, thereby allowing us to reason about future events whose outcomes are not yet known. In this note, we only consider events where the number of potential outcomes is finite.

The basic object of study in probability is a **probability space**. A (finite) probability space consists of a (finite) **sample space** \(\Omega = \{\omega_1, \omega_2, \ldots, \omega_n\}\) together with a function \(P : \Omega \to [0,1]\) which assigns probabilities to the **outcomes** \(\omega_i \in \Omega\). The probability function \(P\) satisfies \(P(\omega_i) \geq 0\) for all \(i\) and

\[

\sum_{i = 1}^n P(\omega_i) = 1.

\]

The interpretation is that \(P\) assigns probabilities, or likelihoods, to the **outcomes** in \(\Omega\).

**Example.** We model the randomness of tossing a coin. In this case, there are two possible outcomes of a coin toss, heads or tails, so we take our sample space to be \(\Omega = \{H, T\}\). Since heads and tails are equally likely, we have \(P(H) = P(T) = 1/2\).

**Example.** Rolling a standard (six-sided) die has six outcomes which are equally likely. Thus we take \(\Omega = \{1, 2, 3, 4, 5, 6\}\) and \(P(i) = 1/6\) for \(i = 1, 2, \ldots, 6\).

**Definition.** Let \(\Omega\) be a sample space and \(P\) a probability function on \(\Omega\). A **random variable** is a function \(X : \Omega \to \mathbf{R}\).

**Example.** Going back to the coin toss example above, we can define a random variable \(C\) on the coin toss probability space defined by \(C(H) = 1\) and \(C(T) = -1\). This random variable may arise as a simple game: two players toss a coin and bet on the outcome. If the outcome is heads, player 1 wins one dollar, while if the outcome is tails, player 1 loses one dollar.

**Example.** For the die rolling example, we can define the random variable \(R\) by \(R(i) = i\). The value of \(R\) is simply the value showing on the die after it is rolled.

**Definition.** A fundamental value assigned to a random variable is its **expected value**, **expectation**, or **average**, which is defined by the formula

\[

E(X) = \sum_{i = 1}^n X(\omega_i) P(\omega_i).

\]

The expected value of a random variable quantifies the behavior we expect to see in a random variable if we repeat an experiment (e.g. a coin flip or die roll) many times over.

**Example.** Going back to the coin flip example, we can compute

\[

E(C) = C(H) P(H) + C(T) P(T) = 1 \cdot \frac{1}{2} + (-1) \cdot \frac{1}{2} = 0.

\]

This tells us that if we repeatedly play the coin flip betting game described above, neither player has an advantage. The players expect to win about as much as they lose.

For the die rolling example, we compute

\[

E(R) = R(1) P(1) + \cdots + R(6) P(6) = 1 \cdot \frac{1}{6} + \cdots + 6 \cdot \frac{1}{6} = 3.5.

\]

Thus an “average” die roll is \(3.5\) (even though \(3.5\) cannot be the outcome of a single die roll).
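Both computations can be checked directly from the definition \(E(X) = \sum_i X(\omega_i) P(\omega_i)\). A small self-contained sketch:

```python
def expectation(X, P):
    """E(X) = sum over outcomes w of X(w) * P(w)."""
    return sum(X(w) * p for w, p in P.items())

coin = {"H": 1 / 2, "T": 1 / 2}        # the coin-toss space
die = {i: 1 / 6 for i in range(1, 7)}  # the die-roll space

C = lambda w: 1 if w == "H" else -1    # the betting-game payoff
R = lambda i: i                        # the value showing on the die

assert expectation(C, coin) == 0
assert abs(expectation(R, die) - 3.5) < 1e-9
```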

Often, when speaking about random variables we omit reference to the underlying probability space. In this case, we speak only of the probability that a random variable \(X\) takes on various values. For the coin flipping example above, we could have just defined \(C\) by

\[

P(C = 1) = P(C = -1) = 1/2

\]

without saying anything about the underlying sample space \(\Omega\). The danger in this view is that if we don’t explicitly define \(C\) as a function on some probability space, it may make comparison of random variables difficult. To see an example of this, consider the random variable \(S\) defined on the die roll sample space, \(\Omega = \{1,\ldots, 6\}\) by

\[

S(i) =

\begin{cases}

1 &\text{ if } i \text{ is even}\\

-1 &\text{ if } i \text{ is odd}.

\end{cases}

\]

Notice that, like our variable \(C\) defined for coin flips, we have \(P(S = 1) = P(S = -1) = 1/2\), so in some sense \(C\) and \(S\) are “the same.” However, they are defined on different sample spaces: \(C\) is defined on the sample space of coin flips, while \(S\) is defined on the sample space of die rolls.

Consider a game where the play is determined by a coin flip and a die roll. For the examples above, the random variable \(C\) depends only on the outcome of the coin flip, while \(R\) and \(S\) depend only on the outcome of the die roll. Since the outcome of the coin flip has no effect on the outcome of the die roll, the variables \(C\) and \(R\) are independent of one another, as are \(C\) and \(S\). However, \(R\) and \(S\) depend on the same outcome (the die roll) so their values may depend on each other. In fact, the value of \(S\) is completely determined by the value of \(R\)! So knowing the value of \(R\) allows us to determine the value of \(S\), and knowing the value of \(S\) tells us something about the value of \(R\) (namely whether \(R\) is even or odd).

**Definition.** Suppose \(X\) and \(Y\) are random variables defined on the same probability space (i.e., \(X, Y : \Omega \to \mathbf{R}\)). We say that \(X\) and \(Y\) are **independent** if for all possible values \(x\) of \(X\) and \(y\) of \(Y\) we have

\[

P(X = x \text{ and } Y = y) = P(X = x) P(Y = y).

\]

For our examples above with the coin flip and the die roll, \(C\) and \(R\) cannot be said to be independent because they are defined on different probability spaces. The variables \(R\) and \(S\) are both defined on the die roll sample space, so they can be compared. However, they are not independent. For example, we have \(P(R = 1) = 1/6\) and \(P(S = 1) = 1/2\). Since \(S(i) = 1\) only when \(i\) is even, we have

\[

P(R = 1 \text{ and } S = 1) = 0 \neq \frac{1}{6} \cdot \frac{1}{2}.

\]

Let \(W\) be the random variable on the die roll sample space defined by

\[

W(i) =

\begin{cases}

1 & \text{ if } i = 1, 4\\

2 & \text{ if } i = 2, 5\\

3 & \text{ if } i = 3, 6.

\end{cases}

\]

We claim that \(W\) and \(S\) are independent. This can be verified by brute force calculation. For example, note that we have \(S = 1\) and \(W = 1\) only when the outcome of the die roll is 4. Therefore,

\[

P(S = 1 \text{ and } W = 1) = \frac{1}{6} = P(S = 1) P(W = 1).

\]

Similar calculations show that the analogous equality holds for all possible values of \(S\) and \(W\); hence these random variables are independent.
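The brute force calculation can be automated. Here is a short check (my own sketch) that \(S\) and \(W\) satisfy the definition of independence for every pair of values, while \(R\) and \(S\) do not:

```python
from itertools import product

die = range(1, 7)

R = lambda i: i
S = lambda i: 1 if i % 2 == 0 else -1
W = lambda i: {1: 1, 4: 1, 2: 2, 5: 2, 3: 3, 6: 3}[i]

def prob(event):
    """P(event) on the uniform die-roll space."""
    return sum(1 for i in die if event(i)) / 6

def independent(X, Y):
    """Check P(X = x and Y = y) = P(X = x) P(Y = y) for all x, y."""
    xs, ys = {X(i) for i in die}, {Y(i) for i in die}
    return all(
        abs(prob(lambda i: X(i) == x and Y(i) == y)
            - prob(lambda i: X(i) == x) * prob(lambda i: Y(i) == y)) < 1e-12
        for x, y in product(xs, ys)
    )

assert independent(S, W)      # S and W are independent
assert not independent(R, S)  # R determines S, so not independent
```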

Given two random variables \(X\) and \(Y\) defined on the same probability space, we can define their sum \(X + Y\) and product \(X Y\).

**Proposition.** Suppose \(X, Y : \Omega \to \mathbf{R}\) are independent random variables. Then

\[

E(X + Y) = E(X) + E(Y) \quad\text{and}\quad E(X Y) = E(X) E(Y).

\]

**Proof.** For the first equality, by definition we compute

\[

E(X + Y) = \sum_{x, y} P(X = x \text{ and } Y = y) (x + y).

\]

Using the fact that \(X\) and \(Y\) are independent, we find

\[

\begin{align}

E(X + Y) &= \sum_{x, y} P(X = x \text{ and } Y = y) (x + y)\\

&= \sum_{x, y} P(X = x) P(Y = y) (x + y)\\

&= \sum_{x} P(X = x) x \sum_y P(Y = y) + \sum_y P(Y = y) y \sum_x P(X = x)\\

&= \sum_x P(X = x) x + \sum_y P(Y = y) y\\

&= E(X) + E(Y).

\end{align}

\]

The fourth equality holds because \(P\) satisfies \(\sum_x P(X = x) = 1\) and \(\sum_y P(Y = y) = 1\). Similarly, we compute

\[

\begin{align}

E(X Y) &= \sum_{x, y} P(X = x \text{ and } Y = y) x y\\

&= \sum_{x, y} P(X = x) P(Y = y) x y\\

&= \left(\sum_{x} P(X = x) x\right) \left(\sum_{y} P(Y = y) y\right)\\

&= E(X) E(Y).

\end{align}

\]

These equations give the desired results. ∎
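The proposition can also be verified numerically for the independent pair \(S\) and \(W\) defined earlier (a small self-contained sketch; the uniform die-roll space is rebuilt inline):

```python
die = range(1, 7)
E = lambda X: sum(X(i) * (1 / 6) for i in die)  # expectation on die rolls

S = lambda i: 1 if i % 2 == 0 else -1
W = lambda i: {1: 1, 4: 1, 2: 2, 5: 2, 3: 3, 6: 3}[i]

# S and W are independent, so both equalities of the proposition hold
assert abs(E(lambda i: S(i) + W(i)) - (E(S) + E(W))) < 1e-12
assert abs(E(lambda i: S(i) * W(i)) - E(S) * E(W)) < 1e-12
```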

The equation \(E(X + Y) = E(X) + E(Y)\) is satisfied even if \(X\) and \(Y\) are not independent. This fundamental fact about probability is known as the **linearity of expectation**. However, the equality \(E(X Y) = E(X) E(Y)\) can fail when \(X\) and \(Y\) are not independent.

**Exercise.** Prove that \(E(X + Y) = E(X) + E(Y)\) without assuming that \(X\) and \(Y\) are independent.

**Exercise.** Give an example of random variables \(X\) and \(Y\) for which \(E(X Y) \neq E(X) E(Y)\). (Note that \(X\) and \(Y\) cannot be independent.)

I recently came across a fantastic writeup by Will Drevo about how “audio fingerprinting” schemes are able to correctly identify a recording. Will wrote an open source Python library called dejavu for audio fingerprinting. His writeup gives a detailed overview of how the software works. Will’s explanation is a gem of expository writing about a technical subject. He gives an overview of the requisite knowledge of signal processing necessary to understand how the fingerprinting works, then describes the scheme itself in some detail. Essentially the software works in four steps:

- Build a spectrogram of a recording.
- Find “peaks” in the spectrogram.
- Identify patterns or “constellations” among the peaks which will serve as fingerprints for the recording.
- Apply a hash function to the fingerprints to compress them to a more reasonable size.

Will’s explanation of how and why these steps work to give a robust way of recognizing audio recordings is vivid and accessible. It is a remarkable narrative about how signal processing, optimization, and data structures combine to create a truly miraculous product.
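To give a flavor of steps 2 through 4, here is a toy illustration of my own (emphatically not dejavu’s actual code): find local peaks in a tiny “spectrogram” grid, pair nearby peaks into constellations, and hash each pair.

```python
import hashlib

# a tiny fake spectrogram: rows are time slices, columns frequency bins
spectrogram = [
    [0, 1, 0, 0, 9, 0],
    [2, 0, 0, 0, 0, 0],
    [0, 0, 7, 0, 0, 1],
    [0, 3, 0, 0, 0, 0],
]

def peaks(grid):
    """Cells strictly larger than each of their 4 neighbors."""
    out = []
    for t, row in enumerate(grid):
        for f, v in enumerate(row):
            nbrs = [grid[t2][f2]
                    for t2, f2 in ((t - 1, f), (t + 1, f), (t, f - 1), (t, f + 1))
                    if 0 <= t2 < len(grid) and 0 <= f2 < len(row)]
            if v > 0 and all(v > nb for nb in nbrs):
                out.append((t, f))
    return out

def fingerprints(grid, fanout=2):
    """Hash (freq1, freq2, time-delta) for pairs of nearby peaks."""
    ps = sorted(peaks(grid))
    return [hashlib.sha1(f"{f1}|{f2}|{t2 - t1}".encode()).hexdigest()[:10]
            for i, (t1, f1) in enumerate(ps)
            for (t2, f2) in ps[i + 1 : i + 1 + fanout]]

assert len(peaks(spectrogram)) == 6
assert len(fingerprints(spectrogram)) == 9
```

The real system does this on spectrograms of actual audio and stores the hashes in a database keyed by song; matching a recording means looking up its hashes and finding the song with the most time-aligned hits.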

- Be organized. Write a brief lesson plan/list of topics on the board before class and follow it. The students rely on this structure to remind them of the big picture for the class. (*Author’s note: Connie Chung, the undergraduate advisor in the math department, said that disorganization is the most common issue that students cite in poor course evaluations. So it is unsurprising that this was the first piece of advice given by the undergraduate panel.*)
- Do concrete examples, and do a lot of them. Don’t dwell too much on theory in discussion–a brief (~5 minute) review usually suffices. Students tend to have more difficulty connecting the theory with applications than understanding the theory itself.
- Do examples you find “interesting” for upper division classes, but stick with more “standard” examples for lower division courses. Most students in lower division classes are in your discussion section to see how to solve the types of problems likely to appear on their homework/exams.
- Communicate with the instructor/lecturer for the course. Make sure that you are reviewing material that the students have seen in lecture. This is usually more helpful than giving a preview of material to come.
- Give detailed solutions or don’t bother giving a solution at all. The students want to see problems solved to completion, not just set up and cast aside. Often students struggle with the details in finishing a problem. Only when they see a full, detailed solution are they confident they understand how to solve the problem.
- Gauge the level of your class. Don’t assume that because nobody asked questions about an example or topic that everyone understands it. Often, the students who don’t understand are afraid to speak up. So ask questions like, “What should I do next?” rather than “Are there any questions?” If you have to wait a while for suggestions on how to proceed, you need to spend more time on that topic.
- Use icebreakers. Get the students talking to each other and open avenues for them to interact more. This is best done at the beginning of the term to get everyone talking on day 1. (*Author’s note: I always hated ice breakers as a student because I thought they were a waste of time. I was surprised that our panel unanimously supported them and found them valuable. I may be a convert yet…*)
- Your discussion session should be informal. Students want to be comfortable asking and answering questions.
- Always start the class with “easy” examples. This will make sure the students are not overwhelmed early on, and will help you diagnose problems if the students have difficulty. (See #6.)
- Make your lessons interactive. If you want to solve a problem as an example, don’t jump right into the solution. Write the question on the board, and give students time to think about how to solve it. Solicit suggestions from the class on how to approach the problem, and help guide them to a solution. The students get the most out of the class when they are actively participating in the problem solving, rather than just copying down your solution. (*This approach also helps you understand better what your students need the most help with. See #6.*)
- Choose content/applications from student interest. For example, if you have a lot of physics majors in your class, they might want to see examples of how the material in your course is relevant to mechanics or electricity and magnetism. You might need to do a bit of homework yourself to find connections with biology, etc., but the students will appreciate it.
- You are never going too slow. (*Not a single panelist had ever been in a discussion where they felt the TA was going too slowly. Not. One.*)

Looking over all of the advice and suggestions our undergraduate panel gave the new TAs in the math department, there are very few things I found surprising. Nonetheless, I thought the advice was more impactful coming from students rather than professors or TAs. Overall, the students preferred discussion sections that employed active learning. Many studies show that active learning generally leads to higher student achievement, and the anecdotal evidence from the panel shows that not only is active learning effective, but the students prefer it (at least in discussion sections). A recent series of articles/blog posts from the AMS serves as a great primer for active learning strategies in math.

The following limit appeared on yesterday’s Math 32A final:

\[

\lim_{(x, y) \to (0, 0)} \frac{\sin x^2 + \tan^2 y}{x^2 + \tan y^2}.

\]

The limit seemed to cause a lot of trouble. In fact, no one in the lecture of 200 students had a perfect solution! I thought I would write up a solution here to give some closure on the problem. I will use only tools that appear in a standard calculus course: the squeeze theorem and the fact that \(\lim_{x \to 0} \sin(x) / x = 1\).
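Before diving into the proof, a quick numerical sanity check (evidence, not a proof; my own script) suggests the limit is indeed \(1\):

```python
import math

def f(x, y):
    """(sin x^2 + tan^2 y) / (x^2 + tan y^2), the function in question."""
    return (math.sin(x ** 2) + math.tan(y) ** 2) / (x ** 2 + math.tan(y ** 2))

# approach the origin along two different paths; f tends to 1 on both
for t in (1e-1, 1e-2, 1e-3):
    assert abs(f(t, t) - 1) < 10 * t       # along y = x
    assert abs(f(t, -2 * t) - 1) < 10 * t  # along y = -2x
```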

The first thing to note is that evaluating the limit along the \(x\)-axis (taking \(y = 0\)) gives a limit of \(1\):

\[

\lim_{x \to 0} \frac{\sin x^2}{x^2} = \lim_{u \to 0} \frac{\sin u}{u} = 1.

\]

This tells us that *if the original limit exists*, it must be \(1\). However, this argument does not allow us to conclude that the limit does indeed exist. Choosing any other path through \((0, 0)\) will also give a limit of \(1\), so one should anticipate that the limit exists and is equal to \(1\). We remark that, in general, \(\lim_{(x, y) \to (a, b)} f(x, y) = L\) if and only if \(\lim_{(x, y) \to (a, b)} |f(x, y) - L| = 0\). This reformulation is convenient because \(0 \leq |f(x, y) - L|\), so applying the squeeze theorem is less fussy: we only need to bound \(|f(x, y) - L| \leq g(x, y)\) where \(\lim_{(x, y) \to (a, b)} g(x, y) = 0\). Thus, we wish to show that

\[

\lim_{(x, y) \to (0, 0)} \left| \frac{\sin x^2 + \tan^2 y}{x^2 + \tan y^2} - 1\right| = 0.

\]

To this end, we compute the following bound:

\[

\begin{align*}

\left| \frac{\sin x^2 + \tan^2 y}{x^2 + \tan y^2} - 1\right| &= \left| \frac{\sin x^2 + \tan^2 y}{x^2 + \tan y^2} - \frac{x^2 + \tan y^2}{x^2 + \tan y^2}\right|\\

&= \left| \frac{\sin x^2 - x^2 + \tan^2 y - \tan y^2}{x^2 + \tan y^2}\right|\\

&\leq \left| \frac{\sin x^2 - x^2}{x^2 + \tan y^2}\right| + \left| \frac{\tan^2 y - \tan y^2}{x^2 + \tan y^2}\right|\\

&\leq \left| \frac{\sin x^2 - x^2}{x^2}\right| + \left| \frac{\tan^2 y - \tan y^2}{\tan y^2}\right|.

\end{align*}

\]

The first inequality holds by the triangle inequality (\(|a + b| \leq |a| + |b|\)), while the second inequality holds because \(\tan y^2 \geq 0\) (when \(y\) is close to \(0\)) and \(x^2 \geq 0\). Since we have

\[

0 \leq \left| \frac{\sin x^2 + \tan^2 y}{x^2 + \tan y^2} - 1\right| \leq \left| \frac{\sin x^2 - x^2}{x^2}\right| + \left| \frac{\tan^2 y - \tan y^2}{\tan y^2}\right|,

\]

it suffices to show that the limit of the expression on the right is \(0\) as \((x, y) \to (0, 0)\). The expression on the right is much easier to deal with than the original limit because it is the sum of two terms, each of which only involves a single variable. Thus, one can use familiar methods from single variable calculus to compute the limits. One can show that in fact we have

\[

\lim_{x \to 0} \left| \frac{\sin x^2 - x^2}{x^2}\right| = 0 \quad\text{and}\quad \lim_{y \to 0} \left| \frac{\tan^2 y - \tan y^2}{\tan y^2}\right| = 0.

\]

The first limit follows easily from the fact that \(\lim_{x \to 0} \sin(x) / x = 1\), while the second limit requires a little more ingenuity. We compute

\[

\begin{align*}

\lim_{y \to 0} \left| \frac{\tan^2 y - \tan y^2}{\tan y^2}\right| &= \left| \lim_{y \to 0} \frac{\tan^2 y}{\tan y^2} - \lim_{y \to 0} \frac{\tan y^2}{\tan y^2} \right|\\

&= \left| \lim_{y \to 0} \frac{\tan^2 y}{\tan y^2} - 1 \right|\\

&= \left| \lim_{y \to 0} \frac{\tan^2 y}{y^2} \frac{y^2}{\tan y^2} - 1 \right|\\

&= \left| \lim_{y \to 0} \left(\frac{\tan y}{y}\right)^2 \frac{y^2}{\tan y^2} - 1 \right|\\

&= | (1)^2 (1) - 1 |\\

&= 0.

\end{align*}

\]

The penultimate equality follows from the fact that \(\lim_{y \to 0} \tan(y) / y = 1\), which can be deduced by writing \(\tan y = (\sin y) / (\cos y)\) and using the familiar \(\sin(y) / y\) limit identity (or an application of L’Hôpital’s rule). At any rate, we may now deduce from the squeeze theorem that

\[

\lim_{(x, y) \to (0, 0)} \frac{\sin x^2 + \tan^2 y}{x^2 + \tan y^2} = 1.

\]

Whew.
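As a final sanity check, one can sample random points approaching the origin and confirm that the function values approach \(1\). This is purely numerical evidence; the squeeze-theorem argument above is what does the actual work.

```python
import math
import random

def f(x, y):
    # The function from the exam: (sin x^2 + tan^2 y) / (x^2 + tan y^2)
    return (math.sin(x**2) + math.tan(y)**2) / (x**2 + math.tan(y**2))

random.seed(0)
# Sample random points at shrinking distances from (0, 0);
# the printed values should get closer and closer to 1.
for k in range(1, 6):
    r = 10.0 ** (-k)
    x, y = random.uniform(-r, r), random.uniform(-r, r)
    print(f"({x:+.1e}, {y:+.1e}) -> {f(x, y):.10f}")
```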

Recently, the brilliant mathematician John Nash and his wife were killed in a car crash. While Nash was probably most famous for his pioneering work on game theory and his portrayal in Ron Howard’s popular film *A Beautiful Mind*, he also worked in the field of cryptography. Several years ago, the NSA declassified several letters from 1955 between Nash and the NSA wherein Nash describes some ideas for an encryption/decryption scheme. While the NSA was not interested in the particular scheme devised by Nash, it seems that Nash foresaw the importance of computational complexity in the field of cryptography. In the letters, Nash states:

Consider the enciphering process with a finite “key,” operating on binary messages. Specifically, we can assume the process [is] described by a function

\[

y_i = F(\alpha_1, \alpha_2, \ldots, \alpha_r; x_i, x_{i-1}, \ldots, x_{i-n})

\]

where the \(\alpha\)’s, \(x\)’s and \(y\)’s are mod 2 and if \(x_i\) is changed with the other \(x\)’s and \(\alpha\)’s left fixed then \(y_i\) is changed. The \(\alpha\)’s denote the “key” containing \(r\) bits of information. \(n\) is the maximum span of the “memory” of the process… We see immediately that *in principle* the enemy needs very little information to break down the process. Essentially, as soon as \(r\) bits of enciphered message have been transmitted the key is about determined. This is no security, for a practical key should not be too long. But this does not consider how easy or difficult it is for the enemy to make the computation determining the key. If this computation, although always possible in principle, were sufficiently long at best the process could still be secure in a practical sense.

Nash goes on to say that

…a logical way to classify the enciphering process is the way in which the computation length for the computation on the key increases with increasing length of the key… Now my general conjecture is as follows: For almost all sufficiently complex types of enciphering…the mean key computation length increases exponentially with the length of the key.

The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable.

To my knowledge, Nash’s letter is the earliest reference to using computational complexity to achieve practical cryptography. The idea is that while it is theoretically possible to decrypt an encrypted message without the key, doing so requires a prohibitive amount of computational resources. Interestingly, Nash’s letter predates an (in)famous 1956 letter from Kurt Gödel to John von Neumann which is widely credited as being the first reference to the “P vs NP problem.” However, the essential idea of the P vs NP problem is nascent in Nash’s conjecture: there are problems whose solutions can be efficiently verified, but finding such a solution is computationally intractable. Specifically, a message can easily be decrypted if one knows the key, but *finding* the key to decrypt the message is hopelessly difficult.

The P vs NP problem was only formalized 16 years after Nash’s letters by Stephen Cook in his seminal paper, The complexity of theorem-proving procedures. In fact, Nash’s conjecture is strictly stronger than the P vs NP problem: its formulation is more akin to the exponential time hypothesis, which was only formulated in 1999!
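Nash’s brute-force observation (that the enemy can always recover the key by trying all \(2^r\) possibilities) is easy to illustrate. The sketch below uses a toy cipher of my own devising in the shape of Nash’s \(y_i = F(\alpha_1, \ldots, \alpha_r; x_i, \ldots, x_{i-n})\) over mod-2 arithmetic; it is not Nash’s actual scheme, and the names `encipher` and `brute_force` are made up for illustration:

```python
from itertools import product

def encipher(key, msg, n=3):
    # Toy instance of Nash's y_i = F(key; x_i, x_{i-1}, ..., x_{i-n}) mod 2:
    # y_i = x_i + <key, previous n message bits>, so flipping x_i flips y_i.
    out = []
    for i, x in enumerate(msg):
        window = msg[max(0, i - n):i]  # the n-bit "memory" of the process
        out.append((x + sum(k & w for k, w in zip(key, window))) % 2)
    return out

def brute_force(msg, cipher, r):
    # The enemy's computation Nash describes: try all 2^r keys until one
    # reproduces the observed ciphertext.
    for key in product([0, 1], repeat=r):
        if encipher(list(key), msg) == cipher:
            return list(key)

msg = [1, 0, 1, 1, 0, 1, 0, 0]
cipher = encipher([1, 0, 1], msg)
print(brute_force(msg, cipher, r=3))  # finds a consistent key in at most 2^3 trials
```

The enemy’s work here grows like \(2^r\) in the key length; Nash’s conjecture is precisely that for well-designed ciphers, no enemy computation does fundamentally better.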

Concerning Nash’s conjecture, he was certainly aware of the difficulty of its proof:

The nature of this conjecture is such that I cannot *prove* it, even for a special type of cipher. Nor do I expect it to be proven. But this does not destroy its significance. The probability of the truth of the conjecture can be guessed at on the basis of experience with enciphering and deciphering.

Indeed, the P vs NP problem remains among the most notorious open problems in mathematics and theoretical computer science.

There are a couple of questions from the homework that seem to have given people (myself included) a fair amount of trouble. Since I wasn’t able to give satisfactory answers to these questions in office hours, I thought I’d write up clean solutions to the problems, available here. The questions both involve projections \(E : V \to V\) where \(V\) is an inner product space. The problems ask you to prove:

- If \(E\) is idempotent (\(E^2 = E\)) and normal (\(E^* E = E E^*\)), then \(E\) is self-adjoint (\(E = E^*\)).
- If \(E\) satisfies \(\|E v\| \leq \|v\|\) for all \(v \in V\), then \(E\) is an orthogonal projection onto its image.
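Both statements can be sanity-checked numerically. Here is a minimal sketch with `numpy`, assuming real scalars (so the adjoint is just the transpose): an orthogonal projection built from an orthonormal basis exhibits all the hypotheses and conclusions at once, while an oblique projection shows that idempotence alone gives neither normality nor the norm bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal projection onto a random 3-dimensional subspace of R^5:
# E = Q Q^T, where the columns of Q are an orthonormal basis of the subspace.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
E = Q @ Q.T
assert np.allclose(E @ E, E)            # idempotent
assert np.allclose(E.T @ E, E @ E.T)    # normal
assert np.allclose(E, E.T)              # self-adjoint, as the first problem predicts

v = rng.standard_normal(5)
assert np.linalg.norm(E @ v) <= np.linalg.norm(v)  # ||Ev|| <= ||v||

# An oblique (non-orthogonal) projection: idempotent but not normal...
P = np.array([[1.0, 1.0], [0.0, 0.0]])
assert np.allclose(P @ P, P)
assert not np.allclose(P.T @ P, P @ P.T)
# ...and it stretches some vectors, violating ||Pv|| <= ||v||.
w = np.array([1.0, 1.0])
print(np.linalg.norm(P @ w), np.linalg.norm(w))  # 2.0 vs sqrt(2)
```

Of course, checking one random instance proves nothing; the point is only to see the two hypotheses and the two conclusions side by side before writing the proofs.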