Monthly Archives: November 2012

cooking

Pizza!

For the first time in a while, I made pizza from scratch tonight. I used my usual recipe for 4 pizzas:

  • 20 oz flour (tonight I used 14 oz white and 6 oz whole wheat)
  • 14 oz water at about 110 F
  • 1 packet of yeast
  • 3/4 tsp salt
  • 1/2 tsp dried rosemary
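
That works out to a dough at 70% hydration (14 oz of water to 20 oz of flour by weight) in baker’s percentage terms.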

As usual, I mix all of the dry ingredients (including the yeast) and add the water, mixing with a spoon until well combined. Then I knead the dough on a floured surface for a solid 8 minutes until it has a nice elasticity. After letting the dough rest for a little while, I cut it into 4 pieces and form the pieces into balls. Then I set the dough aside to ferment while I start the sauce.

I use a very simple sauce recipe:

  • one 26 oz box of crushed tomatoes
  • a few tablespoons of olive oil
  • 6 cloves of garlic, quartered
  • 2 dried red chilies, crushed
  • salt and pepper to taste
  • any fresh herbs I need to use (basil and oregano are nice)

This makes enough sauce for 6 or more pizzas, but it freezes well. I add all of the ingredients to a sauce pan and let it simmer (covered) for a couple hours, then blend it with an immersion blender.

Once the sauce is simmering and the dough has doubled in bulk, I form the dough balls into disks about a half inch thick and let them sit for a while. Now is a good time to start preheating the oven to the highest possible temperature setting (in my case 500 F). Following a hint from Peter Reinhart (author of this bible of bread baking), I decided to put my pizza stone directly on the bottom of the oven instead of on a cooking rack. This is also a good time to disconnect any over-sensitive smoke detectors.

Once the dough rounds puff up a bit, I form them into pizza shapes while I wait for the oven to heat up. Once everything is preheated and ready to go, I gently transfer the first dough round to a peel dusted with corn meal. I top it with a few dollops of sauce, a healthy dose of shredded mozzarella and any other toppings that need to be cooked. Then into the oven for not more than seven or eight minutes.

The result is this:

Before today, the biggest problem I had with the pizza was that the crust on the bottom wouldn’t get that almost-burnt caramelization you see on pizza from a wood burning oven. But placing the pizza stone directly on the bottom of my oven seemed to do the trick:

Nice and crispy but not quite burnt! Peter Reinhart has never steered me wrong… Right after coming out of the oven, I put a couple handfuls of arugula dressed in olive oil and balsamic vinegar on the pizza, followed by the obligatory grated parmesan cheese:

The verdict: delicious. Never again is my pizza stone sitting on a rack above the bottom of the oven. At least not for pizza.

adventures

View from the Edgewater

[Photo: 20121125-091055.jpg]

This view from my hotel room combined with the ocean scent and the sounds of fog horns and seagulls is really making me nostalgic for the Pacific Northwest.

computer science math

Longest Increasing Subsequence


Consider a sequence \(S = (s_1, s_2, \ldots, s_n)\) of elements from the set \([m] = \{1, 2, \ldots, m\}\). A subsequence \(\sigma\) of \(S\) is a sequence

$$\sigma = (s_{i_1}, s_{i_2}, \ldots, s_{i_{n'}})$$

where \(i_1 < i_2 < \cdots < i_{n'}\). The subsequence is (weakly) increasing if

$$s_{i_1} \leq s_{i_2} \leq \cdots \leq s_{i_{n'}}.$$

Similarly, the subsequence is strictly increasing if we replace the inequalities above with strict inequalities. The (length of the) longest increasing subsequence (LIS) of the sequence \(S\), denoted \(\ell(S)\), is the length of the longest subsequence of \(S\) that is increasing. That is, \(\ell(S)\) is the largest number \(n'\) such that there exist indices \(i_1 < i_2 < \cdots < i_{n'}\) with \(s_{i_1} \leq s_{i_2} \leq \cdots \leq s_{i_{n'}}\).

The LIS gives a natural measure of the “sortedness” of a sequence. If \(\ell(S) = n = |S|\), then the sequence is sorted (in increasing order). In general, \(n - \ell(S)\) is the number of deletions/insertions needed to sort the sequence. To see that \(n - \ell(S)\) deletions/insertions suffice, consider the following sorting algorithm: fix an increasing subsequence \(\sigma\) of length \(\ell(S)\). For each \(s \in S \setminus \sigma\), delete \(s\) from the sequence and insert it back into the sequence so that it is consistent with \(\sigma\). That is, find \(j\) such that \(s_{i_j} \leq s \leq s_{i_{j+1}}\) and insert \(s\) between \(s_{i_j}\) and \(s_{i_{j+1}}\) (or at the beginning or end of the sequence if \(s\) is smaller or larger than every element of \(\sigma\)). The resulting sequence \(S'\) satisfies \(\ell(S') = \ell(S) + 1\), as the subsequence \(\sigma\) with \(s\) inserted is an increasing subsequence of length \(\ell(S) + 1\). Repeating this process \(n - \ell(S)\) times results in a sorted sequence. On the other hand, \(n - \ell(S)\) deletions/insertions are necessary because removing and re-inserting a single element can change the length of the longest increasing subsequence by at most \(1\).
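
For example, for \(S = (2, 3, 1, 4)\) we have \(\ell(S) = 3\), witnessed by \(\sigma = (2, 3, 4)\), and a single move (deleting the \(1\) and reinserting it at the front) produces the sorted sequence \((1, 2, 3, 4)\), matching \(n - \ell(S) = 1\).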

Given a sequence \(S\), \(\ell(S)\) can be efficiently computed by patience sorting. The algorithm was devised as a way to sort a deck of cards, so imagine that the sequence \(S\) is represented by an ordered deck of cards; i.e., \(s_1\) is the top card on the deck, \(s_2\) is second, and so on. When we are dealt the first card, we start a pile with that card face up. This first pile is the left-most pile — as we add new piles they will all be added to the right of any existing piles. In all piles, only the top card is visible. When we are shown subsequent cards, we place them (face up) according to the following rules:

  1. A card cannot be placed on top of a card of smaller or equal value;
  2. When a card is dealt, it is placed on top of the left-most pile whose top card is larger than the card that was dealt;
  3. If no such pile exists (that is, the dealt card is larger than the top cards of all piles) then the card is placed in a new pile to the right of the existing piles.

It is easily verified that the piles have the following properties. First, each pile is increasing from top to bottom: the smallest card is on top, followed by the second smallest, and so on. Second, the top cards of the piles form an increasing sequence from left to right. Notice that the top cards do not necessarily form an increasing subsequence of the original sequence, since the top cards of the piles need not have been dealt in order from left to right.
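
Since the top cards increase from left to right, the pile on which each newly dealt card lands can be found by binary search, giving an \(O(n \log n)\) implementation. Here is a minimal Python sketch of the procedure, tracking only the top card of each pile and returning the number of piles (which, as argued below, equals \(\ell(S)\)):

```python
import bisect

def patience_lis_length(seq):
    """Number of piles formed by patience sorting on seq.

    Only the top card of each pile is kept. Since the tops increase from
    left to right, rule 2 (find the left-most pile whose top card is
    strictly larger than the dealt card) is a binary search.
    """
    tops = []                              # tops[j] = top card of pile j
    for card in seq:
        j = bisect.bisect_right(tops, card)
        if j == len(tops):
            tops.append(card)              # rule 3: start a new pile on the right
        else:
            tops[j] = card                 # rule 2: place the card on pile j
    return len(tops)

print(patience_lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 4, e.g. (1, 4, 5, 9)
```

Replacing `bisect_right` with `bisect_left` in the same sketch computes the length of the longest strictly increasing subsequence instead.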

Once all of the cards have been dealt, we claim that the number of piles is the length of the longest increasing subsequence \(\ell(S)\). Let \(p\) be the number of piles at the end of the algorithm. To show that \(p = \ell(S)\), we will prove the following facts:

  1. If \(S\) contains an increasing subsequence of length \(k\), then there will be at least \(k\) piles at the end of the algorithm (hence \(p \geq \ell(S)\)).
  2. \(S\) contains an increasing subsequence of length \(p\) (hence \(p \leq \ell(S)\)).

Combining 1 and 2 gives \(p = \ell(S)\) as desired.

To prove 1, suppose \(s_{i_1} \leq s_{i_2} \leq \cdots \leq s_{i_k}\) is an increasing subsequence. When we are dealt \(s_{i_1}\), it gets put on top of some pile. Since only smaller cards can subsequently be put on top of \(s_{i_1}\), when we are dealt \(s_{i_2} \geq s_{i_1}\), it must be placed on a pile to the right of the pile containing \(s_{i_1}\). Similarly, each \(s_{i_{j+1}}\) must be put in a pile to the right of \(s_{i_j}\), so that at the end of the algorithm the elements \(s_{i_1}, s_{i_2}, \ldots, s_{i_k}\) reside in different piles. In particular, there must be at least \(k\) piles.

To prove 2, we argue by induction on \(p\). If \(p = 1\), then \(S\) is nonempty, so it trivially contains an increasing subsequence of length \(1\): the top card of the single pile is by itself an increasing subsequence of length \(1\). For the inductive step, let \(s_{i_p}\) be the top card of the right-most pile. We claim that \(S\) contains an increasing subsequence of length \(p\) ending with \(s_{i_p}\). To see this, recall that in order for \(s_{i_p}\) to have been placed in the \(p\)-th pile (i.e., the right-most pile), there must have been a card of value at most \(s_{i_p}\) on top of the \((p-1)\)-st pile when \(s_{i_p}\) was dealt. Call this card \(s_{i_{p-1}}\). By induction, when \(s_{i_{p-1}}\) was dealt, there was an increasing subsequence of length \(p-1\) ending with \(s_{i_{p-1}}\). Since \(s_{i_p} \geq s_{i_{p-1}}\) and \(s_{i_p}\) was dealt after \(s_{i_{p-1}}\), appending \(s_{i_p}\) to that increasing subsequence yields an increasing subsequence of length \(p\).
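
This argument is effectively constructive: if each dealt card also records the card that was on top of the pile immediately to its left, an actual longest increasing subsequence can be read off by following those back-pointers from the top of the right-most pile. A minimal Python sketch:

```python
import bisect

def patience_lis(seq):
    """Recover a longest weakly increasing subsequence (not just its length).

    Each placed card remembers the card that was on top of the pile to its
    left when it was dealt; walking those back-pointers from the top of the
    right-most pile gives a subsequence whose length equals the pile count.
    """
    top_vals = []                 # top card of each pile (increasing left to right)
    top_idx = []                  # index in seq of each pile's top card
    prev = [None] * len(seq)      # back-pointer for each card
    for i, card in enumerate(seq):
        j = bisect.bisect_right(top_vals, card)
        if j > 0:
            prev[i] = top_idx[j - 1]      # top of the pile to the left at deal time
        if j == len(top_vals):
            top_vals.append(card)
            top_idx.append(i)
        else:
            top_vals[j] = card
            top_idx[j] = i
    lis, i = [], (top_idx[-1] if top_idx else None)
    while i is not None:                  # trace back from the right-most pile
        lis.append(seq[i])
        i = prev[i]
    return lis[::-1]

print(patience_lis([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 6]
```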

Patience sorting runs in nearly linear time and uses nearly linear space. It turns out that for exact computation of the (length of the) LIS, patience sorting is essentially time and space optimal. Further, patience sorting is a streaming algorithm: the sequence is processed as the cards are dealt, and we do not require oracle access to the sequence. If we only wish to approximate \(\ell(S)\), we can do better than linear-time patience sorting.

If we grant oracle access to the sequence \(S\), there is a randomized algorithm that runs in time \(\log^c n\) that gives an approximation of \(\ell(S)\) up to an additive error of \(\delta n\) for any fixed \(\delta > 0\). The error of \(\delta n\) likely could not be improved for such a fast algorithm, as it seems that no algorithm could distinguish \(S\) with \(\ell(S) = 1\) from \(\ell(S) = n^{1 - \epsilon}\) for \(\epsilon > 0\) using only \(\log^c n\) samples from \(S\).

In the context of streaming algorithms, a modification of patience sorting using \(O(\sqrt{n})\) space gives a \((1 + \epsilon)\) multiplicative approximation of \(\ell(S)\). The algorithm is a deterministic, memory-limited version of patience sorting. Further, it was proven that \(O(\sqrt{n})\) space is optimal for deterministic streaming approximation algorithms. However, the proof of the lower bound for space complexity does not apply to randomized streaming algorithms. It seems plausible that a streaming algorithm using polylogarithmic space could give a good approximation of \(\ell(S)\), but to my knowledge no one has yet devised such an algorithm (or shown a lower bound to the contrary). So this is what I have been working on lately.

brewing

Brewtarget

For the last couple batches of beer that I’ve brewed (since starting to go all grain), I’ve been using the Brewtarget software to keep my recipes in order. Brewtarget keeps a library of recipes for home brewing. Additionally, it handles all of the calculations (target gravity, bitterness, ABV, etc) and even generates instructions for brew day. I’ve found it extremely helpful. Plus it is open source, which is always a bonus.

brewing

Finnegan Stout (v 0.1)

This past weekend, I brewed another batch of beer so that I would have something to share during the winter holidays. By popular demand, I decided to brew a stout. I wanted to experiment with my own recipe, so I did some research on stouts. By far the most useful resource was the book Designing Great Beers by Ray Daniels. The book goes into great depth about many popular styles of beer and gives a wonderful historical perspective on the styles. It doesn’t contain explicit recipes, but rather serves as a guide to develop one’s own recipes. I highly recommend this book.

I wanted a stout that was a hybrid between a classic dry stout and the bolder American stouts being brewed on the West Coast these days (Bear Republic’s Big Bear Black Stout is my favorite of these right now). I’m hoping that the finished product will be a slightly drier, lower-gravity version of the American stouts I like so much.

Just about every stout uses pale malt for its base and roasted barley, which gives the style its characteristic color and rich flavor. In addition to these grains, I decided to use Caramel 60 malt to add some caramel and nutty flavors, black malt for color and burnt bitterness, and flaked barley for body and head retention. Typical dry stouts use only a single bittering hop addition, but I wanted some hop flavor and aroma in addition to the bitterness. I decided to use the hops in Big Bear Black Stout: Centennial for bittering and Cascade for flavor. Hopefully these will give the beer a decidedly West Coast flavor profile.

Once I decided on ingredients and ratios, the only thing that remained was choosing a name. I thought naming it after my dog Finnegan was apropos: his pedigree is English, but he is a pure Californian mutt. So Finnegan Stout it is. Here is the recipe for my 5.5 gallon batch:

Grain Bill

  • 8 lb pale malt
  • 1 lb Caramel 60
  • 0.5 lb flaked barley
  • 0.5 lb roasted barley
  • 0.5 lb black malt

Hops

  • 1 oz Centennial (10% AA) 60 minutes
  • 1 oz Cascade (6% AA) 15 minutes

Yeast

I performed a single temperature infusion at 152 F for 60 minutes, followed by a batch sparge at 165 F, for a total of 6.5 gallons in the brew pot at a gravity of 1.040. After the hour-long boil, I had around 5.5 gallons of wort at a gravity of 1.045. So my extraction hit my targets assuming 70% efficiency! I am expecting a final gravity of around 1.010 for a final ABV of about 4.5% and a bitterness of 40 IBU. I’m planning on letting the stout ferment in the primary for three weeks, then straight into the bottle. It should be ready to drink just in time for winter break.
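
As a rough sanity check on the 70% efficiency figure, here is a quick calculation using typical potential-extract values in points per pound per gallon (the ppg numbers below are ballpark figures from published tables, not measurements of mine):

```python
# Rough mash-efficiency check. The points-per-pound-per-gallon (ppg) figures
# are typical published values for each grain, not measured numbers.
grain_bill = [
    (8.0, 37),   # pale malt
    (1.0, 34),   # Caramel 60
    (0.5, 32),   # flaked barley
    (0.5, 25),   # roasted barley
    (0.5, 25),   # black malt
]

max_points = sum(lbs * ppg for lbs, ppg in grain_bill)   # ~371 total gravity points
efficiency = 0.70
volume_gal = 6.5                                         # pre-boil volume

gravity = 1 + efficiency * max_points / volume_gal / 1000
print(round(gravity, 3))   # ~1.040, matching the measured pre-boil gravity
```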

Update On Friday, I bottled the batch of Finnegan Stout. It yielded 49 bottles. I measured a final gravity of 1.011, corresponding to an ABV of 4.4%. I snuck a bit of a taste, and though tepid and flat, I thought it tasted pretty promising. It actually smelled a lot like Rogue’s Chocolate Stout. If it ends up being anything like that, I will be very pleased.
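
For reference, these numbers are within about a tenth of a percent of the common rule of thumb \(\mathrm{ABV} \approx (\mathrm{OG} - \mathrm{FG}) \times 131\); Brewtarget presumably uses a more detailed formula, which accounts for the small difference:

```python
def abv_estimate(og, fg):
    """Rule-of-thumb ABV estimate from original and final gravity."""
    return (og - fg) * 131

print(round(abv_estimate(1.045, 1.010), 1))  # expected FG -> ~4.6 %
print(round(abv_estimate(1.045, 1.011), 1))  # measured FG -> ~4.5 %
```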

teaching

Midterm II Review

Here are some examples that I will go over in the midterm review. If you notice any typos or errors, feel free to let me know in the comments. You can add math symbols to your comments using LaTeX syntax surrounded by dollar signs. For example, the line

D\lim_{x \to \infty} f(x) = \frac{1}{2}D

with the D’s replaced by dollar signs becomes

$\lim_{x \to \infty} f(x) = \frac{1}{2}$.

brewing musings

Folklore, Brewing, and Science

I recently came across a piece of brewing folklore that I found quite interesting. Allegedly, when Norse families would brew beer, they had special sticks that they used to stir the wort. They ascribed magical powers to the stirring sticks, as using the family’s stirring stick would ensure that the beer would turn out well. The stirring sticks were thus treated as family heirlooms.

With the hindsight of a couple hundred years of progress in microbiology, we have a scientific explanation for what was going on: the stirring sticks harbored a colony of yeast that inoculated the wort, ensuring that a desirable fermentation occurred.  This method of action, however, was (presumably) completely unknown to the Norsemen. Nonetheless, their method worked. They identified the pattern that using the same stirring stick over and over yielded better beer, developed a theory for the method of action (magic), and as a result had consistent beer.

Although I can’t find a good reference for this piece of folklore (see here, here, and here), I think the story is an interesting parable for scientific discovery. Brewers knew how to successfully make beer centuries before a “scientific” explanation of their methods was discovered. But surprisingly little has changed in how beer is actually brewed. Our understanding of the biology and chemistry of brewing has allowed us more control over the finished product, yet the basics of brewing are essentially the techniques discovered over two millennia of trial and error.

In a sense, the scientific method is little more than an explicit protocol for how to use the trial-and-error method effectively. We observe some physical phenomena, guess at a method of action (or theory), deduce from that theory some predicted phenomena, then check whether we observe the predicted phenomena. By iterating this process and refining our hypotheses, we tend to end up with theories that explain a lot of observable phenomena. In this sense, the scientific method is extraordinarily pragmatic.

The scientific method is, I believe, limited in that it does not tell us why nature behaves as it does. There is no reason to believe that a formal theory has any relationship to reality except by analogy — a theoretical prediction corresponds to an experimental measurement. The relative merits of competing theories must be judged on a pragmatic basis. The modern science of zymurgy offers a theoretical framework which gives a brewer great control over their beer. However, whether one ascribes a successful brew to magic, cavorting wee beasties or complex chemical reactions, the methods of brewing don’t seem to be altered in a fundamental way. It seems to me that in many ways, science has changed the way we conceive of brewing more than the way we brew.

Update I realized after reading over this post that it came off as being too critical of science. That wasn’t really my intention in writing the post. Instead, I just wanted to point out that humans tend to be pretty good at recognizing patterns even if we have no clue what the mechanism behind the pattern is. The advantage of the scientific method is that it gives a hypothetical mechanism that accounts for the observed pattern. The hypothetical mechanism may also elucidate other patterns that were not so apparent on their own. In the context of brewing, the mechanism is fermentation by yeast. So in order to ensure a good brew, I should do everything in my power to keep the (good) yeast happy while trying to stifle their competition. Thus science suggests I should sanitize my equipment before putting chilled wort into it, I should make sure the yeast have the nutrients they need to thrive, and I should inoculate the wort with a strain of yeast I know will give good results. My only recourse if I adhere to the “magic stick” mechanism of brewing is to stir and pray. Certainly I would never suggest this as the most viable way to get a consistent brew.

I think the point I was trying to make towards the end of the post is that science only gives a hypothetical mechanism of action that is, a priori, no better than any other explanation of the pattern. The crucial feature of a scientific hypothesis is that it can be disproved: if I hypothesize that yeast turn wort into beer, then you could disprove my hypothesis by showing that some wort turned to beer and throughout the process no yeast were present.

Nonetheless, I want to draw a distinction between theorized mechanisms of action and what might be called physical mechanisms (i.e., the way nature actually works). The difference between these views is that of cause and effect, correlation and causality. Nature doesn’t obey Newton’s (or anyone else’s) laws because these laws accurately describe the way nature works; rather Newton’s laws exist in their present formulation because they accurately predict patterns we see in nature. But that doesn’t mean that competing theories could not be just as successful or that the formalism in which Newton’s laws are expressed has any bearing on why nature behaves as she does. The reason that I think it may be worth belaboring this distinction is that if we only view nature through the lens of some formalism (like classical mechanics, quantum mechanics or general relativity) we potentially miss out on other completely different ways of describing nature that could ultimately be fruitful. I think this idea is implicit (if not explicit) in, for example, Wolfram’s tome A New Kind of Science, although that book is mired in controversy for other reasons.

Ultimately, my point is this: just because a physical theory has been immensely successful in predicting physical phenomena does not mean that it is “correct” in any philosophical or metaphysical sense. More pragmatically, two competing theories can be theoretically incompatible with one another yet still give good (perhaps complementary) predictions about how nature behaves, so it may be worthwhile to support seemingly contradictory theories. To end with a trite cliche, perhaps it is best if we don’t put all of our eggs into one (theoretical) basket.