
Fluency, Fluidity, Symbology, Complexity, Difficulty. Speed Bumps.

I have some thoughts I’d like to share on why it might be that so many people experience math as so difficult, even though at bottom it’s nothing but logic and rule-following. I can identify two factors that are present in the pieces of math I experience as more difficult than others, the ones that cause me to slow down. I don’t know that this extends beyond my own experience, but I’ve consistently found that more of my life is generalizable than I’d previously believed, so I’ll run with it for now. (It’s not as if I’m claiming to be scientific here.)

1. The first is symbology. I notice that it takes me noticeably longer, and several more mental steps, to understand a piece of information that is presented in math symbols than one that is presented in words. This is true not only for complicated stuff, but for simple stuff too. It takes me less time and effort to understand “a is greater than b” than to understand “a > b” — seriously, even for examples that simple. I have to pause for a brief moment and consciously remember the old elementary school mnemonic: “ok, it eats the bigger one. That means a is bigger than b.” I’m not fluent enough to use the symbols fluidly. I have to mentally translate them into the language in which I am fluent, English. For symbols that I didn’t actually learn in elementary school, it’s harder. For example, I sometimes have to go back and look up whether it’s the brackets that mean a closed interval and the parentheses that mean an open interval, or the other way around. I just have trouble remembering that.

I know this isn’t just me, because my mother reports a very similar experience, not with math, but with directions. For some reason, even though she is a very intelligent woman (she did produce me, you know) she has an extra mental step to remember “left” and “right.” If you’re giving her directions on the fly in the car, and she needs to turn left right now (as opposed to, say, in a block), you’d better say “turn your way” rather than “turn left,” or the split second it takes her to associate the signifier “left” with the signified direction will cause the turn to be missed. Same with me and math symbols.

I’m not sure where this problem comes from. Obviously, self-interest and self-image forbid me from attributing it to general intelligence, so I’ll go with practice. It seems likely to me that people who do math every day for an extended period (like math majors, for example) will eventually develop a natural fluency in the language, in the same way that people who speak a normal language do. Immersion. Has anyone done a comparative study of language acquisition and mathematical acquisition? If there are any psychologists or linguists reading this thing who don’t think it’s been done, talk to me, we’ll collaborate or something.

2. I think the second issue is complexity, expressed in terms of the number of memory registers a given discrete object of learning occupies. On those occasions when I struggle with an item and eventually figure it out, I often find that the thing that kept me from figuring it out in the first place is that I failed to take into account the effect of a piece of information that was presented right up at the start, but that I forgot in the process of working through the rest of the details.

Here’s an example. I recently had a little trouble with a theorem about integrating symmetric functions, which works differently depending on whether they’re even or odd. The proof operated by dividing the integral into two separate pieces, one representing the part of the curve on the left side of the origin and one the part on the right side. The one for the left side used the variable u, which was defined as -x, and the one for the right side used the variable x. The very last step of the proof combined the integral of the left side and the integral of the right side into 2 times the integral of the right side. That drove me crazy, because of the u being defined as -x. I thought it required one to take the variable u, which one had previously defined as -x, and substitute it for x. Clearly, -x isn’t ordinarily substitutable for x, except where x is zero, and it was driving me up the wall.

It finally hit me as I was driving to work one day. “Wait a minute. These are symmetrical functions. The integral of the function with u represents the area under the curve on the left side of the origin, and the integral of the function with x represents the area under the curve on the right side of the origin. Of course they are equal, by definition! That’s why the theorem is limited to symmetrical functions.” (Note to textbook publishers: for heaven’s sake, would you please put a margin note in places like this?) In my struggles with the proof, I’d simply failed to remember and apply the basic piece of information that defined the class of functions to which it applied.
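If you want to see the resolution laid out, here’s a quick check of the even-function half of the theorem (a sketch only; it assumes the sympy library, and the particular function and limits are just my own examples, not the textbook’s):

import sympy as sp

x = sp.symbols('x')
f = x**2 + sp.cos(x)                  # an even function: f(-x) = f(x)
a = 3

left = sp.integrate(f, (x, -a, 0))    # area on the left side of the origin (the "u" piece)
right = sp.integrate(f, (x, 0, a))    # area on the right side of the origin
whole = sp.integrate(f, (x, -a, a))

print(sp.simplify(left - right))      # 0: the two halves are equal by symmetry
print(sp.simplify(whole - 2*right))   # 0: so the whole integral is twice the right half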

Similarly, I’ve noticed that the more variables and distinct techniques work their way into a proof or a method, the more difficult it is. Not because each can’t be easily applied in isolation, but, I suspect, because it simply occupies more memory registers to apply them all at once. It requires more concentration. (On a related note, I’ve noticed that I can’t listen to music while working on the hardest bits.)

Again, I suspect this is related to practice. There are a lot of things that one can do unconsciously after long practice, so that the technique doesn’t have to be in working memory, as such, when one does it. For example, when you first learn to drive, you have to consciously think through every step of a left turn. After you’ve been driving for a while, you no longer do. I suspect the same applies to, e.g., applying the chain rule.

Extrapolating from my experience, I suspect that other people experience these same things as requiring more mental resources. Particularly to the extent these things take mental time to work through, I can see how people fall further and further behind. They sit in a math class and hit one of these bumps, and they don’t get enough time in the class to process it before the instructor moves on. Suddenly they’re behind, and they don’t have the self-awareness to realize that they need to spend time out of class working through the speed bump in order to catch up. So the problem cascades.

Perhaps? Or perhaps I’m just repeating things that math instructors (not to say psychologists) have known for generations? I don’t know. But it’s interesting to me to make sense of my own experience this way, at least.

More on the two-envelope thing

David Chalmers (insanely brilliant philosopher of mind in Australia) has apparently written not just one paper on the two-envelope thing, but two papers on the two-envelope thing! (Cross-reference: How to make people crazy with probability.)

I have it on good authority from my Mathematician Friend (TM) that the answer is that there’s no uniform probability distribution on the real numbers. I suspect he wanted to add “so shut up about it already!!”
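For the record, the crazy-making computation runs: if my envelope holds x, the other holds 2x or x/2 with equal probability, so switching is apparently worth (1/2)(2x) + (1/2)(x/2) = 1.25x, forever. Once the amounts actually come from a proper distribution, the advantage evaporates. Here’s a toy simulation; the uniform prior on the smaller amount is entirely my own arbitrary choice:

import random

def trial():
    small = random.uniform(1, 100)    # smaller amount, drawn from a proper distribution
    pair = [small, 2 * small]
    random.shuffle(pair)
    return pair                       # (what you're holding, what's in the other one)

n = 1_000_000
keep = switch = 0.0
for _ in range(n):
    mine, other = trial()
    keep += mine
    switch += other

# Both averages converge to 1.5 times the mean of the smaller amount,
# so blindly switching gains you nothing.
print(keep / n, switch / n)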

Motivation, AND: Calculus in English — the chain rule

One very good reason to start a math blog is that it motivates you to crack the math book slightly more often, simply to have something to blog about. Because if you don’t blog frequently, people won’t keep coming back, and nobody will add you to their blogrolls, and you will die at age 30, alone, cold, wet, and poor in a back alley in Düsseldorf.

So the other major bit of calculus so far that took me inordinately long to parse in text because of the sheer number of symbols boomeranging about was the chain rule. As before, here is my (no doubt amusingly wrong) attempt to translate it into English. Here be a good online symbol-version. Here’s another. Hopefully one of those will make sense to you. (It mildly squicks me to link the second version, as I seem to have dated one of the people on that project who may have created it. This is, like, math-cest. Ewww. This TMI comes to you courtesy of The Blogosphere, Bringing You Irrelevant Personal Revelations Since 2003(r)).

So, ANYWAY! You have zis function, and you want to take its derivative. Only, it’s a little messy (clearly the root of all evil). Lucky for you, it can be written as one function inside another function. For example, consider the function F(x) = (x² + πx)⁴. This can be seen as one function, g(x) = x² + πx, inside a second function, f(u) = u⁴.

(Isn’t it interesting how whenever mathematicians want to notate a second function for substitution purposes — in this case replacing the g function with a variable to show how it fits into the f function — they name the big function f(u)? Is this some kind of latent resentment at the world because physics gets all the grant money?)

So here’s what you do to get the derivative of this sucker. You take the derivative of the outside function, f(u), and then replace the u with the inside function, without taking the derivative of the inside function. In the example, f'(u) = 4u³ by the power rule, and that is expanded to 4(x²+πx)³. Then you multiply that result by the derivative of the inside (g, or u) function. Which, of course, is g'(x) = 2x+π. So the final derivative of the whole big function is F'(x) = 4(x²+πx)³(2x+π). Shazam!

Yes, I just put pi in this one so I could play with HTML math symbols.
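And if you don’t trust my hand-cranking, here’s a quick check (a sketch; sympy is my own choice of tool, not anything from the textbook):

import sympy as sp

x, u = sp.symbols('x u')
g = x**2 + sp.pi * x                  # the inside function
f = u**4                              # the outside function
F = f.subs(u, g)                      # F(x) = (x^2 + pi*x)^4

by_hand = 4 * g**3 * sp.diff(g, x)    # f'(g(x)) * g'(x), as computed above
print(sp.simplify(sp.diff(F, x) - by_hand))   # 0: the chain-rule answer checks out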

EDIT: Ok wordpress, get the line breaks right. I do not want to hand-write the HTML, but I will if you cross me.

SECOND EDIT: I’m serious here. Stop removing my paragraph tags. Stop it. Now. NOW.

How to make people crazy with probability.

There’re a bunch of probability puzzles that seem to drive people crazy even today. Beyond the classic Monty Hall problem, there’s the puzzle about picking between two envelopes, where one has twice the amount of money as the other. The second of those problems provoked over 300 comments’ worth of debate in its recent appearance at the Volokh Conspiracy. It is also broken down here.

If you ever want to really bust up a party full of people who are intelligent and aggressive, but don’t know probability, try that one. It’s guaranteed to turn a group of otherwise normal people into snarling math-maniacs.
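The Monty Hall fight, at least, can be settled empirically. A sketch of a simulation (standard library only; the door-picking details are my own):

import random

def play(switch):
    doors = [0, 0, 1]                 # 1 marks the car
    random.shuffle(doors)
    pick = random.randrange(3)
    # The host opens a goat door that isn't your pick:
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return doors[pick]

n = 100_000
print(sum(play(True) for _ in range(n)) / n)    # about 2/3: switching wins
print(sum(play(False) for _ in range(n)) / n)   # about 1/3: staying loses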

A cool article from a reader

The best part, I think, of having a blog is that people read the blog. And then they read other cool things. And then they send you those cool things. And then you get to post them. And then more people read the blog. And they send more cool things. Repeat, ad infinitum, until all the cool things in the entire universe flow directly into your e-mail inbox.

To start off this victorious cycle, reader Matt sends in this article, which proves that the corporate media is good for something after all. Excerpts to follow:

The poet Jan Zwicky once wrote, “Those who think metaphorically are enabled to think truly because the shape of their thinking echoes the shape of the world.”

Zwicky, whose day job includes teaching philosophy at the University of Victoria in British Columbia and authoring books of lyric philosophy such as Wisdom & Metaphor, from which the above quotation was taken, has lately directed considerable attention to contemplating the intersection of “Mathematical Analogy and Metaphorical Insight,” giving numerous talks on the subject, including one scheduled at the European Graduate School in Switzerland next week.

Casual inquiry reveals that metaphor, and its more common cousin analogy, are tools that are just as important to scientists investigating truths of the physical world as they are to poets explaining existential conundrums through verse. A scientist, one might liken, is an empirical poet; and reciprocally, a poet is a scientist of more imaginative and creative hypotheses.

* * *

“Mathematicians don’t talk a lot about analogy in mathematics,” says Simon Kochen, Henry Burchard Fine professor of mathematics at Princeton. “Not because it isn’t there, but just the opposite. It permeates all mathematics. It is pervasive. It’s a powerful engine for new mathematical advances.”

According to Kochen, the modern mathematical method is that of axiomatics, rooted in abstraction and analogy. Indeed, mathematics has been called “the science of analogy.”

“Mathematics is often called abstract,” Kochen says. “People usually mean that it’s not concrete, it’s about abstract objects. But it is abstract in another related way. The whole mathematical method is to abstract from particular situations that might be analogous or similar (to another situation). That is the method of analogy.”

This method originated with the Greeks, with the axiomatic method applied in geometry. It entailed abstracting from situations in the real world, such as farming, and deriving mathematical principles that were put to use elsewhere. Eratosthenes used geometry to measure the circumference of the Earth in the third century BC, and with impressive accuracy.

In the lexicon of cognitive science, this process of transferring knowledge from a known to unknown is called “mapping” from the “source” to the “target.” Keith Holyoak, a professor of cognitive psychology at UCLA, has dedicated much of his work to parsing this process. He discussed it in a recent essay, “Analogy,” published last year in The Cambridge Handbook of Thinking and Reasoning.

“The source,” Holyoak says, providing a synopsis, “is what you know already — familiar and well understood. The target is the new thing, the problem you’re working on or the new theory you are trying to develop. But the first big step in analogy is actually finding a source that is worth using at all. A lot of our research showed that that is the hard step. The big creative insight is figuring out what is it that’s analogous to this problem. Which of course depends on the person actually knowing such a thing, but also being able to find it in memory when it may not be that obviously related with any kind of superficial features.”

In an earlier book, Mental Leaps: Analogy in Creative Thought, Holyoak and co-author Paul Thagard, a professor of philosophy and director of the Cognitive Science Program at the University of Waterloo, argued that the cognitive mechanics underlying analogy and abstraction is what sets humans apart from all the other species, even the great apes.

Cool beans. Although I’m a little surprised that the author didn’t mention the classic math/science metaphor, Einstein’s beam of light. So this is where you, the reader, post your favorite math/science metaphors in the comments. Or, if you disagree with the article, comment and say why. You know the drill.

Uh-oh, here’s another “paradox” coming up

Will this be another 300 comment post? At Marginal Revolution, there’s a discussion of the Tullock Lottery. The summary from wikipedia is as follows:

The setup involves an auctioneer who volunteers to auction off a dollar bill with the following rule: the dollar goes to the highest bidder, who pays the amount he bid. The second-highest bidder also must pay the highest amount that he bid, but gets nothing in return. Suppose that the game begins with one of the players bidding 1 cent, hoping to make a 99 cent profit. He will quickly be outbid by another player bidding 2 cents, as a 98 cent profit is still desirable. However, a problem becomes evident as soon as the bidding reaches 99 cents. Supposing that the other player had bid 98 cents, they now have the choice of losing the 98 cents or bidding a dollar even, which would make their profit zero. After that, the original player has a choice of either losing 99 cents or bidding $1.01, and only losing one cent. After this point the two players continue to bid the value up well beyond the dollar, and neither stands to profit.

Here’s my two cents. I don’t think it’s really a paradox. Some chap named Keith says the following:

If there is no equilibrium, avoiding the game altogether is not an equilibrium, either. After all, if nobody bids at all, then you should bid a penny and win.

It’s a paradox. Playing is a mistake. If nobody plays, then not playing is a mistake, too.

But that can’t be right. If we assume complete information and rationality on the part of the bidders, any given bidder A will know the extent of the universe of other potential bidders. So A’s utility calculation while considering her first bid will be as follows. If she fails to bid, her expected gain is zero. If she does bid, then she must assume that everyone else has exactly the same utility calculation, and conclude that her expected gain is minus all her money.

Can someone who knows more game theory than I chime in here? (Is anyone actually reading this blog yet?) I don’t think I understand how nonparticipation is not an equilibrium. The claim that there is no equilibrium seems to rely, sub rosa, on each actor’s not having access to the utility calculations undertaken by the others.

Indeed, for once it looks like wikipedia gets it right. From the wikipedia page again:

The actual expected value of bidding again is not zero cents, due to the unterminated nature of the game; the value of the bid is actually zero cents multiplied by the probability of the other player giving up at that point, added to the value of losing two cents multiplied by the probability of the other player not giving up at that point, in an infinite series with unbounded loss.

Exactly! The operative phrase there is “multiplied by the probability of the other player giving up at that point.” Which is zero, for the reasons given. If A has no reason to give up, then neither does the other player. The conclusion is that the expected value of bidding is negative at all points. I suspect the experimental results to the contrary come from subjects irrationally failing to equate the opponent’s expected actions with their own, and concluding that their opponent will give up at some point. I daresay it’s necessary to the bid decision to conclude that your opponent will give in sooner than you will.
Although this is all uninformed speculation. Someone clue me in? Please?
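In the meantime, here’s a toy simulation of the escalation itself. The “keep bidding with fixed probability p” strategy is purely my own invention, not anything from the game theory literature; it just makes the unbounded-loss series visible:

import random

def auction(p, prize=100):            # everything in cents
    bids = [0, 0]                     # each player's standing bid
    turn, bid = 0, 1
    while random.random() < p:        # current player chooses to (re)bid
        bids[turn] = bid
        turn, bid = 1 - turn, bid + 1
    if bids == [0, 0]:
        return [0, 0]                 # nobody bid at all
    winner = 1 - turn                 # the last player to bid takes the prize
    net = [-b for b in bids]          # both top bidders pay what they bid
    net[winner] += prize
    return net

n = 100_000
for p in (0.5, 0.9, 0.99):
    avg = sum(sum(auction(p)) for _ in range(n)) / n
    print(p, avg)                     # joint profit turns negative as p rises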

Math Explained in English: Indefinite Integration by Substitution

This is my first attempt to put a symbol-bound bit of math into English, for ease of my own comprehension and the amusement of others. It took me something like an hour to parse out the explanation in the textbook, because it relied on a dense bunch of nested functions, variables flying about left and right, and so forth. So I’m rewriting it to cut the comprehension time down to about ten minutes.
First, you should see the symbol-bound version. This guy’s web page is incredibly good on the subject.

Here’s my summary.

So, you’ve got this function you need to integrate. Suppose it’s kind of messy, like say [imagine an integration symbol here] 20x(x²+5)² dx. Well, if you can express that as a function (f) of a second function (g), multiplied by the derivative of that second function (or a constant multiple of that derivative), then you can just substitute the variable “u” for that second function, drop the multiplication by the derivative out of your consciousness (changing dx to du to represent it, and keeping any constant multiple), and integrate the resulting much simpler function.

Sooo… in the example I gave, the g function (“u”) would be x²+5. The f function is then u² (not, shockingly, the band), because that meets the condition described above (the second function being buried within the first function). The derivative of the g function is 2x, which is a tenth of the 20x we conveniently have sitting around collecting dust. So we can express the new function to be integrated as: [integral symbol] u² (10)du, which, by the constant multiple rule, becomes 10 [integral symbol] u² du, for which the integral is easy: 10u³/3 + C. Swap x²+5 back in for u, and you get the final answer: 10(x²+5)³/3 + C. Abracadabra!

Take that, textbook!
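Here’s a quick self-check, too (a sketch; sympy is my own tool choice, not the textbook’s): differentiating the answer should hand back the original integrand.

import sympy as sp

x = sp.symbols('x')
integrand = 20*x * (x**2 + 5)**2
answer = 10 * (x**2 + 5)**3 / 3       # 10u^3/3 with u = x^2 + 5 substituted back
print(sp.simplify(sp.diff(answer, x) - integrand))    # 0: the answer checks out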

Math people: did I screw this up? Time to chime in…
Also, who wants to find me a text-sized gif or jpg integral symbol clipart?