An exercise every compiler construction student is given is to construct a grammar for simple mathematical expressions. This exercise is particularly easy with ANTLR 4, a lexer and parser generator that can accept grammars that many other compiler tools would reject. Here is an ANTLR 4 grammar for simple math expressions.
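A sketch of such a grammar (rule and token names are illustrative; recent ANTLR 4 releases write the associativity annotation at the head of the alternative, while older releases attach it to the `'^'` token):

```antlr
grammar Expr;

expr : <assoc=right> expr '^' expr   // exponentiation binds tightest
     | ('+'|'-') expr                // unary sign
     | expr ('*'|'/') expr           // multiplication and division
     | expr ('+'|'-') expr           // addition and subtraction
     | '(' expr ')'
     | ATOM
     ;

ATOM : [0-9]+ | [a-zA-Z]+ ;          // numbers and variables
WS   : [ \t\r\n]+ -> skip ;
```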

For the input 4*x^2 - 3*x + 2, this grammar gives the following parse tree.
ANTLR conveniently handles the order of operations for us by giving precedence to the production alternative that comes first: the earlier the alternative, the higher the precedence of the operator. Also, the right associativity of exponentiation is taken care of by the directive <assoc=right>. Piece of cake. And it’s easy to expand this grammar to include other operators, like the postfix factorial (!) for example.
Mathematicians don’t like to write an explicit symbol for multiplication unless they’re compelled to. We prefer to write 4x^2 - 3x + 2 instead of 4*x^2 - 3*x + 2. We might naively modify our grammar as follows.
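The naive modification might look like this sketch, where only the multiplication alternative changes:

```antlr
grammar Expr;

expr : <assoc=right> expr '^' expr
     | ('+'|'-') expr
     | expr '*'? expr                // '*' is now optional
     | expr '/' expr
     | expr ('+'|'-') expr
     | '(' expr ')'
     | ATOM
     ;

ATOM : [0-9]+ | [a-zA-Z]+ ;
WS   : [ \t\r\n]+ -> skip ;
```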

The ? after the ‘*’ indicates that the * symbol is optional: it can be omitted. But this doesn’t work as expected! This grammar parses 7 + 3 as 7*(+3), which is not what we want. Splitting implicit multiplication off as a separate alternative appears to solve this problem.
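A sketch of that revision, with adjacency as its own alternative:

```antlr
grammar Expr;

expr : <assoc=right> expr '^' expr
     | ('+'|'-') expr
     | expr ('*'|'/') expr
     | expr expr                     // implicit multiplication by adjacency
     | expr ('+'|'-') expr
     | '(' expr ')'
     | ATOM
     ;

ATOM : [0-9]+ | [a-zA-Z]+ ;
WS   : [ \t\r\n]+ -> skip ;
```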

But now we have a new problem: implicit multiplication does not respect order of operations! This grammar generates the following parse tree for the input 4x+3.
What’s going on? Precedence only applies to tokens, e.g. operator symbols, and our implicit multiplication alternative has no operator. (This wrong solution is common. You’ll see people on the web wondering why adding implicit multiplication doesn’t respect operator precedence.)
But we can enforce order of operations by having a hierarchy of production rules like the following general pattern [1].
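The pattern, sketched for ordinary arithmetic: each precedence level gets its own rule, and each rule refers to the next-tighter level.

```antlr
// Lower levels bind looser; the tightest-binding constructs sit at the bottom.
expression : expression ('+'|'-') term
           | term
           ;
term       : term ('*'|'/') factor
           | factor
           ;
factor     : '(' expression ')'
           | ATOM
           ;
```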

So instead of relying on ANTLR to determine precedence according to the order of each alternative, we bake it into the grammar itself. We might rewrite our math expression grammar as follows [2].
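One hedged reconstruction of such a grammar (the rule names are mine):

```antlr
grammar Expr;

expr     : expr ('+'|'-') mult      // loosest: addition, subtraction
         | mult
         ;

mult     : mult ('*'|'/') implicit  // explicit multiplication, division
         | implicit
         ;

implicit : implicit factor          // implicit multiplication by adjacency
         | factor
         ;

factor   : <assoc=right> factor '^' factor   // exponentiation lives here,
         | ('+'|'-') factor                  // with ANTLR's alternative
         | '(' expr ')'                      // ordering ranking '^' above
         | ATOM                              // the adjacency product
         ;

ATOM : [0-9]+ | [a-zA-Z]+ ;
WS   : [ \t\r\n]+ -> skip ;
```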

Notice I awkwardly included exponentiation in the factor production, relying on ANTLR to implicitly determine its precedence over implicit multiplication. This suggests that the above grammar is overcomplicated; that is, we can collapse much of this hierarchy and let ANTLR do some of the work. Here is our final grammar.
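A collapsed grammar along those lines might read:

```antlr
grammar Expr;

expr     : expr ('*'|'/') expr             // ANTLR's alternative ordering
         | expr ('+'|'-') expr             // still handles these two levels
         | implicit
         ;

implicit : implicit factor                 // implicit multiplication
         | factor
         ;

factor   : <assoc=right> factor '^' factor // right-associative power
         | '(' expr ')'
         | ATOM
         ;

ATOM : [0-9]+ | [a-zA-Z]+ ;
WS   : [ \t\r\n]+ -> skip ;
```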

And the parse tree for 4x^2 - 3x + 2.
Success!
You can see that there is a reason most languages–even computer algebra systems–require an explicit multiplication symbol. Requiring a * is a small price to pay for a simpler language grammar.
This is a great example of how tweaking a very simple problem can sometimes produce a very challenging problem. As a math teacher I really like this example because there is so much opportunity to teach. We review the concept of order of operations and how order of operations works in mathematics, and we learn about a strategy for dealing with order of operations in formal grammars. All of this from a very applied, very real-world problem that the students can value.
But the punchline shouldn’t be that writing a formal grammar for a math expression parser is hard. Rather, writing grammars in general is hard, language itself is hard. This challenge came up in the context of math expressions, but it could just as easily come up in the context of parsing any language. John Levine [3] calls writing a grammar with error recovery a “black art,” Cooper and Torczon [4] call compiler construction an “art and science,” Michael L. Scott [5] refers to the “art” of language design and cites Donald Knuth [6] as suggesting programming can be regarded as the “art of telling another human being what one wants the computer to do.” I like to think that what these heavyweights are telling us is that we should embrace the difficulty as something beautiful.
[1]: This pattern is described by Laurence Finston at http://lists.gnu.org/archive/html/help-bison/2005-08/msg00004.html.
[2]: This implementation of the general pattern is essentially Stack Overflow user rici’s, http://stackoverflow.com/questions/12875573/how-can-i-have-an-implicit-multiplication-rule-with-bison.
[3]: John Levine, flex and bison, O’Reilly, 2009.
[4]: Keith D. Cooper and Linda Torczon, Engineering a Compiler, second edition, Morgan Kaufmann, 2012.
[5]: Michael L. Scott, Programming Language Pragmatics, third edition, Morgan Kaufmann, 2009.
[6]: Donald Knuth, “Literate programming,” The Computer Journal, 27(2):97–111, May 1984.
Mathematica is the flagship product of Wolfram Research. It’s a very sophisticated computer algebra system with the best notebook interface on the market if you ask me. It provides the computational power to WolframAlpha and is available on “thousands of colleges and universities in over 50 countries” according to Wolfram Research’s ad copy. Unfortunately Mathematica is not open source and comes with a hefty price tag, but your institution might have a site license that allows you to install it on your personal computer.
Python has gained a lot of ground in the scientific computation space. With packages like SciPy and SymPy and the comprehensive computer algebra system Sage, Python has access to very sophisticated computational abilities. You know what would be really great, though? If we could interact with a Mathematica kernel directly from our Python code.
It turns out we can. Mathematica ships with something called MathLink that allows developers of C and C++ (and some other languages) to communicate with a Mathematica kernel programmatically. If you poke around in the Mathematica installation directory you will discover Python bindings for MathLink and a couple of example programs.
Unfortunately the MathLink Python bindings are undocumented, unsupported, and very outdated. Here I show you how I got it working on my system running Mac OS X Yosemite, Mathematica 10.0, and Python 2.7. It’s not too hard. I’ll assume you have the usual command line developer tools installed.
First we locate the necessary MathLink library, namely libMLi3.a. On my system it is found here:


I’m using the “AlternativeLibraries” version as per the README file located in that directory. We’ll also need the mathlink.h header file here:


Now we find the MathLink Python bindings and example code:


Since we will be editing these files, go ahead and copy them to a folder in which to work. That way, if we mess something up we have a backup of the original files.
One of those files, the setup.py file, uses Python’s distutils facility to compile and install the Python MathLink bindings extension to our Python environment’s site-packages directory. Edit the file to reflect your Mathematica version (this might not be strictly necessary):


Now find your platform in the if/elif block (in my case “darwin”) and edit include_dirs and library_dirs to be the location of the mathlink.h header file and the library file libMLi3.a respectively. Here’s what that piece of code now looks like for my system:
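For illustration only, the edited branch might look something like this; the directory path is a placeholder, since the exact location depends on your Mathematica installation:

```python
# Hypothetical path -- substitute wherever your Mathematica copy keeps the
# MathLink developer kit (the AlternativeLibraries flavor, per the README).
mldir = ("/Applications/Mathematica.app/SystemFiles/Links/MathLink"
         "/DeveloperKit/CompilerAdditions/AlternativeLibraries")

include_dirs = [mldir]  # location of mathlink.h
library_dirs = [mldir]  # location of libMLi3.a
```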

The Python extension is mathlink.c. We need to make a minor adjustment to mathlink.c by defining MLINTERFACE to be 3 before we include the mathlink.h header file:
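The change, as described, amounts to two lines near the top of mathlink.c:

```c
/* Force the version-3 MathLink interface before pulling in the header. */
#define MLINTERFACE 3
#include "mathlink.h"
```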

That’s it for the Python bindings! Let’s compile and install:
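The usual distutils two-step, assuming you run it from the directory containing the edited setup.py:

```shell
python setup.py build
sudo python setup.py install
```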

You should now be able to run the example script with the following command:
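The invocation is presumably along these lines (the script name comes from the bindings directory; adjust to your copy):

```shell
python textfrontend.py
```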


If you look inside textfrontend.py you’ll find a strange attempt by the author to add the Mathematica bin directory to the path. Since the math command is already in my path, this line is unnecessary. The script sets up the mathlink connection using the linkname argument. Consider these things as opportunities for improvement as you monkey with this example code.
Now go write some good Python code!
This is a very special fresco painting by Italian Renaissance artist Raphael in the Vatican Museum called The School of Athens. It depicts the great philosophers of ancient Greece, famously with Plato pointing up and Aristotle pointing down.
Just to the right of me you can see Euclid doing some geometry on a tablet. Euclid’s contribution to mathematics is of course well known. The significance of his surviving masterpiece The Elements cannot be overstated, though the results and proofs collected in The Elements are not due entirely (or even mostly) to Euclid himself. One of the major results in The Elements is the construction of the five so-called Platonic solids and the proof that there are only five such solids. In fact, some commentators suggest that the establishment of this fact is the primary motivation of The Elements.
A platonic solid is a solid having faces all of which are the same regular polygon and with the same number of faces meeting at each vertex. If $p$ is the number of edges of each face and $q$ is the number of faces at each vertex, then each solid is uniquely identified by the ordered pair $\{p, q\}$ (called the Schläfli symbol). Here they all are:
| Polyhedron | Vertices | Edges | Faces | Schläfli symbol | Vertex config. |
|------------|----------|-------|-------|-----------------|----------------|
| tetrahedron | 4 | 6 | 4 | {3, 3} | 3.3.3 |
| hexahedron (cube) | 8 | 12 | 6 | {4, 3} | 4.4.4 |
| octahedron | 6 | 12 | 8 | {3, 4} | 3.3.3.3 |
| dodecahedron | 20 | 30 | 12 | {5, 3} | 5.5.5 |
| icosahedron | 12 | 30 | 20 | {3, 5} | 3.3.3.3.3 |
Table Source: Platonic Solid on Wikipedia.
Each platonic solid has a high degree of symmetry. By rotating a given solid along various axes, one can obtain exactly the same solid in the same orientation. These rotation operations under which the solid is invariant form an algebraic group called the rotation group (or rotational symmetry group) of the solid.
The dual of a regular polyhedron having $p$ vertices and $q$ faces is the solid having $q$ vertices and $p$ faces: one simply interchanges the roles of the vertices and faces to obtain the dual. Notice that the dual of each platonic solid is another platonic solid. This is such a satisfying fact to me.
Of course there are many other interesting facts about the platonic solids. For example, the Golden Ratio $\phi:=\frac{1+\sqrt{5}}{2}$ is related to the platonic solids. You will have to check out Wikipedia for the details.
That the ancients proved there are only five platonic solids does not diminish how mathematically interesting this fact is to me. A natural question to ask is, how many platonic solids (that is, platonic polytopes) are there in four dimensions? It turns out there are six. (One of them is trivial to find: Just add another point to the tetrahedron to form a 4-simplex.) But what is really interesting is that in every dimension greater than four there are only three platonic solids. So what is special about dimensions three and four?
As I walked toward the exit of the room with The School of Athens fresco I found a painting of two platonic skeletons in a closet. It is fitting to find these geometric objects so close by the ancient Greeks who studied their mathematical properties so extensively.
The book arrived in my office mailbox in no time, and soon thereafter an invoice for $54.95 addressed to me personally arrived in my email with the comment, “60 DAY APPROVAL COPY, FREE WITH ADOPTION ONLY.” I forwarded the invoice to the secretary with the news that I had decided to adopt the textbook, and she communicated the news to the Springer rep listed on the invoice. All standard stuff.
Then a couple of weeks later I got another copy of the same textbook in my mailbox. Oh, how funny! They accidentally sent me two. I gave it to the secretary. She rolled her eyes and said she’d take care of it. She shipped it back to Springer. Emails flew. The secretary forwarded one to me to inform me that it had been dealt with. The Springer rep was suitably embarrassed. He had written, “Can you believe we sent them a second book and then charged them for the first!” Someone at Springer would apparently take care of it for us.
Then I get another invoice addressed to me personally. It must have gone out before the people at Springer took care of it. Right?
Nope. Today I got a letter from a collections agency. Dr. Robert Jacobson owes $61. Springer literally sent me to a collections agency because I wanted to adopt their textbook for my course.
If Professor Jacobson has a doctorate, it is appropriate to call them “Dr. Jacobson.” Using “Professor Jacobson” may also be appropriate if you know this to be the position in which your teacher is employed, though some teachers are not employed under the title “professor.” (At Roger Williams University, “professor” is usually the safest term if you don’t know if your teacher is a doctor.) Your relationship to your teacher is a professional one, and you are obliged to abide by the courtesies that professional relationship demands. It’s true that some professors don’t care much what you call them. However, many professors earned their titles through significant personal hardship, and many had to overcome and continue to face gender or other discrimination in their profession.
If your professor has told you to refer to them differently than how I’ve described above, always respect their wishes. “Call me Robert,” means that you should call them Robert.
This may seem trivial, but it’s not. If your email is riddled with errors, you are communicating to your professor that you really don’t care and that they are not important enough for you to spend the minimal effort required to write a grammatically correct email.
There is nothing more annoying than reading an email from a student explaining how they had no choice but to skip the quiz because they had a project due and then lacrosse tryouts and…. Unless you are giving birth or involved in a car accident or some other emergency, you have a choice. Take responsibility for that choice. Just because you didn’t know your parents were going to come early to pick you up for the weekend doesn’t mean your professor has to cut you a break. When you make an argument that your hands were tied, you come across as manipulative and self-entitled. This is a great way to turn off a professor who might otherwise be willing to cut you some slack.
Telling your professor that you are going to skip their class or explaining that you can’t study for the exam because you have volleyball practice communicates to your professor that your responsibilities to the class are not a priority to you. Asking a professor to go above and beyond for you when you do not meet your minimum obligations in the class is not likely to work out well for you.
A student once skipped my exam and then sent me an email saying nothing but, “When can I make up the exam?” Uh… you can’t.
If you are asking your professor for something, recognize that they are not obligated to do it. Don’t take them for granted.
Professors and teachers, did I leave anything out? Let me know in the comments.
Here are a few of my favorite topics we touched on during the hangout.
Amy described her interest in creating networks to discover hidden relationships: networks of people and networks of ideas. She used Quid and other technologies to create a network from “30 Moleskine notebooks, 3.5G of 756 voice memos, 6,000 Tweets and gigs of auto-tune on tpayne (don’t judge).” You can read more about Amy’s ambitious project on her blog, including a talk she gave for Quantified Self at Stanford.
If you want to do some social network number crunching yourself, Wolfram Alpha can analyze your Facebook and compute a sophisticated report of your interactions.
Jason showed us a wild video demonstrating the behavior of a very low Reynolds number fluid. We can describe the behavior of fluids with the famous Navier-Stokes equations from fluid dynamics. Since fluids are so common (think of all the reasons you’d want to describe air or water), the Navier-Stokes equations are extraordinarily important.
Despite its importance, mathematicians still haven’t even figured out if solutions exist. In January, Mukhtarbay Otelbaev, Director of the Eurasian Mathematical Institute of the Eurasian National University, claimed to have a solution to the Navier-Stokes existence and smoothness problem. Mathematicians are still checking his work. There seems to be some reason to be skeptical that the proof is correct.
But if Otelbaev really has solved the Navier-Stokes existence and smoothness problem, then the Clay Mathematics Institute will award him a $1 million prize for solving one of seven Millennium Problems. These famous problems are extraordinarily difficult.
We talked about a beautiful circle of ideas related to Fourier series. Fourier series are a way of decomposing a signal (think the graph of a function) into sines and cosines.
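For reference, the decomposition has the familiar form (one common normalization, for a $2\pi$-periodic signal $f$):

```latex
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right),
\qquad
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx,
\quad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx.
```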
This way of representing a signal is incredibly useful. If we truncate the infinite sum, we get an approximation to $f(x)$. We can use this to compress our signal. This is how the JPEG digital image format compresses images, for example.
More mathematical readers will notice that we are actually talking about a constellation of ideas involving Fourier series, the discrete Fourier transform, and the fast Fourier transform.
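To make the truncation idea concrete, here is a small NumPy sketch; the signal and the number of retained coefficients are arbitrary choices of mine, and JPEG proper uses the closely related discrete cosine transform on 8×8 pixel blocks rather than a plain FFT:

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy test signal: two sine components plus a little noise.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
signal = np.sin(x) + 0.5 * np.sin(3 * x) + 0.1 * rng.standard_normal(256)

# Transform, then zero out all but the 8 largest-magnitude coefficients.
coeffs = np.fft.fft(signal)
keep = 8
small = np.argsort(np.abs(coeffs))[:-keep]
coeffs[small] = 0.0

# Reconstruct: 8 of 256 stored numbers still capture the signal's shape.
approx = np.fft.ifft(coeffs).real
error = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(f"relative error keeping {keep}/256 coefficients: {error:.2f}")
```

Most of what the truncation throws away here is the noise, which is exactly why this kind of compression works so well on natural signals.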
Newer digital image compression technologies (such as the JPEG 2000 standard) use wavelets instead of Fourier series. While sines and cosines are periodic functions, wavelets are localized “bumps.” Here’s an example:
We can reproduce signals by adding up shifts and scales of this bump.
If you’d like to learn more about using Fourier series and wavelets to compress digital images, check out this AMS web essay by David Austin titled, “Image Compression: Seeing What’s Not There.”
But that isn’t stopping Mark Hulbert, writer for MarketWatch and a host of other Wall Street rags, from using it to bring in pageviews that boosted his article about it to fourth most popular on the MarketWatch website.[1] And he’s not the only one.
Here’s the general idea: if you plot the Dow Jones Industrial Average (henceforth referred to as DJIA) over the last 18 months, and you do the same for the period leading up to the 1929 stock market crash, and you line them up in a certain way, then it looks like the plots sort of line up. See?
This graph, it is alleged, should make us afraid that the red curve representing the recent DJIA will follow the crash (visible on the right in this graph) of the blue curve representing the DJIA leading up to the infamous 1929 economic catastrophe. If you’re a typical person, a non-data-head whose college science lab is a distant memory, then this graph might look pretty convincing. But is there anything in reality that suggests there is cause for concern?
No. Nothing. Like, not even a little tiny thing. Let’s see why.
No. Here is the real plot of the real data for both periods:
This is a little like comparing apples and oranges, so let’s “normalize” these curves by giving them the same minimum. That is, we divide all the DJIA values by the minimum DJIA value over the entire interval so that the graph is now a plot of the percentage of the minimum value of the DJIA over the given period. (We could have normalized by dividing by the first value, or the max value, but you get similar results.)
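In code, the normalization step might look like this sketch (`normalize` is my name for it):

```python
import numpy as np

def normalize(series):
    """Rescale a price series to a percentage of its minimum over the
    window, so two windows at very different price levels can be compared."""
    series = np.asarray(series, dtype=float)
    return 100.0 * series / series.min()
```

For example, `normalize([50, 75, 100])` gives `[100.0, 150.0, 200.0]`.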
That’s still dramatically different from the chart given by the doomsaying market commentators. So how did they get their chart?
By monkeying with the scale of the y-axis, that’s how. I know of no mathematical reason to do so. In fact, the only reason I can think of that someone would want to do this is to make the plots line up. We must literally stretch out the red plot–and only the red plot–in the vertical direction to obtain the chart from the MarketWatch articles. This is “Manipulating Plots 101,” and every freshman lab science student is taught not to do this.
Sort of, but not especially well. There are a few ways of measuring how well the manipulated plots match up. One method is to add up the absolute values of the differences of the DJIA values. This is called the $\ell^1$ distance in mathematics and gives the area between the two curves. The smaller the area between the two curves, the closer the curves match up. (A more natural measurement for mathematicians is called the $\ell^2$ distance, which adds up the squares of the differences between each value and then takes a square root at the end. It turns out that nothing of substance that follows would change if we used the $\ell^2$ distance.)
Let’s compare the recent period of DJIA activity highlighted by Hulbert to every other time period of the same length and see if there are periods that more closely match current activity than does the pre-1929 crash period. (Keep in mind that we are comparing plots that have gone through the same nonsensical scale manipulation that was applied to obtain the original Hulbert chart.)
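A sketch of that search, using the same min-normalization as the charts (the function names are mine; real use would feed in actual DJIA closes):

```python
import numpy as np

def l1_after_min_normalization(a, b):
    """Area-style l1 distance between two equal-length windows, each first
    rescaled by its own minimum (mirroring the charts' normalization)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.abs(a / a.min() - b / b.min()).sum()

def best_matches(series, window, k=3):
    """Slide a window-length frame over `series` and return the k closest
    frames to `window` as (distance, start_index) pairs."""
    n = len(window)
    scores = [(l1_after_min_normalization(series[i:i + n], window), i)
              for i in range(len(series) - n + 1)]
    return sorted(scores)[:k]
```

On a toy series like `[1, 2, 3, 10, 1, 2, 3, 5]`, the window `[2, 4, 6]` matches perfectly (distance 0) at positions 0 and 4, since normalization erases the difference in overall level.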
We expect that most time periods don’t match up well. However, it does look like the pre-crash curve is a pretty good match. Let’s zoom in to see what’s going on.
As you can see, it turns out there are many time periods that match recent DJIA activity better than the pre-1929 crash. Why aren’t we using those time periods to predict the near-future activity of the DJIA? Could it be because they are exceptionally boring?
No. Not even a little bit. If we look for time periods that match up well to the pre-1929 period, we find many that match much better than recent activity.
Again, we zoom in to see the detail, putting a horizontal line at the level representing how well the last 18 months matches the pre-1929 period.
There are 17 periods that match better–sometimes much better–than recent history. Here I have made a chart for all 17 where I have graphed each a little beyond the length of the time period to see if our test predicts anything about the future.
As you can see, the evidence in no way supports the notion that DJIA activity that closely tracks the pre-1929 DJIA activity over this time period is predictive of market decline.
Because market analysts make a living reading tea leaves. Any “analyst” commenting on this chart needs to answer these questions:
In short, how could people whose profession hinges on their ability to analyze data be so ignorant of the most elementary facts of data analysis?
[1] MarketWatch has several articles on the subject. See: Mark Hulbert, “Scary 1929 market chart gains traction,” MarketWatch, February 11, 2014. Mark Hulbert, “The chart that’s scaring Wall Street,” MarketWatch, December 6, 2013. Anthony Mirhaydari, “Ghost of 1929 crash reappears,” MarketWatch, December 6, 2013.
[2] You can download the historical DJIA data yourself here: http://research.stlouisfed.org/fred2/series/DJIA/downloaddata?cid=32255.
While the target audience of this article is my fantastic calculus students, other math teachers might enjoy it as well.
When students in first semester calculus first start learning about limits, they are often asked to determine limits using the graph of a function, which we will call the graphical method, and also by constructing a table of values of the function, which we will call the numerical method. Students should be warned that these methods, while perfectly legitimate and often quite useful, are really just fancy ways of guessing the value of the limit, that is, the graphical and numerical methods do not supply us with mathematical certainty regarding the value of the limit. After all, what if your function is very sneaky and merely looks like it’s approaching a value $L$ as $x$ approaches $c$ when in fact it ultimately approaches a different value $K$?
Students quickly learn that a function $f(x)$ is continuous at a point $x=c$ if $\lim_{x\to c}f(x)=f(c)$. So limits of continuous functions are very easy to compute: just plug $c$ into the function.
Here we give a very simple construction for sneaky continuous functions, that is, continuous functions that look like they approach some value $L$ as $x$ approaches some $c$ but that really approach a different value $K$.
We start by constructing a continuous function that looks very much like the constant zero function but that actually has a “spike” near $x=0$. First, consider this “tent” function $t(x)$ which we plot below:
This function isn’t exactly sneaky. We could easily make a table of $x$ values approaching $x=0$ and see that this function clearly approaches $1$. But how about the function $s(x)=t(10^9x)$? The function $s(x)$ is certainly continuous, and even for values of $x$ very close to $x=0$, the value of $s(x)$ is still zero. In fact, we would have to choose $x$ within $10^{-9}$ of $0$ before we’d suspect that $s(x)$ is not the constant zero function! Our function $s(x)$ is very sneaky indeed. What happens if we plot $s(x)$? Can the computer even tell the difference between $s(x)$ and the constant zero function?
Nope! This is the number one lesson of doing math with computers: COMPUTERS WILL LIE TO YOU.
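A numerical sketch of what the plotter sees, assuming the tent $t(x)=\max(0,1-|x|)$ (one natural choice of tent):

```python
import numpy as np

def t(x):
    """A 'tent' of height 1 supported on [-1, 1]."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def s(x):
    """The tent squeezed into a spike of half-width 1e-9 at x = 0."""
    return t(1e9 * x)

# Sample s on a typical 1000-point plotting grid over [-1, 1].  No sample
# lands within 1e-9 of 0, so the plot is indistinguishable from y = 0,
# even though s(0) = 1.
xs = np.linspace(-1.0, 1.0, 1000)
print(np.max(s(xs)), s(0.0))  # prints: 0.0 1.0
```

Every plotted sample misses the spike entirely, which is exactly why the computer’s picture lies.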
We can use $s(x)$ to construct other sneaky functions. Here’s an example of a function that looks like a nice parabola passing through the point $(1, 2)$:
Since $x^2+1$ and the shifted spike $s(x-1)$ are both continuous, their sum is continuous also, so $f(x)$ is continuous. Therefore, $\lim_{x\to 1}f(x)=f(1)=3$. From the graph, it looks like $f(x)$ approaches $2$, which definitely is not equal to $3$. Sneaky!
You can see that we can use this same method to make all kinds of sneaky functions by adding a multiple of $s(x)$ to the function of our choice. Try it yourself in the Sage cell below.
The evidence so far suggests that MOOCs are great for a very specific kind of student looking for a very specific kind of product, but that MOOCs are really awful for some of the very groups that they are marketed to serve, namely students with less privilege (low-income, working class), and students on the lower end of the academic success scale (little education, poor performers).
Meisenhelder recites a response to criticism of MOOCs which struck me: “In what passes for the public discussion of MOOCs in higher education, faculty have been carefully cast by many tech boosters as backward-looking, slow-moving, self-promoting Luddites cloistered in our Ivory Towers.” The thing is, I think this criticism of faculty has a lot of merit. I see a lot of my professional peers in that description–way too many, in fact.
But I don’t think that’s where the criticism of MOOCs is coming from. I think it’s coming from those faculty who have their finger on the pulse of trends in higher ed, because those are the faculty who are aware enough to care. Experimentation is a good thing. Skepticism toward the current hot trend in education is also a good thing.
There is an ethical component to this debate as well. In higher education, the students who struggle the most academically are often given the worst academic tools, the leftovers, and are expected to somehow “catch up” to everyone else, that is, to make more improvement in a shorter amount of time than the successful student requires. Full-time faculty tend to avoid involvement in remedial programs and even look down on those faculty who do get involved. (This is doubly so in mathematics. Embarrassingly, remedial education is a pariah among professional mathematicians.) If we prescribe MOOCs for what ails these students, then we need to have hard evidence that MOOCs work. Right now, the evidence suggests that this particular cure is worse than the disease for these students. It would be wrong to take from them the scraps that they do have and give them even less. The ethical situation is similar for poor students.
That’s not to say that MOOCs are bad. MOOCs may be great for some kinds of students. Free access to high quality educational material is a great thing. Engaging student populations that would otherwise be disengaged is a great thing. Institutions of higher education thinking outside the box and engaging with the public in new ways is a great thing. Let’s embrace MOOCs for what they are rather than what we had hoped they might be.
It’s easy to get burnt out. There are always far more talks you are interested in than you could ever attend, there are more poster presenters than you could ever talk to, more workshops and demos and minicourses than you could take advantage of. Even the vendor booths, to visit with them all, would take a full day or two, at least if you do it like I do. And heaven help you if you are interviewing at the employment center as well.
But I’m returning home with a head full of new ideas and pumped up with enthusiasm for the new semester and new research ideas, with new perspective on my career and profession.
Some of this inspiration came from the two minicourses I attended. I had the pleasure of participating in Aparna Higgins’ excellent minicourse on mentoring undergraduate research. As a new professor, I find it easy to have naive and unrealistic ideas about what undergraduate research should be and can be. Aparna Higgins gave me some great resources that will take me quite some time to digest.
I also participated in Karl Schaffer’s minicourse on mathematics and dance in which we explored ideas related to number theory, combinatorics, and graphs and other discrete structures. I can’t wait to share what I learned with my dance professor friend. I regret that I could not attend both sessions of this minicourse. I just ran out of steam.
But there is always the 2015 Joint Mathematics Meetings. I already have a wish list for next year. I want Stephen Wolfram to come back again so I can ask him why the Mathematica kernel hasn’t made it to the iPad yet. I want a special session devoted to mathematics on the web and a special session on several complex variables, my research area. I want vendor booths unrelated to mathematics or devoted to crank mathematics to be disinvited from the exhibition hall. I want high availability of places to charge laptops and mobile devices, and ubiquitous wifi would be nice. I want to fill out my collection of photos with famous math people. And finally, I want to find a student to do a research project with me so I can bring them along next year to experience the Joint Mathematics Meetings.
I also love technology, especially mathematical technology, and the exhibit hall is full of mathematical technology. Wolfram Research always has the best swag. This year I scored a sweet deck of Mathematica cards from the Wolfram Research guys. Maplesoft was raffling various prizes, including an Apple laptop loaded with Maple 17. (They also offered a Maple workshop at the meetings which I signed up for, but it turns out it conflicted with a minicourse I participated in.) But my favorite technology exhibits are those of free, open source software projects like MathJax, Sage, and WeBWorK. There you can talk to the software developers themselves, ask them about design decisions and future directions for the software, discuss your potential uses and ask about best practices, complain about bugs, and learn tips and tricks you would never have otherwise discovered. Today I got a chance to talk to William Stein about the internal architecture of the notebook interface in his amazing new project SageMath Cloud, I told Jason Grout about an issue I had with the Sage Cell Server and asked him advice about mentoring student projects, and I got a great perspective from Peter Krautzberger about the widespread misunderstanding among professional mathematicians about modern browser technologies important for communicating math on the web.
I did manage to get to some talks. I gave a short talk in the Special Session on Geometric and Complex Analysis on results from a recent paper of mine, and I caught several of the other talks in this session.
The student poster session was in the late afternoon. It’s always a blast to talk to the students about their projects. It’s also a great way to get a feel for the kinds of projects undergraduates can work on if you are interested in mentoring student projects. The most impressive poster I saw was a project of a freshman Harvard student who investigated congruences in the coefficients of the Taylor series expansions of modular forms. This freshman student didn’t just regurgitate words she had recently learned, she seriously knew her stuff–and her stuff is extraordinarily sophisticated. Unfortunately I didn’t catch her name.
The AWM poster session was also a lot of fun, in part because several of my old friends from Texas A&M University, where I did my PhD, presented their research. (And who else should be there presenting a very impressive poster but Aly Deines, the number theorist from yesterday!)
In my last post I wondered which famous mathematician I would run into. The answer? Ken Ono, master of the partition function.
We talked about Google+ and mathematics and academia and blogging. I loved my conversation with the graduate/undergraduate students in the group about mathematics and language. It reminded me a lot of the Mathematics Community’s Hangout On Air from last November.
Sometimes the best interactions are completely spontaneous. While waiting to talk to William Stein about Sage, I had a great conversation with his student Aly Deines comparing and contrasting complex analysis and number theory. And David Lippman, developer for IMathAS, happened to be talking with Peter Krautzberger when I was asking about technologies for communicating mathematics on the web. David had some very helpful things to share with me.
It’s fun to meet “famous” math people, too. In previous years, I got to meet Marvin Minsky, Stephen Wolfram, Steven Krantz, Gerald Folland, and some other well known figures in the mathematics community. I wonder who I’ll meet this year.
Because I will be teaching first semester real analysis this semester, I spent a lot of time in the MAA Session on Topics and Techniques for Teaching Real Analysis. I loved Judit Kardos’s talk “Constructing continuous functions,” in which she describes how she teaches her analysis students to construct continuous functions with pathological properties. Can you construct a continuous function on $[0,1]$ for which every point in its range has an infinite preimage? Or a monotonic nowhere differentiable continuous function with finite length? Kardos’s technique is to use the fact that a function on $[0,1]$ is continuous if and only if its graph is compact. She then intersects a sequence of compact sets with fractal-like properties to obtain graphs of continuous functions with bizarre attributes.
My favorite talk from that session was Cesar E. Silva’s talk “One Proof Is Not Enough,” in which he reminded us that it can be beneficial for undergraduate real analysis students to see several different proofs of the same result. Silva presented several charming proofs that the real numbers are uncountable (which he collects in his paper on the arXiv), some of which are not well known. We all know Cantor’s diagonal argument, but it was not the first proof Cantor published of the uncountability of the reals. The first proof he published goes as follows. Let $(x_k)$ be a sequence of real numbers in $[0,1]$. Let $k_1$ be the smallest $k$ such that $x_k$ is in $(0,1)$. Now let $k_2$ be the smallest $k$ with $k > k_1$ such that $x_k$ is in $(0,1)$. Set $a_1 = \min(x_{k_1}, x_{k_2})$ and $b_1 = \max(x_{k_1}, x_{k_2})$. Now choose $x_{k_3}$ and $x_{k_4}$ to be the next elements in the sequence that are in $(a_1, b_1)$. Iterate this process, generating nested intervals $[a_1, b_1] \supset [a_2, b_2] \supset \cdots$. The intersection of nested compact intervals is nonempty, but contains no points of the sequence $(x_k)$. Thus $(x_k)$ cannot contain every point in $[0,1]$, proving that $[0,1]$ is uncountable.
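Cantor’s first argument is constructive enough to run as code. Here is a minimal Python sketch (the sample sequence `seq` and the helper name `cantor_intervals` are my own illustrations, not from the talk or the arXiv paper) that carries out a few steps of the nested-interval construction on a finite prefix of a sequence:

```python
def cantor_intervals(seq, steps):
    """Run a few steps of Cantor's 1874 nested-interval construction.

    Given a sequence of reals in [0, 1], repeatedly pick the next two
    terms that fall strictly inside the current open interval and use
    their min and max as the endpoints of the next (smaller) interval.
    Returns the list of nested intervals produced.
    """
    a, b = 0.0, 1.0
    intervals = []
    i = 0  # scan position; indices only ever increase, as in the proof
    for _ in range(steps):
        picks = []
        while i < len(seq) and len(picks) < 2:
            if a < seq[i] < b:
                picks.append(seq[i])
            i += 1
        if len(picks) < 2:
            break  # finite prefix exhausted; every later step would shrink further
        a, b = min(picks), max(picks)
        intervals.append((a, b))
    return intervals


# Illustrative finite prefix (pretend it begins an enumeration of [0, 1]).
seq = [0.5, 0.25, 0.75, 0.1, 0.6, 0.3, 0.9, 0.4, 0.55, 0.45]
intervals = cantor_intervals(seq, steps=5)
```

In the infinite version, the closed intervals are nested and compact, so their intersection is nonempty, yet every term of the sequence is eventually excluded; any point of the intersection is a real number the sequence misses.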
Cantor’s second proof also uses the fact that the intersection of nested compact sets is nonempty, and it is just as elementary. Call a set perfect if the set is equal to its set of accumulation points. So in particular, the interval $[0,1]$ is perfect. Now suppose (seeking a contradiction) that $P = \{p_1, p_2, p_3, \ldots\}$ is both a countable set and perfect. Let $I_1$ be an open interval containing $p_1$. Since $P$ is perfect, $I_1$ contains infinitely many points of $P$. Choose $I_2$ to be an open interval about one of the infinitely many points of $P$ in $I_1$ such that $\overline{I_2} \subset I_1$ and $p_1 \notin \overline{I_2}$. Iterate this process, choosing $I_{n+1}$ such that $\overline{I_{n+1}} \subset I_n$ and $p_n \notin \overline{I_{n+1}}$. Now consider the intersection of compact sets $\bigcap_{n=1}^{\infty} \left( \overline{I_n} \cap P \right)$. This intersection is nonempty, contains accumulation points of $P$, but does not contain any of the $p_n$. This is a contradiction. Thus perfect sets cannot be countable. In particular, $[0,1]$ is not countable.
Silva gives more proofs in his arXiv paper, including a game theoretic proof due to Matthew Baker, which are worth reading.
It’s already late, and I haven’t worked out my schedule for tomorrow. That’s the thing about the Joint Meetings: you never have enough time to do it all!
I’ll be blogging the conference as the week progresses.
If you are a math blogger or Google Plusser who will be at the Joint Meetings, join us for an informal meetup on Friday. I will update this post when we figure out a date. Hit me up on Google+ if you have suggestions.
UPDATE We are meeting by the message board at 11:30am on Thursday. Be there or be $[0,1]^2$!
My colleague was barely keeping herself together. “There was nothing you could have done. You did everything you possibly could have,” I told her. I told her about one of my students from the previous year. “She was raped last year. She went from being my best student to my worst student overnight.” We stood there together in my office doorway, both on the verge of tears. “We can do the best we can, but we cannot keep them from harm. There is nothing we can do to make everyone safe.”
It was just last week that another colleague vented to me about one of her classes. A student in that class had crashed his car the previous weekend. He lingered in the hospital for a few days with horrific injuries before passing away. A group of his friends were also in the same class. She described how difficult it was to face those students, how they were supposed to be giving in-class presentations, how one of the students ran out of class in the middle of his presentation sobbing, how she chased him down and sat with him on the men’s room floor. “I told them,” she said, “not to bother coming to the last week of classes. I just couldn’t do it anymore.”
There is this professional distance we professors often try to maintain. It’s a delicate balance sometimes. I want to be friendly with my students, but not their buddy. I want to take an interest in their lives, but I’m not going to their parties. We talk with each other about whether or not we should accept friend requests from students on Facebook. My colleague, the one with the car crash victim, told me about dealing with the friends of the dead student. “And it really sucks,” she said, “because you’re not supposed to hug them.” “I think you can,” I responded. “I think in this case, you can hug them.”
“Yeah,” she said, “I did.”