I know I’ve been too busy to post anything here for a while now, but I came across something recently that I found so cool that I had to share it. Usual disclaimer: This is a very handwavey explanation.

Most people think of black holes as “a thing that nothing can escape from, not even light”. And this is correct, modulo some quantum stuff. You also may know that general relativity tells us that the gravitational force is actually a warping of spacetime itself, and that at the centre of a black hole the spacetime is so curved that it becomes singular.

Obviously you’ve spotted the contradiction there too, right?

A curvature singularity is thought of like the hole in the ground in the above image from Homer’s trip into 3D land; the curvature becomes infinite there. It is generally assumed that inside a black hole there is always one of these singularities, and indeed the standard picture of gravitational collapse tells us that this should always be the case.

However, some recent work by Prof. Dr. Frans Klinkhamer from the Institute for Theoretical Physics at Karlsruhe Institute of Technology has come up with some unusual solutions to Einstein’s equations corresponding to black holes without singularities. Now this in itself is not entirely new, because people have found similar solutions before by inserting some weird “exotic matter” into their models. Klinkhamer’s solutions, however, manage it without introducing any weird matter. In fact, these solutions have NO matter in them; they’re entirely empty universes.

There is however something peculiar about his universes, which allows for these black holes. The difference is in what mathematicians call the topology of the universe. Inside the event horizon, the universe twists around itself; not entirely unlike a pretzel.

As I mentioned above, this kind of black hole is a perfectly valid solution to Einstein’s equations but it doesn’t agree with the standard picture of gravitational collapse. There is no proposed mechanism for this weird twisting of the universe to occur naturally from the gravitational collapse of a massive object, and in fact there are theorems saying that when gravitational collapse occurs the end result is actually a singularity. But who knows? Perhaps when we have a working theory of quantum gravity, we will come up with a mechanism to avoid the singularity and end up with one of these solutions instead. Or perhaps this is just a mathematical quirk and nothing more?

The most recent paper on this stuff can be found here (not for the faint of heart): http://arxiv.org/pdf/1304.2305.pdf

I’ve been a terrible blogger lately, and I promise I’ll get back to it soon. In the meantime, though, I’ve been getting regular hits to this page, so I thought I’d leave the most recent post as a list of some of the more popular posts. You’ll be able to tell which ones people got to by accident while looking for something entirely different, but I’ll leave them in.

Feel free to leave some comments too; comments might help encourage me to get back to blogging 🙂

An oft-quoted line from teacher to student is “There is no such thing as a stupid question”, which I think is a good sentiment to alleviate the fear of asking questions. Although I believe a good deal more than that is required to ensure that students are comfortable asking and answering questions in class, this is not what I plan on talking about today.

Today we’re going to talk about a special kind of stupid question. Of course there are stupid questions: Why did you lock your keys in the car? Would you like a free upgrade to first class? What’s your star sign? Should I go to grad school? What’s that smell? And many more… Then there are things like these beauties from the Australian Academy of Science. But if you’re observationally gifted, you may have noticed that this post was filed under education. Today we complain about education again.

Far too often, I’ve seen test questions that I don’t remotely agree with. I don’t disagree with the solutions — I just don’t believe that the questions achieve their purpose. Since I’m not nearly at a point in my career when I can carelessly insult specific questions that I’ve come across, I won’t. But this kind of stupid question is so prevalent that it wouldn’t be fair to isolate any particular culprits anyway. The kind of stupid question that I’m talking about is one requiring unnecessarily complicated calculations, or assuming that the student is familiar with some particular example. And here I’m referring to test questions only — timed tests should be approached differently to assignments, where plenty of unstressed thinking time is available. Let me illustrate what I mean with some fictional examples:

• Testing l’Hôpital’s rule with an example that requires 6 iterations before getting an answer. That is, the student must perform the same technique 6 times over before arriving at an answer.
• Testing primary school geometry by asking what the interior angles of a stop sign are. Not knowing the shape of a stop sign is different to not knowing what an octagon is.
• A separable first order ODE that requires a page of working to evaluate the integral.
• Asking students to reproduce a proof that was included as an optional exercise earlier in the semester. Each student will have done a different random selection of exercises; you’ll just be measuring which students happened to pick the same one that you did.
• Having a needlessly ugly solution to a problem — like $4736\sqrt{17}-\ln(7)\,\pi^{1/4}$. Many students will assume it’s wrong and either start again or abandon the problem entirely.

Credit to this dude, who has a page full of these drawings.

These problems don’t only cause students who understand the concepts to get the questions wrong, they can also make students waste time and become unnecessarily stressed. And just to reiterate the point, I think these are terrible test questions but perfectly good assignment questions. Test questions should test concepts, not who was listening to the example given in lecture 17 or who can perform pages of computations the fastest.
\end{gripe}

Sorry for being such a useless blogger lately. I started off posting as often as a few times a week and now it’s been over a month since my last post! It could be the three distinct talks that I’ve had to give recently, or that my PhD is reaching crunch time, or the fact that I now have a girlfriend; somehow I’ve not found the time to blog, though. As I’m currently visiting a GR program at MSRI and staying in a house in Berkeley Hills, away from distractions, I can type up a post I had sitting in the ideas pile.

Pick a number – any number…

What did you pick? Was it 7? Or pi? Or $1+e^{i\pi}$? I mean, pick ANY number. It doesn’t have to be a whole number, or even a rational number. If you were to choose a random number — a truly random number — out of all of the real numbers, then the probability that it was a whole number, or even a rational number, is zero. Take a look at My Infinity is Bigger Than Yours to figure that one out, because that’s not exactly the problem I’ll be talking about today. What I’m going to convince you of today is that the number will certainly contain a 7 as a digit — 100%.

Let me explain.
The random number can be – and will be – arbitrarily large, so it will have many, many digits. If I had a 100-digit number, then the probability that the first digit is not a 7 is 9/10, and the same goes for the second digit and all the rest. So the probability that none of the digits is a 7 is given by the product of the individual probabilities — $(9/10)^{100}\approx 0.00003$. That’s a 3 in 100,000 chance. And if we start looking at numbers with 10000000000000 digits, then this probability quickly goes to zero.

Alternatively, you could consider how many numbers less than 10 have a 7; then how many less than 100 have a 7, etc. and you would see that this ratio approaches 1 as you start including higher and higher numbers.
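Both arguments are easy to check with a few lines of Python (a quick sketch; the function name is my own):

```python
from fractions import Fraction

# Probability that none of the digits of a 100-digit number is a 7,
# assuming each digit is independently uniform on 0-9.
p_no_seven = Fraction(9, 10) ** 100
print(float(p_no_seven))  # roughly 2.66e-05, i.e. about 3 in 100,000

def fraction_with_seven(n):
    """Fraction of the integers 0 .. 10**n - 1 containing the digit 7."""
    total = 10 ** n
    with_seven = sum('7' in str(k) for k in range(total))
    return Fraction(with_seven, total)

# The second argument: the proportion of numbers below 10**n that
# contain at least one 7 creeps towards 1 as n grows.
for n in (1, 2, 3, 4):
    print(n, float(fraction_with_seven(n)))
```

The brute-force count agrees with the formula $1-(9/10)^n$: by $n=4$ already more than a third of all numbers contain a 7.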

Then again, does it even make sense to “pick a random number”?

It seems I’ve been way too busy lately to actually post anything here, so today we’ve got a guest post from David Hartley.

With Australia making it back into the world group of the Davis Cup after six years and Steve taking the week (month?) off, I thought this would be a great opportunity to write a guest blog. Now I will try to keep this in the same vein as Steve’s blogs but I suspect I won’t be able to stop myself including some equations (the worst ones I will provide links to instead of including in the text).

As a mathematician and sports fan I often find myself checking out various statistics (for example F1 sector and lap differentials) and trying to predict what will happen at the end of the match or race. One of the most interesting sets of statistics I’ve found is tennis statistics, especially since you can win more points than the other player but still lose in straight sets, and because most matches end with only slightly different numbers of points won yet are still over in straight sets. In this blog we will see why this is so.

A while ago (nine and a half years apparently) I came across a maths question in an interesting probability book. It asked: If the probability of you winning a point is $p$, what is the probability of you winning a game of tennis? Having just completed high school and therefore knowing some basic binomial theory, I thought I should tackle this.

Firstly $p$ will be a number between 0 and 1 and represents the likelihood you win a point (For example if you win 55% of points then $p=0.55$). Next the probability of two (or more) events happening is the product of their probabilities, for example the probability of you winning a game of tennis to love (nil) is $p\times p\times p\times p=p^4$ – that is, the probability of winning four straight points.

The next best way to win a game of tennis is by winning a point after being up 40-15 (or 3-1 if tennis had a sensible scoring system). This sort of game can occur four ways, since your opponent can win their point on the first, second, third or fourth points of the game (this is often written as the binomial coefficient $\binom{4}{1}$). Therefore the probability of you winning to 15 (scoring a point from being 40-15 up) is $4p^4(1-p)$, since you lose one point with a probability $1-p$ but win four points each with a probability $p$.

Likewise to win a game to 30 the probability is $10p^4(1-p)^2$, since there are ten different ways they can win 2 out of the 5 points (or ten ways you can win 3 out of 5 points if, like me, you prefer to keep track of the points you win).

So far, so good, but what happens next? You can’t win from 40-40 with a single point; instead you would need to win the next two points. Therefore the probability of winning this way is $20p^5(1-p)^3$, since there are $\binom{6}{3}=20$ ways to reach deuce. Unfortunately/fortunately this isn’t the end of the story. If you only win one of the next two points you are not out of the game but back at deuce, thus we seem to be trapped in a loop. To get out I’m afraid we need to use math(s) (Note to editor: go to hell Steve it’s maths!).

We know that the probability of getting to deuce is $20p^3(1-p)^3$. If we let $d$ be the probability of winning from deuce, then $d$ is given by the sum of the probability of winning both points and the probability of winning one of the next two points and then winning from deuce. Mathematically it is written as:

$d=p^2+2p(1-p)d$.

This can be solved for $d$ to give:

$d=\frac{p^2}{1-2p(1-p)}$.

With that little trick of algebra the problem is solved and the probability of you winning the game is:

$p^4+4p^4(1-p)+10p^4(1-p)^2+\frac{20p^5(1-p)^3}{1-2p(1-p)}$.

To get a feel for the formula, assume you win 55% of points on average against another player (a relatively small difference, considering a tight set might have around 60 points); then the probability you win a game is 62.3%. A steep increase indeed (the slope has a maximum value of 2.5 when $p=0.5$). Figure 1 shows the probability curve.
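The formula is easy to code up and check against that 62.3% figure (a minimal sketch; the function name is mine):

```python
def game_prob(p):
    """Probability of winning a game of tennis, given probability p
    of winning each point (points assumed independent)."""
    q = 1 - p
    to_love = p ** 4                      # win four straight points
    to_15 = 4 * p ** 4 * q                # opponent wins 1 of the first 4 points
    to_30 = 10 * p ** 4 * q ** 2          # opponent wins 2 of the first 5 points
    via_deuce = 20 * p ** 5 * q ** 3 / (1 - 2 * p * q)  # reach deuce, then win
    return to_love + to_15 + to_30 + via_deuce

print(round(game_prob(0.55), 3))  # 0.623 -- a 55% point-winner wins 62.3% of games
```

The same function reproduces the serve/receive figures quoted further down: it gives about 93% for $p=0.73$ and about 8.8% for $p=0.29$.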

The probability curve for winning a game of tennis. Sexy, huh?

The situation gets worse if you consider a set of tennis. For this we need some extra formulas, such as those for the probability of winning a tie-break, a tie-break set and an advantage set. The probability curve for an advantage set (Figure 2) shows that if you have a 55% chance of winning a point, you get an 82% chance of winning the set. Remember, if there are 60 points in the set that means on average the points won would be 33-27.

The probability curve for winning an advantage set. Mmmmm Matlab.

The curves for the probability of winning a match can also be computed, and they show that for a five-set match where the last set goes to advantage, a person who wins 55% of points has over a 95% chance of winning the match.
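David doesn’t spell out the set and match formulas, but here is one way to sketch them, reusing the deuce trick at 5-5 in games, and approximating every set (including in the match formula) as an advantage set — so the numbers are slightly off from a proper tie-break treatment (function names are mine):

```python
from math import comb

def game_prob(p):
    """Probability of winning a game, given point-win probability p."""
    q = 1 - p
    return p**4 + 4*p**4*q + 10*p**4*q**2 + 20*p**5*q**3 / (1 - 2*p*q)

def adv_set_prob(p):
    """Probability of winning an advantage set (first to 6 games, win by
    two, no tie-break), treating games as independent coin flips."""
    g = game_prob(p)
    h = 1 - g
    # Win 6-0 up to 6-4: the k lost games fall somewhere in the first
    # 5 + k games (the last game must be won).
    direct = sum(comb(5 + k, k) * g**6 * h**k for k in range(5))
    # Reach 5-5, then win by two games -- same geometric trick as deuce.
    from_5_5 = comb(10, 5) * g**5 * h**5 * (g**2 / (1 - 2*g*h))
    return direct + from_5_5

def match_prob(p):
    """Probability of winning a best-of-five-set match, approximating
    every set as an advantage set."""
    s = adv_set_prob(p)
    # Win in 3, 4 or 5 sets; the last set must be won.
    return sum(comb(2 + k, k) * s**3 * (1 - s)**k for k in range(3))

print(round(adv_set_prob(0.55), 2))  # 0.82
print(round(match_prob(0.55), 2))    # 0.96
```

Winning 55% of points really does give roughly an 82% chance of taking an advantage set and over 95% for the match.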

The probability curve for winning a five set advantage match. Wowwee, don’t bother showing up unless you win more than a third of the points on average.

Of course this analysis is very simplified. For instance, if anyone remembers Wayne Arthurs then they will know that the probability you win a point on serve (73% for Arthurs in 1999, leading to 91% of service games won; our model gives 93%) can be vastly different to the probability you win a point while receiving (29% in the same year, with 8% of receiving games won; 8.8% by the model). Including both parameters will change the formulas for the probability of winning a tie-break game or either type of set; since mathematically it makes no difference whether you serve first or second in the set, the match formulas don’t change. Using those formulas, the data tells us he had a ~59% chance of winning each match; his actual winning percentage that year was 51% (he only played 27 matches, so the data set is small). However, the moral of the story is that even if your favourite player has a scoreline that looks like they were easily beaten, this may just be a result of compounding probabilities.

You may see this title and immediately think that this is the second part of a two part post; you’d be wrong. This title is an example where intuition can be wrong — or an example of sneaky buggery. Or something? This is to prepare you for a little probability problem. At first, probability sounds like it must be the most intuitive thing ever; if something happens 50% of the time then there’s a good chance that it will happen in around 50 of 100 trials. But it turns out that probability can actually be quite the sneaky bugger. We’ve already seen the Monty Hall problem here, which seemed like quite a simple little probability problem but turned out to be quite counter-intuitive. Today we’ll look at another little probability paradox* that I was recently reminded of.

*Not actually a paradox.

First let’s look at a simple problem.

Mrs Robinson — here’s to her — has two children, at least one of whom is a son. What’s the probability that she has two sons?

How stupidly simple does this problem sound? It’s about 50%, right… We know that one of them is a son, so the other one has a 50-50 chance?
Of course not! Probability is far too sneaky for that.

Since we know that she’s got at least one son, there are three possibilities:

• She has two sons,
• The older sibling is a son while the younger is a daughter,
• Or the older is a daughter while the younger is a son.

We’re assuming here that they weren’t born at exactly the same time, but this is only so that I can easily talk about the different scenarios. You can convince yourself that these outcomes are equally likely by considering the fourth possibility: if we didn’t know about one of her sons, then she could have two daughters. These four outcomes are clearly equally likely before we have any knowledge of a son. By saying that at least one is indeed a son, all we’ve done is rule out the fourth option.

Since there are three equally likely outcomes, there is a one in three chance that she has two sons! Take that, intuition!
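If you don’t trust the counting, you can brute-force the enumeration (a quick sketch):

```python
from fractions import Fraction
from itertools import product

# Every (older, younger) combination is equally likely a priori.
families = list(product('SD', repeat=2))  # 'S' = son, 'D' = daughter

# Condition on "at least one son".
at_least_one_son = [f for f in families if 'S' in f]
two_sons = [f for f in at_least_one_son if f == ('S', 'S')]

print(Fraction(len(two_sons), len(at_least_one_son)))  # 1/3
```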

So now let’s change the problem ever so slightly.

Mrs Robinson has two children, at least one of whom is a son who was born on a Tuesday. What’s the probability that she has two sons?

Oh get stuffed! What does Tuesday have to do with anything? These are probably things that you’re thinking to yourself. The answer is obviously the same as before… right… isn’t it?

Ha! Probability and its swift kick to the brainbits strikes again.

Let’s write down all of the (equally likely) possible cases again:

• The older is a son born on a Tuesday and the younger is a daughter.
• The older is a son born on a Tuesday and the younger is a son born on a Wednesday.
• The older is a son born on a Thursday and the younger is a son born on a Tuesday.
• The older is a son born on Tuesday and the younger is a son born on a Thursday.
• … There are a whole lot of other possibilities in this scenario.

The difference here is that we are being more specific about the boy now. If we had said “This is Johnny, Mrs Robinson’s son; he’s got one sibling.” then it would be obvious that there was a 50-50 chance of his sibling being a brother. The more specific we are about her son, the closer the probability gets to a 50-50. If we add up all of the cases in the Tuesday example, we’d actually find that there is a 13/27 chance of Mrs Robinson having two sons.

Convince yourself! Write up a table of the possibilities and check it out.
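Or, if you’d rather let a computer write up that table, a minimal sketch (all names are mine):

```python
from fractions import Fraction
from itertools import product

DAYS = range(7)  # 0 = Monday, ..., 6 = Sunday
TUESDAY = 1

# Each child is a (sex, birth day) pair; all 14 x 14 = 196
# (older, younger) families are equally likely a priori.
children = list(product('SD', DAYS))
families = list(product(children, repeat=2))

# Condition: at least one child is a son born on a Tuesday.
has_tuesday_son = [f for f in families
                   if any(c == ('S', TUESDAY) for c in f)]
both_sons = [f for f in has_tuesday_son
             if all(sex == 'S' for sex, _ in f)]

print(Fraction(len(both_sons), len(has_tuesday_son)))  # 13/27
```

The conditioning event contains 14 + 14 − 1 = 27 equally likely families, of which 7 + 7 − 1 = 13 have two sons — hence 13/27.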