Wednesday, November 5, 2014

Multivector momentum

I trained as a Mathematician, and I try at times to understand some developments in modern Mathematics. Mostly I do this by following John Baez, who writes so beautifully that you imagine you understand it. I'm only vaguely following his current interest in networks, but I loved this diagram from that work, for an unrelated reason:

                         flow (\dot q)       momentum (p)          effort (\dot p)
Mechanics: translation   velocity            momentum              force
Mechanics: rotation      angular velocity    angular momentum      torque
Electronics              current             flux linkage          voltage
Hydraulics               flow                pressure momentum     pressure
Thermal Physics          entropy flow        temperature momentum  temperature
Chemistry                molar flow          chemical momentum     chemical potential
The first row (after the headings) is the familiar fact that a moving body has momentum which keeps it moving in a straight line at constant speed. To change that requires effort: the application of a force. We are equally familiar in life with the fact that a spinning body (such as a top, or the Earth) has angular momentum which keeps it spinning, and you have to put in effort to stop it.
In the hydraulics row Dr Baez is thinking of liquids in pipes, but I am more interested in unconstrained gases. If you imagine an explosion, it has pressure momentum which will keep it expanding forever. It requires effort to stop the expansion, and that effort is pressure.
I also want another row, which might be the same as Dr Baez's Thermal Physics row. At any rate the momentum in this case is heat. Things keep the same amount of heat unless some effort (heating or cooling) is applied to change it. This might seem different from the others, since the others are about movement, but heat is just internal movement within the matter: the temperature of a gas is proportional to the kinetic energy per molecule.
Now I need to take a detour before I can put these together.


A vector is something with magnitude and direction. Speed is just a number, but velocity (and momentum and force) are vectors because they also include the direction. We imagine vectors as arrows in space with the length representing the magnitude of the vector. You can slide them around: they don’t start at a particular place. Our vectors are three dimensional: every vector can be made of a bit of x, a bit of y and a bit of z.
You can combine two vectors to make a bivector. This is called the exterior product. To imagine this we place the 2nd vector so that it starts where the first finishes. This makes a little parallelogram floating in space, and the area is the magnitude of the bivector. However the bivector is not really a parallelogram: it can be any shape in that plane with that area. And there is a natural way of adding bivectors and they are also 3 dimensional: every bivector can be made of a bit of xy, a bit of yz and a bit of zx.
The exterior product of 3 vectors is a trivector. We can visualize this as a bit of volume with the 3 vectors as edges, but once again it is just the amount of volume that counts, not the shape. Trivectors are just one dimensional, since each is just a multiple of the 1x1x1 xyz volume element.
There are nice diagrams of all these on Wikipedia's Geometric Algebra page.
It often turns out to be useful to combine our 3 dimensions of vectors, 3 of bivectors, 1 of trivectors, plus 1 more for scalars (scalars are just numbers). This makes an 8 dimensional space of multivectors. Multivectors become particularly useful when combined with a way to multiply them, but we won't get to that here.
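As a concrete sketch (in Python, with a component layout and function names of my own invention, not from any particular geometric algebra library), here is the exterior product acting on vectors and bivectors:

```python
# A minimal sketch of the exterior product in 3 dimensions.  Vectors are
# (x, y, z) triples and bivectors are (xy, yz, zx) triples; the layout and
# names here are illustrative only.

def wedge_vectors(a, b):
    """Exterior product of two vectors: a bivector whose magnitude is the
    area of the parallelogram spanned by a and b."""
    ax, ay, az = a
    bx, by, bz = b
    return (ax * by - ay * bx,   # xy component
            ay * bz - az * by,   # yz component
            az * bx - ax * bz)   # zx component

def wedge_bivector_vector(B, c):
    """Exterior product of a bivector with a vector: a trivector, i.e. a
    single number (a multiple of the unit xyz volume element)."""
    Bxy, Byz, Bzx = B
    cx, cy, cz = c
    return Bxy * cz + Byz * cx + Bzx * cy

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert wedge_vectors(e1, e2) == (1, 0, 0)                      # unit xy bivector
assert wedge_bivector_vector(wedge_vectors(e1, e2), e3) == 1   # unit volume
```

Wedging three vectors this way gives the signed volume of the parallelepiped they span, which is why trivectors are one dimensional.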

Momentum types

Ordinary momentum is a vector. It has magnitude and direction.
Angular momentum is best seen as a bivector. The area of the bivector is the magnitude, and its orientation (perpendicular to the axis of rotation) determines the rotational geometry.
Since pressure acts in all directions at once, it is natural to see it as a trivector.
Finally we have heat which is intrinsic to the matter and not involved in directions. It is just a scalar value.
It is easy to combine these together into a single multivector value. The question is how meaningful that is.
One aspect is this: Are there other mechanical momenta that this leaves out? I conjecture that for an infinitesimal amount of matter this is all there is.
Speaking of infinitesimal amounts of matter: It is tempting to think that a small amount of matter can’t have much angular momentum. But actually the smaller matter is, the faster it is able to rotate. Even electrons have significant spin (though the mechanicalness of that might be in some doubt). I wonder if the study of fluid mechanics takes adequate account of small rapidly spinning vortices?


It seems cute, but that is not a justification. It does suggest two lines of investigation:
One is to study fluid mechanics in full generality starting with infinitesimal amounts of matter having these 4 types of momentum. The hope will be that some of the research on Clifford Analysis will turn out to be useful.
The other is to use this in computer simulations of oceans, atmosphere or other fluid situations. These are typically done by dividing the matter to be simulated into little cuboids. The cuboids have some physical characteristics, and the value at the next step is determined from the current value plus the values in neighboring cuboids (plus other forcings, i.e. effort, that may be specified). The hypothesis is that this 8 dimensional value, across 4 types of momentum, is the optimal choice for the value to be stored in each cuboid.
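The hypothesis can be made concrete with a sketch of the per-cuboid state (all names here are invented for illustration; this is not a working solver):

```python
from dataclasses import dataclass

# Hypothetical per-cuboid state for a fluid simulation: one 8-component
# multivector per cell, as suggested above.  Field names are my own.

@dataclass
class CellState:
    heat: float = 0.0                       # scalar part: thermal "momentum"
    momentum: tuple = (0.0, 0.0, 0.0)       # vector part: ordinary momentum
    angular: tuple = (0.0, 0.0, 0.0)        # bivector part (xy, yz, zx)
    pressure_momentum: float = 0.0          # trivector part

    def as_components(self):
        """Flatten to the 8 numbers a solver would store per cuboid."""
        return (self.heat, *self.momentum, *self.angular,
                self.pressure_momentum)

cell = CellState(heat=1.5, momentum=(0.1, 0.0, 0.0))
assert len(cell.as_components()) == 8
```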

Sunday, November 2, 2014

Saving Test Cricket

[Update 3rd Nov: Modified the proposed algorithm to avoid "moral hazard". Will I lose the +1s when I edit? I should. We'll see.]
It is nice to have the 3 levels of Cricket, corresponding roughly to sprint, middle distance and marathon events in Athletics. However Test Cricket seems to mostly survive on cultural links to the past. I don't have a problem with the duration of the game or the shortage of action, but there are problems which need to be, and can be, fixed:
  • Timeless Tests had serious practical problems that were exposed in the last years of the 1930s. The solution of limiting Tests to 5 days and allowing draws has been worse (though it has produced the occasional exciting moment).
  • Games need a mercy rule so that they don't go longer than necessary when they get one-sided. Teams batting on and on when they already have more than enough runs is bad for players and spectators.
  • The deterioration of wickets through the course of a match gives an excessive advantage to the team winning the toss.
Luckily I have a simple solution to all 3 problems:

The solution is to have the two teams' innings in parallel (as near as practicable). Every 30 overs the teams switch who is batting, except that when one team is behind on runs and has lost more wickets, that team is given 2 consecutive batting segments. Here's why this solves all the problems above:
  • The deterioration of the wicket is no longer a problem because it affects both sides equally. In fact it is a good thing because:
  • Wickets can be made to start well but not last longer than 4 days so that wickets fall rapidly from the 5th day onwards. This stops matches going too long, but avoids the need for a time limit. It also encourages teams to score quickly while the wicket is good.
  • This is a perfect mercy rule. The winning team would rarely do more than 30 overs batting beyond what is needed to win the match.
[30 overs is chosen to fit with the natural breaks in the game.]

Saturday, November 1, 2014

group selection of humans

Our species is social, and we have a lot of adaptations designed to support cooperation. The obvious explanation is group selection: groups with those adaptations were more successful than those without. The trouble is that it is very hard for group selection to succeed. Cheating genes, which take advantage of the cooperation of others without reciprocating, seem certain to overwhelm the honest players.

This confused me for a long time, until I read Jared Diamond's "The World Until Yesterday". The key point is the large amount of inbreeding within primitive villages. Most people marry cousins, so that all the people in the village are very closely related. There is some gene transfer with neighbouring villages, but little beyond that.

Group inbreeding potentially creates a situation similar to social insects, where every individual is closely related to everyone else, and in particular to the reproducing females. This allows the group to function as the unit of evolution so that group selection can operate and cooperation evolves. So it seems obvious that this must be the normal (i.e. pre-civilization) human situation.

Of course inbreeding can't be taken too far, and we see that it is natural for high status individuals to have the privilege of partnering outside the group. So how much inbreeding is there? Let me guess that a balance is maintained. Groups with too much inbreeding lose from the direct genetic cost. Groups with too little inbreeding lose by failing to maintain group selection and being invaded by cheating genes.

This is non-expert speculation. However it is an important area to understand because it has obvious implications for the future of humanity. We have left our traditional lifestyle behind so quickly that the evolutionary effects have not had time to reveal themselves.

Friday, October 24, 2014

Security versus Privacy

The difference between security and privacy is not that hard. If others can see my bank transactions then I lack privacy. If others can take money out of my bank account then I lack security. However important people think privacy is, they must admit that security is much more important ... Right?

In fact we see an endless stream of postings that seem to be completely muddled about the difference between security and privacy. There are good reasons for that:
  • Security is about the protection of our assets, including life and health. Privacy is the particular asset that is most immediately compromised by security breaches in the information industry.
  • Often we use privacy to protect our security. This is most obvious in the way we protect passwords and PINs. Anonymity, an extreme form of privacy, is used as a security mechanism by those doing things, good or bad, that others would, rightly or wrongly, seek to punish.
  • An important information security mechanism is public key cryptography, where keys come in pairs: the public key that is made available to all; and the private key that the owner holds and does not share. In this case the word "private" is to distinguish it from "secret": A secret, and in particular a secret key, is something that is shared.
Still it is hard to understand the confusion, because we see that privacy is something that most people seem prepared to give up very lightly. Most of us enter "customer relationship management" schemes for very little reward, put details of our life on social media, tolerate privacy invading indignities at airports.

I ask people pushing for protection of privacy to address actual privacy issues. If they are actually interested in privacy as an enabler of security then they need to include some evaluation of how effective it is in that regard. When governments lacked the ability to penetrate the anonymity of protesters then it was effective. Would it still be effective if governments said that they would not use their new capabilities to penetrate people's anonymity? Pardon my doubts.

Most particularly I want people to acknowledge that the greatest protection of our security against the government, and against the oligarchs, is transparency and accountability. Transparency is anti-privacy, and that is what we need. People advocating for individual privacy need to be explicit about why it is not going to weaken transparency. People who are not advocating for transparency are not the good guys, and their privacy concerns can be dismissed.

Friday, October 17, 2014

Holes in Networks

Thinking about holes is often the best way to understand what is going on.
The first example we usually encounter is in electricity. Suppose we have a solid with each atom holding its electrons in place. If we add an extra electron it is easy to imagine it flowing towards a positive charge (though that might be an oversimplification). But suppose there is a missing electron. The neighbour that is closest to a nearby negative charge will be the one that tends to move into the hole. This moves the hole closer to the negative charge. And so we imagine a sequence of electrons moving into the hole, so that the hole moves towards the negative charge. Well this is a simplification (since electrons are indistinguishable), and it is not the best simplification. The best simplification for human understanding is that the hole is a positive entity which flows towards the negative charge. And in fact the hole behaves for most purposes very similarly to a positively charged electron.
A rather recent example is in Computer Science: you can formally differentiate a type constructor and get a new type which is a one-hole context for the original type. It is, in some sense, the original type but with a hole in one slot that can be filled. Dan Piponi has blogged about this. The idea might be due to Conor McBride, but I expect it was known earlier in related Mathematics. Anyway it seems like a very general way of traversing data structures.
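A small illustration (in Python rather than the Haskell in which this is usually presented; the names are mine): the one-hole context of a list is a (prefix, suffix) pair, and walking the hole along the list visits every position.

```python
# Sketch of a one-hole context for lists: the "derivative" of the list
# type is a pair (elements before the hole, elements after it).

def contexts(xs):
    """Yield (context, element) for every position in xs; the context is
    the list with that position removed."""
    for i, x in enumerate(xs):
        yield (xs[:i], xs[i + 1:]), x

def plug(context, value):
    """Fill the hole with a value, recovering a whole list."""
    before, after = context
    return before + [value] + after

for ctx, x in contexts([1, 2, 3]):
    assert plug(ctx, x) == [1, 2, 3]   # plugging back recovers the original
```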
It is interesting to think about economics. Consider a job vacancy. That is a hole in some sense. Indeed filling the vacancy might generate a new vacancy somewhere else, and so on, much like the electron example above. This applies to other resources, not just labour. So the need for a resource is like a hole. It increases the price which, in the short term, removes the resource from those who could barely afford it.
Does this have anything to do with differentiation? I don't know, but it is interesting to consider Steve Keen's idea: that a major driver/indicator for the economy is not the rate of change of debt, but the derivative thereof. In other words the second derivative of the amount of debt.
I'm also trying to understand cohesion, as in Lawvere's article on the subject. I don't pretend to understand it, but it is interesting the way it talks about distinguishability. This applies to resources too. People have special skills, and the economy works best when people can all be employed maximizing their skills. But we know that when things get bad, people end up in more generic jobs, like driving taxis and labouring, where individuals are less distinguishable. Something similar may apply to other resources: varied and sophisticated use of resources happens more when the economy is functioning well. We can perhaps also consider the case of ecology, where productive ecosystems have high diversity, but weedy general purpose species take over when things are bad.
All this is highly speculative, but why stop there. Here is an analogy between the economy and phases of matter.
  • When things are going well, the economy is like a liquid. In a liquid there is high interaction between the molecules, but also high mobility of molecules. We see a similar thing in the interactivity and mobility of labour and capital in the most successful economies.
  • Feudal systems are like a solid. Labour and capital are stuck in a fixed relationship to each other. This promotes specialization but without mobility the specialization is often non-optimal. Feudal societies were the home of the craft guilds.
  • Anarchic economies, when there is no effective rule of law, have plenty of mobility but too little interaction. This is like a gas.
And, as we know, you need enough pressure to get a liquid. Otherwise things sublimate directly from solid to gas. I leave the economic interpretation of that to the reader.
Anyway, getting back to the subject: I think anyone thinking about the dynamic behaviour of networks should think about holes in the network. And I suspect that has something to do with differentiation (in some funny sense) and/or cohesion.

Wednesday, August 13, 2014

safer team ball games

Even the safest ball games involve players running to the same ball. The result is head clashes and boot-head interactions that cause concussions. It would be safer if the access to the ball changed sides in some way. I had an idea.

The idea came from a game I play with one of my grandsons. You just need a ball that bounces on the available ground, and some marked small enclosed area between the players. The players take turns batting the ball with an open palm, and they have to bounce the ball exactly once in the enclosed area or lose the point.

So I thought of players bouncing the ball to their team mates. Call the team in possession the attackers, and the other team the defenders. Until the ball bounces only the defenders are allowed to try to get the ball, and after the second bounce only the defenders are allowed to get the ball. After the first bounce only the attackers are allowed to get the ball, and naturally the attackers will bounce the ball deliberately to their team mates. This is perhaps still too easy for the attackers, so maybe the attackers should have to get the ball after the bounce and before the top of the trajectory following that first bounce.

The point is that the players on opposing sides aren't going for the ball at the same time. It leaves a lot of room for adjustment of other rules to make an interesting game. I would suggest that the player is only allowed to take possession of the ball and then throw it if he is stationary (i.e. no foot comes down between first touching the ball and throwing it). When moving you can only bat the ball with an open hand. A score might involve bouncing the ball to a team mate in the end zone.

Saturday, July 26, 2014

Bridge bidding arithmetic

[I don't suppose any non-Bridge players will read this. But just in case I'll give some info in square brackets at times. In Bridge players partner in pairs who have to cooperate in bidding. The (pseudo-)auction involves bids by the four players (partners opposite) in order 1c,1d,1h,1s,1nt,2c,...7nt. One way to cooperate is a relay where one player always makes the cheapest bid and the other describes their hand. The hand has 13 cards. We say a hand is 3541 if it has 3 spades, 5 hearts, 4 diamonds and 1 club. "Game" is 4h or 4s or 5c or 5d (or 3nt) and scores a lot extra. Slam (12 or 13 tricks out of 13) scores even more.]

After a relay the asker needs to set the suit below game [to allow slam investigation]. We need to do this below 4h, so it can be done starting at 3s, for example: 4d sets spades; 4c sets hearts; 3s sets a minor and responder always bids 3nt after which 4c sets clubs and 4d sets diamonds.

So let's assume for the moment that we are only showing distributions. How many different distributions can each bid now show?

3h: 1
3d: 1
3c: 1
2nt: 2 (asker then bids 3c and responder can bid 3d or 3h with the 2 distributions)
2s: 3 (asker then bids 2nt and responder can bid 3c or 3d or 3h with the 3 distributions)
2h: 5 (asker bids 2s, responder bids 2nt with 2, 3c/3d/3h with others)
2d: 8 (...)
2c: 13 (...)

Which continues with the Fibonacci numbers (each number the sum of the previous two): 21, 34, 55, ...

So how many distributions are there? Splitting 13 cards among the 4 suits can be done in C(16,3) = 560 ways.

Though the number of available bids only grows linearly as we start the relay lower, while the Fibonacci capacity grows exponentially, we still don't have a lot of room to cover low probability distributions. And certainly we can't afford to waste much space.
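The counting above is easy to check mechanically. This sketch (my own, not from any bridge library) computes the relay capacities and the total number of shapes:

```python
from math import comb

# Capacity of each successively lower bid under the relay scheme above:
# the classic Fibonacci recurrence.

def capacities(n_bids):
    caps = [1, 1, 1]                       # 3h, 3d, 3c each show one shape
    while len(caps) < n_bids:
        caps.append(caps[-1] + caps[-2])   # 2nt=2, 2s=3, 2h=5, 2d=8, 2c=13
    return caps

# Total suit distributions of a 13-card hand: ordered splits of 13 cards
# into 4 suits, i.e. C(13 + 3, 3) by stars and bars.
total_shapes = comb(16, 3)

assert capacities(8) == [1, 1, 1, 2, 3, 5, 8, 13]
assert total_shapes == 560
```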

One can consider what order to show distributions in. A simple scheme is: (a) show more balanced hands first (lower number of distribution points [doubleton=1, singleton=2, void=3]); (b) within that constraint show hands with more major cards (i.e. in decreasing number of hearts+spades); (c) within that constraint show in decreasing heart length.

The reason for showing more balanced hands first is that this makes slam less likely, so asker can often quickly bid game after a lower bid, without relaying it out and telling the defenders more than necessary.

One of the objectives of bidding is to find useful trump fits. [It is desirable to have 8 trumps. Hands without an 8 card major (heart/spade) fit are usually best in 3nt if possible.] Relaying wastes space compared to having both partners contributing. The question of how to best use both partners is a difficult problem. A simple approximation/guess is to use binary search. Here's how that works:

Taking a suit at a time in some algorithmically defined order the bidder cuts the length of that suit in half and bids one step with the higher (or lower) range and otherwise goes up a level and repeats.

Suppose you are playing "Romex" and a 1NT opening is 20+ points [ace=4,k=3,q=2,j=1]. One way to play the responses follows. Note that 1 step is towards the middle, bypass shows more extreme.

At any time by either player: 3s/4c/4d set the suit, as described above.
2c: 0-4 points. Higher bids forcing to game.
2d: 0-3H
2h: 4-7H 0-3S -- beyond this we further break down the majors
2s: 4-5H 4-7S
2nt: 6-7H 4-5S
3c: 6H 6-7S
3d: 7H 6S 0C 0D -- made it with a bid to spare
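As a sanity check of the scheme above (ignoring the point ranges, and assuming hands with an 8+ card major are handled elsewhere), a quick sketch verifying that every major-suit holding gets exactly one response:

```python
# Map (heart length, spade length) to the response bid listed above.
# This only models the major-suit lengths, not the point ranges.

def response(h, s):
    if h <= 3:
        return "2d"
    if 4 <= h <= 7 and s <= 3:
        return "2h"
    if 4 <= h <= 5 and 4 <= s <= 7:
        return "2s"
    if 6 <= h <= 7 and 4 <= s <= 5:
        return "2nt"
    if h == 6 and 6 <= s <= 7:
        return "3c"
    if h == 7 and s == 6:
        return "3d"
    return None    # 8+ card suits: assumed handled by other bids

# Every feasible holding with at most 7 cards in each major gets one bid:
for h in range(8):
    for s in range(8):
        if h + s <= 13:
            assert response(h, s) is not None
```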

That worked well because narrowing the range in the majors also narrowed the range in the minors. But in other auctions we can find all the 8 card major fits while the minors don't get narrowed down (as much). For example here are continuations after 1n-2s:
4c/4d: setting major with 4, otherwise deny 4
2n: 2-3S (i.e. matching the top half of responder's 4-7)
3c: 3H 0-1S (i.e. hitting the top of opener's 4-5H range)
3d: 0-2H 1S (hearts eliminated)
3h: 0-2H 0S 4-5C 4-7D -- with 11-13 minor cards this can be 47, 56, 65 or 74.
and a few more.

Given the Fibonacci connection it might be better to split unevenly with approximately 38% in 1 step and 62% beyond (1:1.62 = 0.62:1 is the golden ratio, and consecutive Fibonacci numbers tend towards that ratio). But this is not nearly enough, and low probability distributions have to be lumped together to make it work when starting at 2d. Realistically one also needs to start lower, or start with a restriction on the range of distributions. And indeed that is commonly the case for auctions starting lower than 1nt.
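The golden-ratio claim is easy to verify numerically (a throwaway check, not part of any bidding system):

```python
# Ratios of consecutive Fibonacci numbers converge to 1/phi ~ 0.618,
# which is why a ~38%/62% split matches the relay capacities.
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

ratio = fib[-2] / fib[-1]
assert abs(ratio - 0.618) < 0.001
```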

Tuesday, May 6, 2014

Mathematics and programming collide

Mathematics is (by my definition) about thinking clearly about problems (that have been made sufficiently precise, sometimes by simplifying assumptions). Programming is about thinking clearly about algorithms.

So we see that programming should be a part of Mathematics. And, lo and behold, now that we finally understand how to do functional programming, it makes heavy use of concepts from Category Theory which is a modern unifying part of mathematics. It would be good if more of the best Mathematicians became interested in this very important application of modern Mathematics.

And this is likely to happen because, from the other direction, Mathematicians need computers to control the complexity of modern Mathematics. This is described by Vladimir Voevodsky (one of the world's top mathematicians) in a recent talk. You don't have to understand the mathematics to get the core message, that complex new Mathematics needs to be mechanically verified to be trusted. And it turns out that the proof assistants used for this mechanical verification are within a whisker of being programming languages.

Indeed the functional programming world seems to be most aware of this expanding intersection of Mathematics and programming. This is partly because proof assistants can sometimes be used to prove programs correct. So it was the Sydney Functional Programming society that recently held a "coq fight", which involved the live competitive generation of proofs using the coq proof assistant.

Here's something that I expect will flow from this cross pollination of Mathematics and programming. There will be emphasis on the precise definition of the problem. Often in the programming side there won't be any need to do more, because a large proportion of current programming activity just involves doing obvious things to create the web site or other program type. Computers should do those obvious things for us. Once we get past that hurdle, then the human programmer can spend more time where real creativity is needed, either artistic or algorithmic. And the many situations where Mathematics can help us will be much more accessible because mathematics will be organized so that those prepared to understand it can easily apply it.

Tuesday, February 18, 2014

name change at marriage

It was nice to see that a lady we would all want to remember had a trophy named after her. The one niggle was that her name is given as the married name of her second rather brief marriage. And looking at the record of some of her achievements we see that her name has been changed, for consistency, to that married name. And, indeed, my wife is annoyed to find that her maiden name has been expunged from the record in the same way on one of her early achievements.

Women changing their name at marriage doesn't work well in the modern world. Here's an alternative scheme:

  • Women keep their last name at marriage, but change their middle name to their husband's surname. And similarly the man changes his middle name to the wife's surname.
  • Children are given the last name of the same-sex parent, and start with the other parent's surname as their middle name.
Note that middle name here means the last middle name. Other middle names are also possible though I wouldn't like to see that get out of control.

I did think of having the boys get their mum's last name, and the girls get their dad's. However looking back to when I was a kid, even though I was closer to my mother, I know I would have wanted to have my dad's surname. I presume that most girls would want to have their mum's name.

Monday, January 20, 2014

Natural Gas is not a good step towards solving AGW

The Greens have discovered that a lot of anti-AGW activity is fake, being secretly funded by multi-billionaires and corporations with commercial interests in pumping out CO2 and other greenhouse gases. The way this is being written up is meant to give the impression that all the criticism of the Green movement is motivated by these base motives. I'd just like to say that, though I am persona non grata in various forums for my negative comments on Greens and AGW activity, I have never received any funding from any source. I'll list some of my criticisms below, but first let me make a new one:

The Green/AGW movement seems incorrectly content about the move from coal burning to natural gas burning. This, they say, will allow us to meet our initial goals of 5-20% reduction in CO2 emissions, on the road to the necessary eventual reduction of 80% and more. Yes it does make that first step easy, but then it acts as an impediment to any further steps. The reason is that it creates infrastructure for burning natural gas and that infrastructure can't easily be abandoned. I suppose the Green/AGW folk imagine that a higher carbon price, or other government action, will mean that the natural gas burning company will be unable to meet its costs, including interest payments on the construction cost, and this will force it to close. But this misunderstands how capitalism works. In an unregulated world what would happen is that the owner would go bankrupt. Then its assets would be sold. The buyer would then have a lower capital cost to fund with interest payments. And so the new owner can make electricity more cheaply without going broke. It is as if the infrastructure, being there, wants to be used. And the infrastructure is more than just the electricity plant: it includes gas drilling equipment, pipelines, various sorts of human expertise, and more. Of course this is just words. You would need a good model of the economy (and the technology and the geology and more) to really know how important my point is, but I don't get the impression the proponents of natural gas have even given this matter any thought.

Here are some other points I have made:

1. Modern nuclear power is the only chance for energy that is cheap and reliable enough to displace fossil fuels. Renewable energy (and CCS) are so implausible as a power source for modern civilization that they can only be regarded as a front for the fossil fuel industries. And what do we see when countries move away from nuclear power? Lots of talk about renewables and lots of quiet implementation of coal burning.

2. We need to try to understand the climate so that we can manage it. The Greens insist that we need to avoid affecting the natural world, and put up with the consequences. This puts the Greens firmly against the ordinary voter and makes it impossible to get countries like Canada and Russia onside. We need the world to be on the warm and wet side of natural to feed 10 billion people. In particular we need to expend more effort trying to understand what happened at the end of the previous interglacial, when the sea level (a proxy for temperature) rose gradually throughout the interglacial before plummeting at amazing speed for thousands of years at the end. It seems to me that one needs to worry about what happens when Arctic water warms up, increasing the amount of open water. This can lead to more snow on the surrounding land, and one must consider how this might interact with natural events such as a period of weak sun (which we seem to be getting right now) and a big volcanic eruption (which has recently been shown to happen regularly without the need for additional input from the movement of magma).

3. Greens who aren't scientists often say that others shouldn't be allowed to have opinions on AGW because they aren't scientists (or aren't climate scientists). This makes everybody suspicious. Indeed it would be a good thing if we could get all the old socialists who have drifted over to the Greens to shut up.

Sunday, January 12, 2014

What is real?

I'm not that interested in Philosophy, but I just thought I'd write this down to get it out of the way. The following stuff seems obvious to me, but I suspect that many people will not agree with it.

Timothy Gowers' latest blog post discusses the fact that there is exactly one complete ordered field, namely the Real numbers. [You don't need to understand that stuff for the following discussion]. He points out two ways of describing a real number: (a) via an infinite decimal sequence; (b) via pairs of complementary sets of rational numbers (a Dedekind cut). It will mystify non-mathematicians that these different things are said to be the same. The point is that they are exactly the same for all the practical purposes that we need the real line for, as is the description of a complete ordered field without even explicitly constructing the Reals. What we'll come back to is the question of whether the Real numbers really exist at all. First we move from Mathematics to Science.

The other day I was telling my grandson what a scientific theory is, namely it makes predictions which can, at least in principle, be falsified by observation. Two scientific theories are the same if they make the same predictions, even if the theories seem different. We have a recent example that illustrates this, in quantum theory. The original quantum theory was expressed in terms of infinite dimensional Complex vector spaces (yikes). The predictions then made met the most exacting standards of accuracy available, and continue to do so. Richard Feynman showed how one could instead draw collections of diagrams (the famous Feynman diagrams we often see in popular science) and use those to do calculations in a simpler and certainly more humanly accessible way. But these quickly get hard in more complex situations. Recently there has been a claim that we can portray quantum situations in a sophisticated mathematical context that allows calculations to be done much more efficiently. All these ways of describing reality make the same predictions, with varying degrees of computational efficiency and human accessibility. It is just not sensible, and certainly not Science, to say that one of them correctly describes reality while the others just happen to give the same answer. However human accessibility and ease of computation are key ways of evaluating theories, and what follows is an extreme example.

As we know, the Earth gives every indication of being 4.5 billion years old, and to have undergone massive changes during that time. Geology and related Sciences are all written as if that were the case. But suppose you lived in a society where you were likely to get burnt at the stake if you expressed the view that the world was more than 6.5 thousand years old. Then you might come up with an alternative theory: that God created the world 6.5 thousand years ago, exactly as if it had existed for 4.5 billion years and as if it had experienced many exciting events since that time. Maybe he ran a perfect simulation to see what it should be like. Now all scientific papers can still be written, but they have to have frequent weasel words so as not to suggest that the world really existed before 6.5 thousand years ago. The two theories are identical and lead to the same predictions and expectations. To claim that one theory is true and the other false is therefore not Science. However, like Feynman diagrams, the idea that the world really is 4.5 billion years old is the one that is more humanly accessible. Also the scientific community has a role in picking, from equivalent formulations, standard ways of describing scientific theories so that the scientific conversation can proceed smoothly. If the Geology literature was a mix of those describing a 4.5 billion year old world, and those describing a recent but equivalent creation, then the conversation would not proceed smoothly.

We can see that making specific choices from equivalent descriptions of scientific theories is very similar to making specific choices among representations of the Real numbers. So it is relevant to look into the status of the Real numbers themselves. Our conception of the Real line plays an important role in our understanding of space, time and many other aspects of reality. However the Real numbers themselves have a rather tenuous connection to reality. We can easily ascribe a meaning to the Rational numbers (such as 2/3) and also to the computable numbers. A computable number is one where a computer program can (for example) give you as many decimal places as you request for that number. However there are only a countable number of computable numbers (so we could in principle give each one an integer index from 1 up), because there are only a countable number of programs (since programs are finite strings taken from a finite alphabet of symbols). And it is quite easy to prove that the Reals of the complete ordered field are not countable (Cantor's diagonal argument). What can we make of these multifarious uncomputable Reals? The key point is that all Mathematics is done with finite strings taken from a finite alphabet. There are no infinities there. Yet hypothetical infinities are a recurring theme. The infinite things, such as uncomputable Real numbers, are a human construct that gives meaning to the Mathematics, which in turn enables us to understand reality.
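The diagonal argument is short enough to sketch in code (an illustration of the standard proof, in my own notation): given any claimed enumeration of reals in [0, 1) as rows of decimal digits, we can construct a number that differs from row n in digit n, so it appears nowhere in the list.

```python
# Cantor's diagonal argument, sketched: change each diagonal digit.
# Using only the digits 5 and 6 sidesteps the 0.4999... = 0.5000...
# ambiguity that digits 0 and 9 would introduce.
def diagonal_missing(rows):
    return [5 if row[n] != 5 else 6 for n, row in enumerate(rows)]

rows = [
    [3, 1, 4, 1],   # 0.3141...
    [0, 5, 0, 0],   # 0.0500...
    [1, 2, 3, 4],   # 0.1234...
    [9, 9, 9, 9],   # 0.9999...
]
d = diagonal_missing(rows)
print(d)  # [5, 6, 5, 5] -- differs from row n at position n
assert all(d[n] != row[n] for n, row in enumerate(rows))
```

Any enumeration you propose, including an enumeration of all programs, is defeated the same way, which is why the full set of Reals outruns anything we can finitely describe.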

In the early days of calculus, the practitioners (such as Newton) liked to talk about infinitesimal numbers. These sure make it easier to think about calculus. But there was a backlash against this, since "infinitesimal numbers don't really exist", and the whole edifice was reconstructed less elegantly using just the Reals. It was subsequently shown (by Abraham Robinson, using Model Theory) that you can use infinitesimals in a rigorous way: nonstandard analysis. In fact the reality of Real numbers is just as dubious as infinitesimals. They both enable human intuition to make sense of mathematics and then of the world.
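Infinitesimal-style reasoning even has a concrete computational incarnation (this is dual numbers, the basis of forward-mode automatic differentiation, not Robinson's construction, but it makes the same intuition literal): adjoin a symbol eps with eps * eps = 0, and derivatives fall out of ordinary arithmetic.

```python
# Dual numbers a + b*eps with eps*eps = 0: a tiny algebra in which
# "infinitesimal" reasoning is literal rather than hand-waving.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value part and infinitesimal part

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    # f(x + eps) = f(x) + f'(x) eps, so read off the eps coefficient
    return f(Dual(x, 1.0)).b

# d/dx (x^3 + 2x) at x = 2 is 3*4 + 2 = 14
print(derivative(lambda x: x * x * x + 2 * x, 2.0))  # 14.0
```

Whether one calls eps "real" is exactly the kind of question the post argues is beside the point: the scheme works.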

Well it turns out that mathematicians are trying to build mathematics up from core principles in a new way: one that respects the fact that Maths is done with finite strings from a finite alphabet, and that takes seriously the question of when two things are equal. This is Homotopy Type Theory. I'm trying to understand it. I can also say that the Wombat Programming language makes extensive use of the idea of multiple representations of the same value, and also has a flexible notion of equality. Maybe when I understand HoTT I can make Wombat even better.