Tuesday, February 18, 2014

name change at marriage

It was nice to see that a lady we would all want to remember had a trophy named after her. The one niggle was that her name was given as the married name of her second, rather brief, marriage. And looking at the record of some of her achievements we see that her name has been changed, for consistency, to that married name. And, indeed, my wife is annoyed to find that her maiden name has been expunged in the same way from the record of one of her early achievements.

Women changing their name at marriage doesn't work well in the modern world. Here's an alternative scheme:

  • Women keep their last name at marriage, but change their middle name to their husband's surname. Similarly, the husband changes his middle name to his wife's surname.
  • Children are given the last name of the same-sex parent, and start out with the other parent's surname as their middle name.
Note that middle name here means the last middle name. Other middle names are also possible, though I wouldn't like to see that get out of control. The scheme is sketched in code below.
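Here is the scheme as a toy function, purely illustrative (the names and the "M"/"F" encoding are just for the example):

    def married_names(husband, wife):
        # Each keeps their surname; the middle name becomes the spouse's surname.
        h_first, h_last = husband
        w_first, w_last = wife
        return (f"{h_first} {w_last} {h_last}", f"{w_first} {h_last} {w_last}")

    def child_name(first, sex, father_surname, mother_surname):
        # Surname from the same-sex parent; middle name from the other parent.
        if sex == "M":
            return f"{first} {mother_surname} {father_surname}"
        return f"{first} {father_surname} {mother_surname}"

    print(married_names(("John", "Smith"), ("Mary", "Jones")))
    # ('John Jones Smith', 'Mary Smith Jones')
    print(child_name("Alice", "F", "Smith", "Jones"))   # Alice Smith Jones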

I did think of having the boys get their mum's last name, and the girls get their dad's. However, looking back to when I was a kid, even though I was closer to my mother, I know I would have wanted my dad's surname. I presume that most girls would want their mum's name.

Monday, January 20, 2014

Natural Gas is not a good step towards solving AGW

The Greens have discovered that a lot of anti-AGW activity is fake, being secretly funded by multi-billionaires and corporations with commercial interests in pumping out CO2 and other greenhouse gases. The way this is being written up is meant to give the impression that all the criticism of the Green movement is motivated by these base motives. I'd just like to say that, though I am persona non grata in various forums for my negative comments on Greens and AGW activity, I have never received any funding from any source. I'll list some of my criticisms below, but first let me make a new one:

The Green/AGW movement seems wrongly content with the move from coal burning to natural gas burning. This, they say, will allow us to meet our initial goals of 5-20% reduction in CO2 emissions, on the road to the necessary eventual reduction of 80% and more. Yes, it does make that first step easy, but it then acts as an impediment to any further steps. The reason is that it creates infrastructure for burning natural gas, and that infrastructure can't easily be abandoned. I suppose the Green/AGW folk imagine that a higher carbon price, or other government action, will mean that the natural gas burning company will be unable to meet its costs, including interest payments on the construction cost, and this will force it to close. But this misunderstands how capitalism works. In an unregulated world what would happen is that the owner would go bankrupt. Then its assets would be sold. The buyer would then have a lower capital cost to fund with interest payments, and so the new owner can make electricity more cheaply without going broke. It is as if the infrastructure, being there, wants to be used. And the infrastructure is more than just the electricity plant: it includes gas drilling equipment, pipelines, various sorts of human expertise, and more. Of course this is just words. You would need a good model of the economy (and the technology and the geology and more) to know how important my point really is, but I don't get the impression the proponents of natural gas have given this matter any thought.
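To make the bankruptcy point concrete, here is a toy calculation. The plant size, fuel cost, sale price and interest rate are all invented for illustration:

    def cost_per_mwh(fuel, capital, rate, annual_mwh):
        # Fuel cost plus the annualized capital charge, per MWh generated.
        return fuel + capital * rate / annual_mwh

    ANNUAL_MWH = 4_000_000    # rough output of a ~500 MW plant run hard
    FUEL = 30.0               # $/MWh for gas plus any carbon price
    RATE = 0.08               # cost of capital

    # Original owner must service the full $1bn construction cost.
    print(cost_per_mwh(FUEL, 1_000_000_000, RATE, ANNUAL_MWH))  # 50.0 $/MWh
    # After bankruptcy the plant sells for $250m: same plant, lower breakeven.
    print(cost_per_mwh(FUEL, 250_000_000, RATE, ANNUAL_MWH))    # 35.0 $/MWh

The plant keeps running at a price that would have broken the original owner; only the owner changes.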

Here are some other points I have made:

1. Modern nuclear power is the only chance for energy that is cheap and reliable enough to displace fossil fuels. Renewable energy (and CCS) is so implausible as a power source for modern civilization that it can only be regarded as a front for the fossil fuel industries. And what do we see when countries move away from nuclear power? Lots of talk about renewables and lots of quiet implementation of coal burning.

2. We need to try to understand the climate so that we can manage it. The Greens insist that we need to avoid affecting the natural world, and put up with the consequences. This puts the Greens firmly against the ordinary voter and makes it impossible to get countries like Canada and Russia onside. We need the world to be on the warm and wet side of natural to feed 10 billion people. In particular we need to expend more effort trying to understand what happened at the end of the previous interglacial, when the sea level (a proxy for temperature) rose gradually throughout the interglacial, before plummeting at amazing speed for thousands of years at the end. It seems to me that one needs to worry about what happens when Arctic water warms up, increasing the amount of open water. This can lead to more snow on the surrounding land, and one must consider how this might interact with natural events such as a period of weak sun (which we seem to be getting right now) and a big volcanic eruption (which has recently been shown to happen regularly without the need for additional input from the movement of magma).

3. Greens who aren't scientists often say that others shouldn't be allowed to have opinions on AGW because they aren't scientists (or aren't climate scientists). This makes everybody suspicious. Indeed it would be a good thing if we could get all the old socialists who have drifted over to the Greens to shut up.

Sunday, January 12, 2014

What is real?

I'm not that interested in Philosophy, but I just thought I'd write this down to get it out of the way. The following stuff seems obvious to me, but I suspect that many people will not agree with it.

Timothy Gowers' latest blog post (http://gowers.wordpress.com/2014/01/11/introduction-to-cambridge-ia-analysis-i-2014/) discusses the fact that there is exactly one complete ordered field, namely the Real numbers. [You don't need to understand that stuff for the following discussion]. He points out two ways of describing a real number: (a) via an infinite decimal sequence; (b) via pairs of complementary sets of rational numbers (a Dedekind cut). It will mystify non-mathematicians that these different things are said to be the same. The point is that they are exactly the same for all the practical purposes that we need the real line for, as is the axiomatic description of a complete ordered field that never explicitly constructs the Reals at all. What we'll come back to is the question of whether the Real numbers really exist at all. First we move from Mathematics to Science.

The other day I was telling my grandson what a scientific theory is, namely one that makes predictions which can, at least in principle, be falsified by observation. Two scientific theories are the same if they make the same predictions, even if the theories seem different. We have a recent example that illustrates this, in quantum theory. The original quantum theory was expressed in terms of infinite-dimensional Complex vector spaces (yikes). The predictions then made met the most exacting standards of accuracy available, and continue to do so. Richard Feynman showed how one could instead draw collections of diagrams (the famous Feynman diagrams we often see in popular science) and use those to do calculations in a simpler and certainly more humanly accessible way. But these quickly get hard in more complex situations. Recently there has been a claim that we can portray quantum situations in a sophisticated mathematical context that allows calculations to be done much more efficiently. All these ways of describing reality make the same predictions, with varying degrees of computational efficiency and human accessibility. It is just not sensible, and certainly not Science, to say that one of them correctly describes reality while the others just happen to give the same answer. However human accessibility and ease of computation are key ways of evaluating theories, and what follows is an extreme example.

As we know, the Earth gives every indication of being 4.5 billion years old, and of having undergone massive changes during that time. Geology and related Sciences are all written as if that were the case. But suppose you lived in a society where you were likely to get burnt at the stake if you expressed the view that the world was more than 6.5 thousand years old. Then you might come up with an alternative theory: that God created the world 6.5 thousand years ago, exactly as if it had existed for 4.5 billion years and as if it had experienced many exciting events during that time. Maybe he ran a perfect simulation to see what it should be like. Now all scientific papers can still be written, but they have to have frequent weasel words so as not to suggest that the world really existed before 6.5 thousand years ago. The two theories are identical and lead to the same predictions and expectations. To claim that one theory is true and the other false is therefore not Science. However, like Feynman diagrams, the idea that the world really is 4.5 billion years old is the one that is more humanly accessible. Also the scientific community has a role in picking, from equivalent formulations, standard ways of describing scientific theories so that the scientific conversation can proceed smoothly. If the Geology literature were a mix of papers describing a 4.5-billion-year-old world and papers describing a recent but equivalent creation, then the conversation would not proceed smoothly.

We can see that making specific choices from equivalent descriptions of scientific theories is very similar to making specific representations of the Real numbers. So it is relevant to look into the status of the Real numbers themselves. Our conception of the Real line plays an important role in our understanding of space, time and many other aspects of reality. However the Real numbers themselves have a rather tenuous connection to reality. We can easily ascribe a meaning to the Rational numbers (such as 2/3) and also to the computable numbers. A computable number is one where a computer program can (for example) give you as many decimal places as you request for that number. However there are only a countable number of computable numbers (so we could in principle give each one an integer index from 1 up), because there are only a countable number of programs (since programs are finite strings taken from a finite alphabet of symbols). And it is quite easy to prove that the Reals of the complete ordered field are not countable (http://en.wikipedia.org/wiki/Cantor's_diagonal_argument). What can we make of these multifarious uncomputable Reals? The key point is that all Mathematics is done with finite strings taken from a finite alphabet. There are no infinities there. Yet hypothetical infinities are a recurring theme. The infinite things, such as uncomputable Real numbers, are a human construct that gives meaning to the Mathematics, which in turn enables us to understand reality.
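As an aside, "computable" means something very concrete: a program that yields as many digits as you ask for. A minimal sketch for sqrt(2), using exact integer arithmetic:

    from math import isqrt

    def sqrt2_digits(n):
        # isqrt(2 * 10**(2n)) == floor(sqrt(2) * 10**n): n correct decimal places.
        s = str(isqrt(2 * 10 ** (2 * n)))
        return s[0] + "." + s[1:]

    print(sqrt2_digits(20))   # 1.41421356237309504880

There are only countably many such programs, which is exactly the gap the diagonal argument exposes.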

In the early days of calculus, the practitioners (such as Newton) liked to talk about infinitesimal numbers. These sure make it easier to think about calculus. But there was a backlash against this, since "infinitesimal numbers don't really exist", and the whole edifice was reconstructed less elegantly using just the Reals. It was subsequently shown (non-standard analysis, via Model Theory) that you can use infinitesimals in a rigorous way. In fact the reality of the Real numbers is just as dubious as that of infinitesimals. Both enable human intuition to make sense of the mathematics, and then of the world.

Well it turns out that mathematicians are trying to build mathematics up from core principles in a new way: one that respects the fact that Maths is done with finite strings from a finite alphabet, and that takes seriously the question of when two things are equal. This is Homotopy Type Theory. I'm trying to understand it. I can also say that the Wombat Programming language (wombatlang.blogspot.com.au) makes extensive use of the idea of multiple representations of the same value, and also has a flexible notion of equality. Maybe when I understand HoTT I can make Wombat even better.

Monday, December 16, 2013

The Mathematics Game

The world mathematics community has become disenchanted with the system of peer review for academic journals, but is struggling to find a way to replace it. For the purposes of appointment and promotion they need a way for Mathematicians to be evaluated on their research and also on their breadth and depth of knowledge. This is also important because clever young people love being able to show off their inventiveness: it is what leads many of them into Mathematics.

Rather than inventing a solution from scratch, let's take what we know works and add a little cryptographic magic. Here are some things that we know work:
  1. The Arxiv system for holding academic papers and for tracking changes to them.
  2. The mathoverflow system (and similar) for asking questions and for rating questions and answers and participants.
  3. Polymath style cooperative projects.
  4. Khan Academy and similar systems of self-paced learning.
  5. Repositories of knowledge such as Wikipedia and ncatlab.
  6. Math Olympiad and other competitions.
The proposal will have the following features:
  1. You can play the game at any level, starting with K-12 mathematics, and up to new research.
  2. Participant IDs are linked to unique real world individuals. However you can play with pseudonyms, then claim the credit if you do well, but never need to own up to mistakes. Reviews can also be pseudonymous, freeing the reviewer to be honest.
  3. Abusive pseudonyms can be unmasked. Subsequent pseudonyms by that user will, for some time, have an elevated "abuse level" that users and software can take into account. Fair but tough reviews need to be endorsed by others as non-abusive to prevent abuse of the abuse system.
The system runs on various sorts of points and the interactions between them. The stackexchange folk (who run mathoverflow and similar sites) are experts on this and their advice should be sought. A possible scheme might be:
  1. Mathcoins are earned in various ways (including doing MOOCs and accompanying tests), and can then be spent to allow the participant to attempt higher level actions, which can allow them to move up on a more permanent basis.
  2. Achievement points are earned by well-regarded actions and they accumulate. This is rather like masterpoints in Bridge and encourages the enthusiast as much as the skillful. This is important because there will be lots of work (such as marking middle level participation) needed to keep the wheels ticking over.
  3. Levels are more like the rankings in Tennis, though more blurred. At the top it is the judgement of peers. At the bottom it is mostly automated. In between the judgement of people above is the key. Moving up the levels is the objective of the game, and hopefully those at the top become stars, though they can, if they like, hide behind a pseudonym.
To start with, every participant would have an authenticating public key. The player can then generate as many additional public keys as necessary to represent pseudonyms. The activities (Arxiv/etc) would need to be modified to support this (or replaced), including supporting the authentication of all actions.

The easy thing will be for the participant to link a pseudonym to themselves (or to another pseudonym). All that is needed is to generate a "certificate" claiming that the two public keys represent the same person, and have the certificate signed by both public keys.
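A minimal sketch of such a linking certificate, here using Ed25519 via the Python 'cryptography' package (the claim format is invented for illustration):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives import serialization

    def raw(public_key):
        return public_key.public_bytes(serialization.Encoding.Raw,
                                       serialization.PublicFormat.Raw)

    main = Ed25519PrivateKey.generate()    # the authenticating identity key
    alias = Ed25519PrivateKey.generate()   # a pseudonym key

    # The certificate: a claim that both keys are the same person, signed by both.
    claim = b"same-person:" + raw(main.public_key()) + b":" + raw(alias.public_key())
    certificate = (claim, main.sign(claim), alias.sign(claim))

    # Verification by anyone holding the two public keys (raises on failure):
    c, sig_main, sig_alias = certificate
    main.public_key().verify(sig_main, c)
    alias.public_key().verify(sig_alias, c)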

There are lots of other things that need to be done. They can all be done with a Trusted 3rd Party solution. Many of them can be done more elegantly and securely with cryptographic cleverness. It is also sometimes possible to divide information between trusted 3rd parties so that the compromise of only one doesn't reveal important information.
  1. Proving that a pseudonymous identity has sufficient points/etc to participate in high level activities.
  2. Identifying the abusiveness status of pseudonyms without identifying the real participant.
  3. Transferring mathcoins to, from and between pseudonyms.
  4. ... and much more
I think that the idea of Mathematics as a vast interconnected system, with no insurmountable barriers from the bottom to the top, would be very powerful and productive.

Saturday, November 16, 2013

The key2key project

After retiring I mostly pursued my interests in peak oil and in computer network security. While at CSIRO I had published an Internet Draft on "Basic Internet Security Model". It is still online at http://tools.ietf.org/html/draft-smart-sec-model-00, though it expired long ago. Later I tried to build on that to create a secure Internet, in what I called the Key2key Project.

After the recent security issues on the Internet (brought to light by Snowden) I thought I should look into reviving it. However it doesn't seem like something where I am likely to make any headway, given the cool/hostile reaction to my '99 Internet Draft years ago. Anyway, for the record, here is the last, rather dated and very incomplete, key2key overview doc:

The Key2key Project

The end2end interest group created the ideas on scalability that led to the Internet. The aim of the key2key project is to extend this philosophical framework into the security area to create a secure overlay network.

A trusted system is one that can harm the truster. It may actually do harm if it fails in some way, or if the trust that was placed in it was misplaced.

Security is when you know which systems you trust, and explicitly agree to place that trust. We don't consider whether that is because the trusted systems are actually believed to be trustworthy, or just that the alternatives are believed to be worse. Food security is when you get to balance the risk that the food is poison against the risk of starvation. Food insecurity is when you are force fed. In the Internet today, security is not end-to-end. That is why Internet users are trusting intermediate hardware and software systems that they don't know exist.

This document covers the following areas:
  • Modelling Internet entities and sub-entities. This is a necessary step to understanding the problem.
  • Modelling cryptographic security technology: hashes, encryption, verification, signatures.
  • Modelling communication between entities. This will make it possible to define when a protocol is secure, and define a framework for building secure protocols. These secure protocols will be necessary for building our secure overlay network.
  • Modelling the common and crucial situation when one entity executes software "on behalf of" another (OBO).
  • A device for human signatures (DHS), and the implications of its limitations.
  • Delegating specified limited powers to sub-entities.
  • Securely booting a PC and setting it up as a sub-entity capable of representing the user on the network, and referring matters beyond its delegation up to the DHS.
  • A protocol for communication by "on behalf of" execution. It is intended to show eventually, but not in this document, that this is the only reasonable approach to this problem.
  • A simplistic e-commerce application will illustrate in detail how these components work together to make a secure system.

Entities and sub-entities

Distributed computing is very different when the computers involved are under the control of a single entity, compared with the case where the computers are controlled by separate entities. For the former the important issue is performance. The key2key project is all about the latter, communication between separate entities. In this case the main issue is security [footnote: However key2key can have good performance. Though the main control communication in key2key is often forced to follow potentially low performance routes, bulk data transfer is direct].

Legal entities (people and organizations) have sub-entities, such as employees and computer systems, which are not legal entities themselves, but can be given a delegation to act on behalf of the legal entity. Legal entities are not connected directly to the network. So in order to perform actions on the Internet they need to have some way to give a delegation to a computer system to act on their behalf. This can be quite informal, and the legal implications of the mechanism chosen will rarely be tested in court. In this document we will discuss well defined mechanisms which are appropriate as the basis for more serious interaction between legal entities via the network.

We want to get the communication between separate legal entities via the network onto a sound logical footing. It is important to understand that an individual acting with delegation as an employee is, for our purposes, entirely different from that individual acting as themselves. The fact that these two (sub)entities share the same brain gives rise to serious security issues. However this problem predates computing and networking. We aren't going to attempt to solve it, though it is useful to consider how well traditional legal approaches carry over into the network world.

Cryptographic technology

The key2key project relies on certain capabilities that are usually provided by cryptographic technologies, but can sometimes be provided in a simpler way by a trusted third party:
  • Secure hash (cryptographic checksum). This is a small fixed-size number, typically 256 bits, which uniquely determines some larger bit string. In key2key: end points are represented by the secure hash of a public key; immutable files are represented by the secure hash of their contents. The required characteristics are that there is vanishing probability that two bit strings will give the same hash, and that it is computationally infeasible, given a bit string, to find a different bit string that hashes to the same result. This capability could instead be provided by a trusted 3rd party that remembered bit strings and returned a sequence number.
  • Encryption in key2key applications is used for access control of information that has to go via a 3rd party. Of course this often includes providers of network services. It is commonly the case that, if data is not completely public domain, it is easier to encrypt it than to evaluate whether the 3rd parties who will see it are entitled to see it. Note that the important public keys in key2key are not used for encryption, only for signature verification. Encryption public keys are always separate and usually temporary.
  • The bulk of communication between key2key end points is verified by a temporary agreed shared key (whether or not the communication is encrypted). This means that each party knows the communication came from the other but doesn't allow them to prove that to a 3rd party.
  • Digital signing and verification is only used during the setup phase of communication, and for communications that the recipient wants to be able to prove to a 3rd party that they received. If clever algorithms based on sophisticated mathematics were to cease to be secure then a system using shared keys via a trusted third party would also be possible. Important long term public keys can use combined algorithms, and/or use multiple keys where the matching private keys are not held in one place.
Communication in key2key is between end-points identified by the hash of a public key. The first thing sent between the parties is the public key itself, which must hash to the identifying hash to be accepted. After that other cryptographic services and keys can be agreed between the end points.
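A sketch of that identification step, with SHA-256 as the secure hash and Ed25519 keys as an example choice:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives import serialization

    priv = Ed25519PrivateKey.generate()
    pub_bytes = priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    endpoint_id = hashlib.sha256(pub_bytes).hexdigest()   # how others name this end point

    # On connection the key itself is the first thing sent; accept it only
    # if it hashes to the identifier we already had.
    received_key = pub_bytes
    assert hashlib.sha256(received_key).hexdigest() == endpoint_id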

Logical communication model

Each end-point is under the control of a legal entity (or in rare cases multiple entities, in some and-or tree structure [footnote: In the 'and' case all communication goes to each of the entities, and anything coming from it is approved by all. In the 'or' case communication goes to an unknown one of the entities and anything coming from it is approved by one of them.]). Initially the end points don't, by default, know what entity controls the other end. Often the initiating party will use a temporary public key just for that connection, and there may never be any call for the initiator to reveal who they are.

Two machines acting under common control might just move data back and forth according to some distributed computing algorithm that the owner has chosen to use. Communication between separate legal entities can only take place if it is meaningful. The agreed protocol must be able to be interpreted as a sequence of assertions and requests, in order for it to be possible to check if the protocol securely protects the interests of each party.

If end point 1 (EP1) sends the assertion "the sky is blue", then the receiving end can only infer and record the fact that "EP1 asserts that the sky is blue". Each end point keeps a store of beliefs and of business logic. When a request comes in, then the end point will effectively try to construct a proof that the request should be honoured.
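A toy version of that belief store, purely to pin down the idea (the rule format is invented):

    class EndPoint:
        def __init__(self):
            self.beliefs = set()   # facts of the form (asserter, claim)
            self.rules = []        # each maps the belief set to permitted requests

        def receive_assertion(self, sender, claim):
            # Record who asserted it, never the bare claim itself.
            self.beliefs.add((sender, claim))

        def honour(self, request):
            # "Construct a proof": here, any rule that derives the request.
            return any(request in rule(self.beliefs) for rule in self.rules)

    ep = EndPoint()
    ep.receive_assertion("EP1", "the sky is blue")
    ep.rules.append(lambda beliefs:
        {"report-weather"} if ("EP1", "the sky is blue") in beliefs else set())
    print(ep.honour("report-weather"))   # True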

End points can also send "out of band" hints to the other end. The correctness or otherwise of hints doesn't affect the trust in the main communication. One sort of hint will be about how to contact 3rd party keys mentioned in the communication. This might save a lookup in a directory, or it might actually be the only way for the recipient to get that information. Another sort of hint will be proposed proofs for the recipient. This is desirable because constructing proofs is inherently undecidable, the receiver of the request might be unwilling to invest the resources, and it might be fairer for the requester to do the work. This sort of hint might look something like this in English translation: "Assuming your belief store holds 'trust bank public key about assertions of the form ...' and '...' then follow these steps ...".

Communication is between sub (or subsub) entities. Before events with real world significance (such as purchases) can take place, assertions about delegation may need to be exchanged, with a chain leading up to a key that is provably the key of the legal entity. However exchanges of real world significance can be anonymous on one or both sides, as in the real world when we go into a shop and pay cash.

"On Behalf Of" execution

We are familiar with the situation where we visit a web site like google or facebook or a poker server or an airline reservation site, and we perform actions which are carried out on our behalf on a computer that is not under our control. We might have an explicit or implicit legal contract, which might constrain how honestly or correctly the actions are carried out. But in general we have to assume that the requests we make will be handled in a way that suits the owner, not us, as we saw in the case of the cheating owner of a poker service, and in the case (some time ago) of a search for "linux" on MSN-India's search service, which returned linuxsucks.com as the first hit.

In other OBO cases we have a stronger expectation that the owner of the environment will honestly carry out the user's requests: when the owner provides a web hosting service, or a unix login service, or a container for isolated execution, or a virtual machine that the user seems to completely control.

Still, in all these cases it hardly seems wise for the user of the service to transfer, to that environment, credentials which have power over significant amounts of money or other valuable property. Rather than trying to work out which credentials can be transferred and when, the key2key project takes an alternative approach: credentials are never transferred, but access to external resources is still possible from the OBO execution in exactly the circumstances where this is secure. More on this later.

Device for Human Signatures

We want to make it possible for real world legal entities to interact via the network. What is needed is a way to link people to the network in a way that makes legal sense. The proposed solution will work for an individual representing themselves, or for an employee with some delegated ability to act for the employer. We don't consider the possibility of combining these in a single physical device.

The solution is a Device for Human Signatures (DHS). The DHS requirements mean that it must be a separate device, not part of a more complex device. The proposed device has the following characteristics:
  • It has biometric authentication which is unchangeably linked to the owner.
  • It has a private key that is generated when first activated. Only the public key ever leaves the device.
  • It has a black and white screen and a mechanism for scrolling the image left-right and up-down.
  • It has a way that the owner can agree to sign what is displayed on the screen. This is such that it can't be done accidentally, nor can it be done without simultaneous biometric authentication.
  • There is another mechanism to clear the current image without signing it.
  • The device is connected to the world by wireless mechanisms and/or cable. If a cable is plugged in then it only uses that, which is desirable for signing things that have privacy restrictions. Either way it displays any offered image and, if signed, it sends the signature back on the reverse route.
The user signs the extended black and white image. She is not able to sign it till she has used the scroll control to view all of it.

The image will always be created, by a defined and public process, from information in a computer-friendly format (such as XML). For example one of the known processes will be "English". The information in computer format, and the well-known translation process, will be sent with the signature of the text when it is used for internal computer purposes. For legal purposes only the actual visible text applies.

Any computer software can "understand" the signed text by using the conversion process on the computer-friendly variant and checking that the resultant image is the one that the user signed. E.g. the user might sign "pay $1000 from my account 061234567 to Example Company (ABN 1234) account 0698765". What they actually sign is an array of black and white dots which has the appearance of this sentence. However the receiving computer (presumably the bank) doesn't have to understand the visual dots, because such signed documents always come with an accompanying computer-friendly structure which converts to the image in a well-defined mechanical way. The signed document comes with an accompanying solution to the problem of determining its meaning.

It is important to sign a picture rather than "text", because it removes questions about how the text was rendered, and as we see it works just as well.
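A sketch of the verify-by-re-rendering step. Plain text stands in here for the real dot-matrix image, and the payload fields are the example from above:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def render_english(payload):
        # The defined, public process: structured data -> what the human sees.
        return ("pay ${amount} from my account {src} to {payee} account {dst}"
                .format(**payload)).encode()

    payload = {"amount": 1000, "src": "061234567",
               "payee": "Example Company (ABN 1234)", "dst": "0698765"}

    dhs = Ed25519PrivateKey.generate()
    signature = dhs.sign(render_english(payload))   # the DHS signs the rendered image

    # The bank receives (payload, "English", signature), re-renders with the
    # same public process, and verifies the signature over that rendering.
    dhs.public_key().verify(signature, render_english(payload))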

The signing device is only intended to be used for important things, or to create a temporary delegation to some more practical computer system which will sign as needed to act on the network within that delegation.

Delegating to sub-entities

For organizations delegating to employees or commercial network servers this is particularly important, and might be quite complicated, specifying what assertions and requests the delegate can make on behalf of the organization and what requests it will honour. This may not be practical for a person delegating to a computer system using the DHS: all the rules would have to be translated into English and read.

The form of delegation which will be initially implemented in key2key is a system of well known named delegation types. In particular the user will probably give his desktop system the "Standard Anonymous Desktop" delegation, which will enable the user to work anonymously on the network as we ordinarily do most of the time. When something arises for which the desktop system needs extra delegation, that will appear as a specific delegation request on the user's DHS.

Architecture of key2key end-point computers

The DHS doesn't remove the need for end-point systems, particularly desktop systems, to be secure. The standard techniques of managed code and sandboxing are crucial to allow applications to run without the need for them to be trusted with the crown jewels: the ability to use the private key to sign assertions and requests.

The traditional file system model of files that can be updated in place is inappropriate for the needs of key2key. Instead files are read only and identified by their hash, so that they are to a large extent self-verifying. The traditional unix updateable file is actually a form of simple database, and is handled in that way with appropriate security mechanisms shared with other network accessible databases.

This will also cover the secure execution of code that is only partly trusted, and of code that is executed on behalf of an external entity.

Desktop system: booting and running

To do anything useful, a user needs to boot a desktop system. That system needs to be physically secure, and it should be booted from reliable read-only media to place it in a predictable state. That system then needs to generate a private and public key pair to allow it to operate on the network using key2key mechanisms.

When that is all done, the next problem is to use the DHS to associate the desktop with the user and appropriate delegation. The desktop will generate an appropriate message, sometimes incorporating user input to adjust the delegation (though normally additional delegation is added later). That message will appear on the desktop's screen, and be transferred to the DHS by wire or wireless mechanism. The DHS will offer that to the user to sign. It will be in the user's own language and will say something like: "I have securely booted on trusted hardware, and the key signature of that system is 123456789ABCEDEF. It is delegated to act for me on all services not requiring specific delegation.". This signed result will be returned to the desktop system and sent as an assertion wherever needed.

OBO execution model

Suppose that user X is running a shell on a remote computer owned and managed by Y, and a program tries to access a resource on a system owned by Z that X is allowed to access. The traditional approach is that X does something which reveals the credentials for accessing Z in a way that Y could easily take advantage of. X might type a password into the interactive session on Y, or might have transferred some cryptographic credentials, such as a Kerberos TGT or a private key, to Y. This is wrong.

The key2key approach is that the request from Y's system to Z's system must use Y's credentials. Y will normally tell Z that this is on behalf of X, but this will only be used by Z to reduce its willingness to agree to the request. If Z won't execute the request using Y's credentials then Y can seek an alternative way to make that request, and the natural and default alternative is to go back up the chain leading to the OBO execution. So in this simple case, Y will ask X to send the request to Z with X's credentials. And, of course, X is well placed to know if this is a request that naturally springs from the OBO execution on Y. If the execution of the request involves a bulk file transfer then that will go between Y and Z directly, and not be forced to go via X.
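In outline (all names hypothetical), the request routing looks like this:

    class AccessDenied(Exception):
        pass

    def obo_request(z_service, request, y_credentials, ask_x_to_send):
        # Y only ever uses its own credentials; "on behalf of X" can only
        # narrow what Z is willing to do, never widen it.
        try:
            return z_service.handle(request, credentials=y_credentials,
                                    on_behalf_of="X")
        except AccessDenied:
            # Z refused Y's credentials: refer the request back up the chain.
            # X knows whether it plausibly arises from the OBO session on Y.
            return ask_x_to_send(request)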

Illustrative e-commerce application

Payment by Reservation (PbR) is the key2key native e-commerce application. It associates accounts with keys. It handles need-to-know revelation of information about the end points: i.e. typically only when there is a conflict.

update on Peak Oil

I summarized my attitudes to Peak Oil in an (anonymous) contribution to the Azimuth discussion a couple of years ago, reproduced below. It seems right [except the dubious comment on shale oil was way off]. World economic growth continues to be constrained by the fact that we can only slowly change infrastructure. Fossil fuel use continues to grow as we go to gas and back to coal for many applications.
  1. The world peak is different to the peaks we’ve seen in individual countries and fields, because the price can now rise. This should mean that the tail is more stretched out as otherwise uneconomic fields come into play (like tar sands, very heavy oil, coal liquefaction, abandoned fields, and maybe even oil shales). Also it means that there is a lot of pressure to get off oil as much as possible. Even the Saudis don’t want to burn oil for electricity. Electric cars are coming for some uses. Nuclear powered commercial shipping may be economic. Etc. It will be many decades before we run out of oil for high value applications. However it seems that many of these changes are not starting soon enough and things will be bad for a while.
  2. On energy density: The fuel of choice for interstellar flight is anti-matter. Lots of energy goes into making the powerful lightweight batteries we use in portable stuff like mobile phones and laptops. The message is that, in a normal way, there is good value in using lots of stationary energy to produce much smaller amounts of dense energy for transportation or transportable applications. This fact has been suppressed by the availability of oil which was cheap and already dense. Now that we are looking into this problem we may remember Sheik Yamani’s (past Saudi Oil minister) quote “The stone age didn’t end because they ran out of stones”.
  3. Peak Oil is related to claims of Peak Fossil fuel. However it seems that exploration and development of gas and coal have been suppressed by the availability of the more convenient liquid form. The claims wrt coal are based on traditional extraction methods, but deeply buried coal can be accessed by underground coal gasification.
  4. A (possibly temporary) oil peak is happening now, with oil production unable to expand in response to price increases. It would be nice if we could get the economists who favour all possible effort to expand the economy (like Paul Krugman of the NY Times) to respond to the question: “That would result in greater oil consumption. What if the world can’t pump that much more oil at the moment?”. It would also be nice if we could admit that growth is going to be limited for a while and have a rational discussion about how society should handle that fairly. E.g. an answer might be to get people working but not spending, with future financial security, by forcing them to take some of their income as “Energy Crisis Bonds” which will retain their value as a fraction of GDP, but not be spendable until enough of the massive infrastructure changes have been implemented.

Saturday, October 26, 2013

cheap energy created the anthropocene

At http://math.ucr.edu/home/baez/balsillie/ John Baez has slides from his recent talks on the characterization of climate change and what we will do about it. They are clearly thought out and presented, as always. Climate change is just one aspect of the anthropocene: the new era created by human activity. One of the todo actions is to leave fossil fuels in the ground.

The elephant in the room of this story is that our best chance to leave fossil fuels in the ground is to find cheaper energy and the only realistic chance of that lies in developing nuclear power. The trouble is that the anthropocene has arisen from cheap energy. It lets us destroy habitats, destroy fish stocks, and much more. Cheaper energy will make this worse, even if it fixes the CO2 problem. The answer will lie in extending the 19th/20th century idea of a national park, to create an international park which is a substantial subset of the biosphere. The other side of the coin is the human conquest of space. There are many lifeless worlds out there, just waiting for us to make them more interesting.

Friday, October 25, 2013

scientific errors in medicine

The obesity-health saga is an interesting example of scientific error. There is a clear correlation between being overweight and having various health problems, particularly Type 2 diabetes. So nobody looked more closely at that. Everyone is advised to lose weight.

But then they did look more closely and, lo and behold, we find: for most people, carrying extra weight is actually protective. At any given level of fitness it is better to have more weight.

So why is carrying weight associated with disease? The answer is that most people who are fit are relatively slim because it is hard to keep the weight on if you get fit. So being overweight is correlated with lack of fitness, and that is the problem. If you can be fit and keep the weight, with a high proportion of muscle, then that is ideal.
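The confounding story is easy to simulate. The coefficients below are invented, but the qualitative point survives any choice where fitness drives both leanness and health:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    fitness = rng.normal(size=n)
    weight = -0.8 * fitness + rng.normal(size=n)           # fit people run lighter
    health = fitness + 0.2 * weight + rng.normal(size=n)   # weight protective, given fitness

    print(np.corrcoef(weight, health)[0, 1])    # negative: weight "looks" harmful
    # Regress health on fitness AND weight: the weight coefficient is positive.
    X = np.column_stack([fitness, weight, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, health, rcond=None)
    print(coef[:2])                             # ~[1.0, 0.2]: given fitness, weight helps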

Medical science seems particularly prone to jumping to conclusions based on correlation alone, but it is an easy mistake in many disciplines. Medical science is pretty good at other sorts of errors too, including pure guesswork, like "eating fat makes you fat" (most people lose weight on a high fat low carb diet), or that eating cholesterol will increase your cholesterol levels, leading to heart disease.

Saturday, September 15, 2012

recent developments in Climate Change

We see that Russia and Canada, the big beneficiaries of global warming, are inclined to drag their feet. Note that this is a case where voters don't want to hear the politicians say "We plan to be bad". Instead they want to hear the politicians denying global warming. This is lying for the people, not lying to the people.

A recent scientific study showed how, hundreds of millions of years ago, there was a land-locked ocean over the North Pole. That led to a massive decline in atmospheric CO2. That is nothing like the current situation, but it does show how feedbacks of warming can lead to reducing CO2 levels. Such negative feedbacks are likely to overshoot. For example if a warm ice-free Arctic does lead to something that reduces CO2 levels, then it is likely to continue to do so for much longer than we would wish.

Another reason why I think global warming could lead to cooling is the history of recent interglacials. They have been warmer than this one, but have not led to runaway warming when the Arctic melts. Instead each one spikes and then crashes.


Here's my graph of the global warming position:

We know there is a nice stable ice age waiting for us on the left. Many reckon that if we keep pushing up we will go over a bump down into a very warm climate to the right. I reckon that if we suddenly stop pushing that ball it might roll down and roll over the edge to the left. However this will only happen if we find an energy source cheaper than coal.

I also reckon that it is simplistic to imagine this 2d picture. Suppose we imagine this 2d graph embedded in a bigger 3d graph. We might roll over the top to the right, but then get caught in a channel that leads back around to that waiting ice age.

In summary I want to see more science before we do any geo-engineering: including attempts to get CO2 levels all the way back to the "natural" 280ppm.

Sunday, June 24, 2012

Mathbabe needs a job

We need a 4th arm of government, which has substantial investigative powers and financial resources to vigorously, impartially and openly investigate the facts which are relevant to correct decision making in the other arms of government. I wish I could create that organization and put you in charge of it.

Only slightly more realistically: As the newspaper business dies, we are left without a mechanism for funding private investigative activities. I have this idea to create a market for investigations. People will propose or support investigations with offers of approximate financial commitment. Journalists, scientists, data analysts, others, will try to put together teams to put more concrete proposals that the informal proposers and others can fund. Willingness to fund would be very much based on the reputations of the investigators. So I envisage that young people would get together to do free or cheap investigations to establish their reputation, as open source programmers do. To set up such an investigation market, it would be valuable to have some high profile people getting it started: including a scientist, a journalist and a data analyst.

Sunday, May 13, 2012

The Wombat hasn't landed

The Wombat Programming Language

Programming languages have been annoying me for over 35 years. Still it's not so easy to get them right. I've got a fair way with the design of the Wombat Programming Language, but I can't get the interface-like part of it (Behaviour) working well. So rather than wait for perfection I thought I'd put it out there and see if anyone is interested in helping me with it.

I know my design would be better if I understood more of: Scalaz; Typeclassopedia; HoTT; and lots of other stuff. But maybe the people who do would like to comment (at least to the extent of recommending what to learn and where).

The "Wombat Summary and Rationale" document is at https://docs.google.com/document/d/1MXH4y75gViHDTldrAhXVMVUOXZJhwh3EwjXgn6k5Ncs/edit, with comments enabled. Or you can comment here. Or in my Google+ post. Or post an issue at http://wombatlang.googlecode.com.

Thursday, March 22, 2012

STV Magic

Australia's democratic system is superior for two important reasons: Single Transferable Vote; and compulsory voting. The latter is more important: voting is a duty not a right; and if voting is not compulsory then somehow the people in power manage to make it easier for some to vote and harder for others. However this little idea is about STV.

[For those that don't know: in STV people number the candidates. In each round the candidate with the lowest total of votes is eliminated, and everyone who voted for the eliminated candidate has their vote transferred to their next choice. Repeat. Finally there are 2 candidates, and the one with more votes wins; it is a nice feature that the winner has, at that point, more than 50% of the votes. There are subtleties, but one thing is sure: it's better than first past the post. I was shocked, disappointed and annoyed (not to mention disgusted) when the British people rejected STV in a recent referendum.]
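For concreteness, here is a minimal single-winner count of the kind just described (ties broken arbitrarily; real electoral rules add subtleties):

    from collections import Counter

    def stv_winner(ballots):
        # ballots: lists of candidates in preference order.
        candidates = {c for b in ballots for c in b}
        while True:
            tallies = Counter({c: 0 for c in candidates})
            for b in ballots:   # each ballot counts for its top surviving choice
                for choice in b:
                    if choice in candidates:
                        tallies[choice] += 1
                        break
            leader, votes = tallies.most_common(1)[0]
            if 2 * votes > sum(tallies.values()) or len(candidates) == 1:
                return leader, votes
            candidates.discard(min(tallies, key=tallies.get))   # eliminate, transfer

    ballots = [["A", "B", "C"], ["B", "A", "C"], ["C", "B", "A"],
               ["A", "C", "B"], ["B", "C", "A"]]
    print(stv_winner(ballots))   # ('B', 3): C is eliminated, C's ballot flows to B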

Well here's an idea. First we run our STV election to get the first winner (we just go till someone has >50% of the votes). Then we eliminate all the candidates of the same sex as that winner, and we recount the election until all the votes have been distributed to the last two candidates (except that we never eliminate our initial winner in any round [if that could happen?]). So we end up with two candidates and all the votes allocated to one or the other. Then we send them both off to parliament: except that when they vote in parliament, they each cast the total number of votes they received: it is no longer just one person one vote in the House of Parliament.

This has a couple of nice effects. Naturally we get an equal number of the sexes in parliament. Another thing is that it makes gerrymanders useless.

Assume there are two big parties. Each will have two candidates, one of each sex. At the end of counting you get one candidate from each party elected after the minor parties and independents are eliminated. But the total number of votes each party gets will represent their total votes across the whole country, irrespective of where the boundaries are drawn. Indeed one of the advantages is that it reduces the need for the electoral commissioner to draw artificial boundaries to equalize the electorates.

Monday, March 12, 2012

Mysterious arguments about AGW

There are two interacting climate change hypotheses. The well known one (AGW) is that human induced increases in atmospheric CO2 are increasingly making it warmer. The other is that when the sun is weak it allows more cosmic rays to get to lower levels of the atmosphere, increasing cloud cover and making it cooler. There is good reason to think we are moving into this weak sun period.

The two sides of the AGW debate have a clear interest in the 2nd question. For people who want something done about CO2, it is important to let people know that if we enter a period where natural climate variability runs against AGW then we must not relax, because soon the two will be pushing in the same direction and by then it will be too late to reduce CO2. On the other side of the AGW debate, those who don't want anything done about CO2 should keep quiet about natural climate change, so that they can claim that the temperature falling (or not rising) disproves AGW.

But what we see is the reverse. AGW proponents like to deny that there is any such thing as natural climate change. Meanwhile those on the other side seem equally keen to argue that there is natural climate change.

Go figure.

Saturday, September 17, 2011

The paradox of the economic impact of rising production costs

In my previous post I claimed that the impact of Peak Oil arose from the change of production cost of oil, rather than the change in price. I still think that is correct. However the detail of how it was expressed was wrong.
In the case where the price rises with no production change, there is just a transfer of money and consumption from buyers to sellers, with no net economic change. But then I said that when there is a change of production cost, the extra cost of production disappears out of the economy. That is clearly wrong. Money flows to the producers, but instead of flowing on to consumer purchases it flows on to production expenses, like oil rigs and oil workers. But in so far as it flows to more oil workers it is no different from when it flowed to the owners of the oil reservoir. Similarly when it flows to the workers and owners making oil rigs.
Either way the money flows through the oil production process. So what difference does it make to the economy whether it flows through to owner consumption, or it flows through to production costs?
Obviously I think it does make a big difference, though not in the simplistic terms of the previous post. My intuition is that it flows through with more resistance when it flows through to production costs instead of owner consumption.
So my challenge to economic theorists is to find a way of talking about the economics of resource, and particularly energy, production and use, that correctly explains the impact of rising production costs on the economy. If it is only comprehensible to mathematicians that will be better than nothing, but bonus marks will be awarded if it is comprehensible to politicians and voters (and me).


Friday, August 19, 2011

How Peak Oil destroys the world economy

Having expected Peak Oil to seriously impact the world economy, I was not surprised when high oil prices were followed closely by economic problems in 2008. Surely now the world would wake up. But no, the economic problems were attributed to malfunctions in the financial system. And we Peak Oil believers struggled to put a coherent case.

It is tempting to focus on the high price, but consider what happened when the price went rapidly from $130/barrel to $140/barrel in 2008. Obviously this made consumers, and importing nations, poorer. But it made producers richer by an exactly equal amount, and what can they do with that windfall but spend it, or recycle it by lending it to others to spend? So the net effect on the total world economy is zero, and that is what economists perceive.

Even though the price is now around $100/barrel, as it was in 2008, still most oil comes from old fields that used to profitably produce at $20/barrel, and still could. However all of these big cheap oilfields are past their peak. They produce less every year. Oil production continues to struggle along on a plateau of constant production. But this is made up of increasingly less of the cheap oil, and increasingly more of the expensive oil from newer fields.

Now imagine what happens when we lose a barrel of oil that cost $10 to produce, and replace it with a barrel that costs $60 to produce. That is $50 worth of productive effort that was available for consumption or building new infrastructure. Now that $50 of productive effort is used and lost.

The world consumes about 30 billion barrels per year. The depletion rate on existing oil wells is about 5%. As a rough guess, perhaps 1.5 billion barrels/year (5% of 30 billion) is being replaced by much more expensive barrels each year. In that case, multiplying by $50, it amounts to $75 billion/year lost to the world economy. Perhaps Kjell Aleklett and his team can come up with the right number. Whatever it is it is certainly big enough to cause major disruption to our growth-oriented economy.
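The back-of-envelope arithmetic, spelled out so each input can be challenged separately:

    consumption = 30e9   # barrels/year, world oil consumption
    depletion = 0.05     # fraction replaced each year by expensive new supply
    cost_jump = 50.0     # $/barrel extra production cost of the replacement

    lost = consumption * depletion * cost_jump
    print(f"${lost / 1e9:.0f} billion/year of productive effort diverted")  # $75 billion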

So far the economy has managed to grow by: switching to energy alternatives; using petrol more efficiently; and turning to less energy intensive ways of working and playing. This is going to get harder as we come off the plateau and start the actual descent.

Wednesday, April 20, 2011

Demographics

Chatham House has an article on Demographics which set me thinking. How important is the ageing population for its effect on the environment (e.g. global warming)? Old people consume but don't produce. However governments see full employment as a key goal. When an old person dies that reduces consumption. But the matching fall in production doesn't happen. Either automatic processes or government action restores production (making everyone else richer).

Of course this will be irrelevant in many possible future circumstances where production is limited by energy not labour shortages. In that case population at any age is irrelevant to pollution.

Monday, April 18, 2011

global warming impacts viticulture: irrelevant


Viticulture! I can't believe the way people who want the world to stop burning fossil fuel think it is some sort of minor matter, so that it is worth mentioning all sorts of trivia like the impact on tourism here or increased disease there. If we stopped burning fossil and nuclear fuel tomorrow (as the lunatic fringe wants) then the carrying capacity of the world would be much less than a billion people. At least they'd all be living close to nature: too close for comfort.

IF we find something cheaper than coal for energy (our only chance to stop burning coal) then just the cost of changing infrastructure will be huge and mostly hit the poor. How much are people prepared to pay to save the world? A major Australian mining union has said "not one job". America has no interest in growing food instead of transport fuel despite the visible impact of rising food prices on world stability. Just the minor rise in energy costs, from initial (ill-advised) renewable subsidies and requirements, has annoyed the Australian electorate enough that the latest polls suggest the government will be wiped out in the next election, if they make it that far.

The world needs cheap energy. It can only come from Nuclear. Nuclear design is a nightmare since elements keep changing and having different chemical and physical characteristics. And clearly we need to prove safety, since "just trust us" isn't going to work. The way to prove safety is to make lots of identical small reactors so that we can test them in extreme conditions [on a remote island]. And this needs to be done in a very open way. "OK, here's our simulation of what will happen when we do this. Now let's actually do it, and everyone can watch live on the Internet".

sorry about the rant.

[response to Azimuth project discussion entry]

Monday, March 21, 2011

Make maths useful: response to Baez blog post

For a decade and more my wife complained about word processing people, and I kept telling her: you'll never get documents created as you want in a timely fashion unless you do it yourself. Programming is a bit like that too: and spreadsheet programming shows that people want to program if the environment is accessible enough. It is nice when sophisticated mathematics makes a difference. It is nice when real mathematicians can cooperate with people in other fields and use semi-sophisticated mathematics to help them. Yet I feel that the real need is for everybody to understand unsophisticated mathematics enough so that when they have a problem amenable to mathematical treatment they at least recognise that and look for help. The Internet seems a wonderful tool for educating people about mathematics, but I feel there won't be much progress until the educational authorities stop regarding mathematics as an optional skill. At any rate one of the issues seems to be that mathematicians love to generalize, but the implications of the general theorems don't percolate down. I remember a memoir by a famous mathematician (perhaps Arnold) complaining about papers published on how to solve particular types of PDE, when they were just special cases of a "well known" general theorem. Unless mathematicians are also involved in the real world, or at least the less unreal world, they won't know how their understanding can make the difference it should.
If plants keep their leaf pores (stomata) smaller under higher CO2, that means those particular plant types can grow in drier conditions. At the dry margin (if rainfall stays the same) that means more tree-covered area, which is a negative feedback on CO2. In other places I presume it means you eventually get different trees (basically because trees adapted to relatively dry conditions lose out, where those adaptations aren't necessary, to trees without them). So it's tricky. And isn't this typical of so much of the relevant stuff? You see a summary of a result and you wonder: "did they allow for this or that?". It would be interesting to develop an oracle (like Watson?) which could absorb a lot of information and answer questions like that.
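Just to make the shape of that feedback concrete, here's a toy model. Every number and response curve in it is invented purely for illustration (the real sizes would need exactly the kind of serious model I keep asking for); it only shows the structure: more CO2, more tree cover at the dry margin, more uptake, which damps the CO2 rise.

```python
# Toy sketch of the negative feedback described above. All constants
# are made up; only the loop structure is the point.

def simulate(years=100, emissions=4.0):
    """March a crude annual CO2 budget forward in time.

    emissions: ppm of CO2 added per year (hypothetical constant).
    """
    co2 = 400.0          # starting concentration, ppm (illustrative)
    baseline = 280.0     # pre-industrial reference, ppm
    history = []
    for _ in range(years):
        # Assumed response: tree-covered fraction grows with CO2,
        # because smaller stomata let trees survive drier margins.
        extra_cover = 0.1 * (co2 - baseline) / baseline   # fractional
        # Assumed uptake: each unit of extra cover removes some CO2.
        uptake = 2.0 * extra_cover                        # ppm/year
        co2 += emissions - uptake
        history.append(co2)
    return history

trajectory = simulate()
print(f"CO2 after 100 years with the feedback: {trajectory[-1]:.0f} ppm")
print(f"Without the feedback it would be {400 + 100 * 4.0:.0f} ppm")
```

With these made-up constants the feedback only shaves a couple of dozen ppm off a century of growth, which is itself the point: whether the effect matters is a quantitative question, not one you can settle from a one-line summary of a result.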

Saturday, February 26, 2011

Democracy and the war on drugs

The Arab world wants democracy; it seems like an unstoppable force. This is perhaps unrealistic, because the end of cheap oil is making everything expensive, and democratic regimes are going to feel that too. But at least the people have the safety valve of the ballot. This would work better if terms were shorter, like Australia's 3 years or America's 2 (for the House of Reps).
But many countries are going backwards because of the corrupting influence of drug money: most notably Afghanistan and Mexico. It is essential to take the profit out of selling drugs, and there is a natural way to do that. Drug pushers start by providing drugs cheaply, then make their profit from those subsequently addicted. Making it illegal for addicts to get their drugs is highly counterproductive. Here's the alternative:
  • Continue to ban the supply and sharing of drugs, with severe penalties, particularly for the supplier but also for anyone receiving or possessing;
  • Allow addicts to register;
  • Registered addicts can receive supply, for their own use only, from a government channel;
  • Supply is guaranteed, not requiring immediate payment, so addicts are not forced into crime;
  • For more dangerous drugs, users might be required to consume under supervision.
This will remove the financial incentive for drug crime. Rich addicts will still get their drugs illegally, but the big market of poor addicts will be largely eliminated. The benefits will be immense:
  • End the war on drugs;
  • Remove the malign influence of drug barons on politics in many places;
  • Getting into contact with addicts in this way will enable cures in many more cases.

Friday, January 14, 2011

thinking about interconnected information

Natural language is designed to be a good way to represent internal mental states. And internal mental states are where we exploit the brain's amazing capability to do parallel search for interconnections. So natural language has to be at the core of communication of clear thought. However when you get a really large amount of natural language, like a big textbook, I wonder how easy it is to get that into a good internal brain structure.
Anyway this set me wondering whether one might try to copy the brain's internal structures a bit. The idea is to have nodes that are connected in multiple ways and amenable to computer processing. The text is unambiguous (as far as possible) because the ontology and the parsing are specified. Nodes can link to other nodes in various ways, including:
  • (parameterized) Bayesian network specifying the probability of a node given another (when meaningful); 
  • software module interaction for nodes with associated software; 
  • just links; 
  • ... 
The hope would be that you could put in a statement (like the economics one given partially above) and it would search around, find other relevant stuff, find data which might bear on the matter, code that might let you do relevant calculations on the data, and other useful things. This would be linked to information relevant to the individual. Individuals can specify how much they understand nodes and how much they agree with them. If you want to understand something new, it would lead you through the other things you need to understand first. And it could do lots of other useful things to help you understand the subject...
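Here is a minimal sketch of the node-graph idea. All the names and the exact link kinds are my own invention for illustration; the real thing would need a proper ontology, actual probabilistic machinery behind the "bayesian" links, and so on. It just shows nodes with typed links, per-person understanding scores, and the "lead you through what you need to understand first" behaviour.

```python
# Minimal sketch (all names invented) of nodes with typed links and a
# study path that visits prerequisites first, skipping what a person
# already understands.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    # kind -> list of target node names; kinds might include
    # "bayesian" (probabilistic dependence), "software" (interacting
    # attached modules), "prerequisite", and plain "link".
    edges: dict = field(default_factory=dict)

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}
        self.understanding = {}   # (person, node name) -> 0.0..1.0

    def add(self, name, **edges):
        self.nodes[name] = Node(name, {k: list(v) for k, v in edges.items()})

    def rate(self, person, name, score):
        self.understanding[(person, name)] = score

    def study_path(self, person, goal, threshold=0.5, seen=None):
        """Prerequisites first, depth-first, skipping understood nodes."""
        if seen is None:
            seen = set()
        if goal in seen or self.understanding.get((person, goal), 0) >= threshold:
            return []
        seen.add(goal)
        path = []
        for pre in self.nodes[goal].edges.get("prerequisite", []):
            path += self.study_path(person, pre, threshold, seen)
        return path + [goal]

g = KnowledgeGraph()
g.add("bayes theorem", prerequisite=["conditional probability"])
g.add("conditional probability", prerequisite=["probability basics"])
g.add("probability basics")
g.rate("me", "probability basics", 0.9)
print(g.study_path("me", "bayes theorem"))
# -> ['conditional probability', 'bayes theorem']
```

The typed-edge dictionary is the core of it: a real version would want the "bayesian" links to carry conditional probability tables and the "software" links to carry module interfaces, with the same traversal machinery running over all of them.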

Monday, December 27, 2010

Australian Cricket mismanaged

Australia is never going to be as successful at sport as we used to be. The rest of the world is more interested and has more opportunity, while our interests have switched to more sedentary activities. Still, the current state of Australian cricket (hitting an amazing low on the first day of the MCG Test) seems to be significantly caused by mismanagement.

Ricky Ponting showed why he is such a bad and unsuccessful captain at the start of the England innings. He held a huddle to gee everyone up. Richie Benaud was immediately critical: it was essential that the bowlers stay calm and put the ball in the right spot. Instead they tried too hard and immediately sprayed the ball around. We can well imagine that this has been a feature of Ponting's captaincy. It is also bad for the batsmen. We need them to bat a long time, and you can't do that if you are trying too hard. We need batsmen who can stay focused while staying calm.

Some of the coaching staff do know what they're doing, as we saw when they took Mitchell Johnson out for one Test and sorted him out for Perth. Still, we need to restore calm in the coaching department. Let's start by making Mark Taylor head coach.

The tendency of players to play on and on has destroyed the natural rhythm of the team, making it too hard to bring in young players. It also means players playing with injuries. That might work in a one-day game, but you can't bowl long spells or bat for a long time with a crook back. Shane Warne showed what a break from the game can achieve. I'd like to see Ponting take a big break. He might bat on for many years after that.

I fancy Cameron White for Australian captain: why else would Warwick Armstrong be reincarnated? However there are a few things against it, including the fact that he is probably not good enough to make the team. I think Shane Watson could be good, though he is another one who is not as calm as he seems.

Sunday, December 19, 2010

Mathematics is "Thinking clearly about problems"

Robert Krulwich's NPR blog has a comment on a wonderful Vi Hart video: http://www.npr.org/blogs/krulwich/2010/12/16/132050207/this-is-for-the-i-hate-math-crowd-not-after-this-you-won-t. However both it and Vi Hart are misguided about what is needed to improve maths education. We don't need to provide more stimulation for people for whom maths is (or might be) a recreational/cultural activity. What we need to do is make teachers and students appreciate the importance of mathematics for problem solving in every field. This is my comment on their blog:
The subject matter of Mathematics is "Thinking clearly about problems" (excluding most problems of understanding and relating to human behaviour and culture). Teachers can't teach math well without having this focus. It isn't (mostly, and for most people) a cultural activity like music. Math tends to invent a terse language to express itself, but teaching the language without clearly relating it to problem solving is what makes math seem weird and pointless to many students.
If we could base mathematics education on this definition then we would see many immediate benefits:
  • Teachers and students would know why they were learning mathematics;
  • A problem based approach would help everyone see the difference between the important and the merely conventional aspects of the language and methods of mathematics;
  • It would be clear why mathematics should be compulsory, and why efficacy should be a key requirement for higher education courses (outside the Humanities);
  • It would integrate mathematics with computer education to the benefit of both.
To make the definition comprehensible it is important to tell teachers and students how mathematics supports understanding data of all sorts (using probability and statistics); how the real world (and hence engineering) is only clearly understood using mathematics; how computer programming is becoming a mathematical science instead of a black art.

Update:

In John Baez's blog I appended this to a comment I made:
My New Year’s resolution is to have another go to sell the idea that “The subject matter of Mathematics is how to think clearly about problems (mostly excluding human interaction issues like culture)”. Teachers and students are hopelessly confused by an education system that treats mathematics as a collection of facts (about Platonic entities) which is sometimes useful in the real world. My definition will give Mathematics its rightful place in the core of a modern education. I’m not going to make any progress until I can find a real Mathematician to endorse the idea.
And I got an endorsement from John Baez himself. Initially his comment was (as WordPress emailed it to me): "I hereby endorse your idea. Please make progress." But now the reply reads:
I hereby endorse your idea. 
When I go back to UC Riverside in the fall of 2012 and start teaching math again, I’m going to teach it in a new way, informed by everything we’ve been discussing on this blog. I think the kids will enjoy it. I never taught math as a collection of ‘facts’, and that’s probably why the students liked my classes, but now I’m more keen on real-world examples that illustrate the big problems facing our civilization, rather than examples of the sort that pure mathematicians (like my former self) most enjoy. 
Sometime before that, I plan to write a paper with the mild-mannered title “How Mathematicians Can Save the Planet”. I’ll put drafts here, and I’d appreciate your comments.
I'll continue this subject area in a new post soon.