Tuesday, August 20, 2019

Maths for better Batting

If we look at the bowler's delivery, then at any time the ball is at a specific point and travelling in a specific direction in 3D space. Let's imagine an arrow in space starting at the position of the ball and going in the direction of the ball at that point in time. (This is called a tangent vector of our moving point).

Now imagine the bat moving through space. We'll start by considering the line in the middle of the bat. At any moment in time our bat gives us a line in space, and the motion of the bat defines its direction. There's a bit of subtlety here, but for our purposes we can pick out a flat plane that the bat is moving in at a particular moment (a tangent plane). When we add the width of the bat, then we get a thickened plane that the bat is moving in.

Now consider the moment when the ball meets the bat. If the ball's arrow is moving across that thickened plane from one side to the other, then any error by the batsman will result in a miss or an edge. If, at the other extreme, the arrow for the ball is wholly within the plane, then an error by the batsman will just mean that the ball hits higher or lower on the bat.
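To make that condition concrete, here is a minimal numpy sketch; the vectors and numbers are made up for illustration. The component of the ball's velocity along the normal of the bat's plane measures how fast the ball is crossing the thickened plane, and so how much a timing error costs:

import numpy as np

# Made-up illustrative numbers, in metres and metres per second.
ball_velocity = np.array([0.0, -15.0, 2.0])  # ball's tangent vector at contact
plane_normal = np.array([0.0, 0.0, 1.0])     # unit normal of the bat's plane
bat_half_thickness = 0.04                    # half-width of the thickened plane

# Speed at which the ball crosses the bat's plane. If it is small, a
# timing error just moves the contact point higher or lower on the bat;
# if it is large, the same error produces a miss or an edge.
crossing_speed = abs(ball_velocity @ plane_normal)
timing_error = 0.005                         # seconds, also made up
print(crossing_speed * timing_error, "m of drift vs", bat_half_thickness, "m")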

Consider, for example, the sweep shot, where the batter uses a horizontal bat to hit the ball close to where it pitches so he doesn't have to worry about the spin. It looks great when it works, but when it fails it fails catastrophically. The alternative is to play with an angled bat pointing to the point where the ball pitches, with the bat angled to match the angle at which the ball bounces up. Now the ball is staying in the bat's thickened plane, and though it looks awkward it has a much higher margin for error.

This is harder against leg spin, with the ball moving away. Then you have to angle the bat with the handle further away from you than the blade. But if you get the bat handle in front and the blade behind and back cut, then suddenly all is good. Indeed when the batter gets in a muddle, and is forced to back cut the ball to stop it hitting the stumps, they often find that surprisingly easy. It would be exciting to see a batsman practise this and then do it deliberately and repeatedly.

When the ball is spinning (or swinging) in, then this theory recommends hitting into the spin with a straight bat. I think that is best for defensive shots. Alternatively, if attacking, then an angled bat hitting to the leg side is your best chance to keep the ball's tangent vector within the bat's thickened plane. This is the slog that even weaker players often succeed with. Not just luck after all.

Sunday, August 18, 2019

Safer red ball cricket

We see pitches for red ball cricket (3, 4 or 5 days) being prepared quite dangerous, to increase the chance of a result. I have an alternative solution.

Wickets fall at a very random rate. Runs are scored at a more even rate. So, instead of setting a target of most runs in 20 wickets, the plan is to set a target of fewest wickets lost while scoring a set number of runs. You can have as many innings as needed to get those runs.

For example suppose the target for a test match is 500 runs. Then the two teams alternate innings (an innings always counts as 10 wickets) till one team has 500 or more runs. If it is the team batting 2nd then they win because the other team has already lost more wickets. If it is the team batting first that first exceeds 500 runs, then the team batting 2nd knows how many wickets they can afford to lose before they get to 500 to win. Ties will be more common, so maybe have a tie-breaker system, but I don't have a problem with ties.
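Here is a tiny sketch of that win condition; the numbers are made up and the 500-run target is the example above:

TARGET = 500  # example target in runs

def winner(wickets_a, wickets_b):
    """Wickets each side lost in reaching TARGET runs, where a
    completed innings always counts as 10 wickets."""
    if wickets_a < wickets_b:
        return "Team A"
    if wickets_b < wickets_a:
        return "Team B"
    return "tie"

# e.g. Team A passed 500 three wickets down in its third innings
# (10 + 10 + 3 = 23); Team B got there 6 down in its second (10 + 6 = 16).
print(winner(23, 16))  # Team B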

One nice thing about this scheme is that you can play the game to a finish, however long that takes with rain, without the dangers of matches going for a long time, as used to happen before WWII. Let's have more ties and no draws.

The main objective is that pitches be prepared that don't have the ball rising sharply from just short of a length.

This can be combined with another idea I like: Let the captains make a bid (in runs) for the right to decide who bats first. The side losing the auction starts their innings with the runs they bid.

Wednesday, May 29, 2019

Nuclear for Coal to Oil in Australia

Australia lacks oil reserves, and this is a security issue. A solution is the conversion of coal to oil. The carbon atoms in that oil do not lead to extra CO2 emissions, since oil from elsewhere would otherwise be used.

However, the process of converting the coal to oil requires a lot of process heat, which is traditionally provided by burning half of the coal. Doing that would inevitably breach Australia's commitments to reduce greenhouse gas emissions.

The alternative is to use nuclear power. The conversion process is cheaper and more efficient if done at very high temperatures, over 900 degrees C. It turns out that there are modern passively safe reactor designs which provide that level of industrial process heat.

Doing this will be politically very favourable for the current government:

  • Every voter can understand the need for secure access to oil for transport fuels.
  • Many on the Left now understand the need for nuclear power to reduce CO2 emissions, so the introduction of nuclear power will wedge the opposition.
  • Indonesia is making tentative steps towards nuclear power, and many Australians will think that is a good reason for us to do the same.

There are various options to site nuclear power far from most voters. Perhaps the Bunda Cliffs on the southern edge of the Nullarbor, which are close to water but high above a low-risk coastline.

Monday, May 27, 2019

Boom, Bust and DNA

Imagine it is boom time for a group of humans. There's lots of food, and time for recreational and romantic activities. What behaviour will favour our genes?

There's no reason to worry about competition from strangers. And strangers are attractive. We plan to have 10 children and 100 grandchildren. If we mix in some slightly different genes then we might produce some offspring that combine good genes of ours with their good genes and make individuals with an advantage. So we're tolerant of strangers, and looking to have offspring with more than one partner.

Now imagine things are bad. Life is a struggle. The population is falling. What behaviour favours our genes now?

People that are like us share more of our genes. People who are different probably don't. And they're competing with us for the limited resources. Maybe we should cooperate with similar people and make sure we get our share relative to those who are different. And deaths from fighting are less of a loss to our DNA: deaths are common anyway, and they at least leave more resources for other copies of our DNA. We now hope to have 2 children and 4 grandchildren, or maybe fewer. And it makes sense to marry our 2nd cousin, or even our 1st cousin.

Wednesday, May 8, 2019

Obvious Things: Nuclear Power for Climate

There are a lot of problems with relying on renewables to cut CO2 production.

But we come back to the fact that the voters hate nuclear power. 

We are currently addressing this by exporting high energy activities (manufacturing) to non-democratic places, i.e. China. But despots have their own fears, and also don't like to annoy their citizens too much. So this may not continue to work. And the rise of robots means that manufacturing may become more uniformly distributed.

The solution has to be:
  • We tell the electorate that we have to do nuclear to address the climate emergency.
  • We are going to do it in a safely remote location and use it to make liquid fuels for transport (hydrogen would be good, ammonia is ok) and/or pump the electricity a long way.
  • We are going to be very open about the planning. No secret stupidity like Fukushima.
And then let's throw a lot of money and expertise at it. Not in the Chinese way where all the eggs are in one basket. Multiple large competing projects. Plus let's start building reactors that are known to work just in case none of the advanced plans work out.

We've been asleep at the wheel. Time to get moving.

Tuesday, May 7, 2019

Obvious Things: Type 2 diabetes

We know that type 2 diabetes is associated with processed food.

We know that type 2 diabetes is associated with the top section of the gut. How do we know this? A treatment for obesity is to do surgery that bypasses that part of the gut. The intention was just to reduce the total size of the gut. But a miracle occurs: if the patient has type 2 diabetes (raised blood sugar), then they are instantly cured! No need to wait for them to lose weight.

So what is the role of the top section of the gut in a normal human primitive diet?

Normally primitive food will come in with the cell walls intact. So it is obvious that the first job, the job done by the top section of the gut, will be to deal with those cell walls. And it will be no surprise if it expects to see cell walls, and uses them to self regulate, and fails to function correctly without them.

And, of course, the characteristic property of processed food is that the cell walls have already been destroyed by industrial processes.

This ain't rocket science. (Whatever happened to "ain't").

Wednesday, April 24, 2019

AutoParliament

I'd like to support the XR (Extinction Rebellion) movement, but in a way that pushes back against the anti-democratic forces that are keen to take it over (as in this video: https://youtu.be/haGLhlLDCUw -- plans for dismantling our system of government at the end). So I thought I'd create a plan for participatory democracy. Before I start, here is a response to some obvious objections that will come up:
  • "It is too complex". I don't think a simpler plan will work.
  • "It is too confrontational". I think it is crucial to confront the people with bad or misguided motives and sideline them. We can't save the world without confrontation.
  • "It is too technical". I have to admit that the system is designed to attract the technically knowledgeable. We see XR people making totally unrealistic plans. I think the non-technical can join if working in groups with technical support. There are no secrets.
  • "It can't decide anything". That remains to be seen. But what it can determine from the beginning is a consensus view of the relevant facts, and particularly the costs and benefits of various practical actions.
  • "Without anonymity it puts participants at risk to reactionary forces". Absolutely. Participating in this will not be safer than blocking roads and getting arrested. It simply can't be anonymous and function correctly.

Infrastructure

Everything that happens in the AutoParliament is recorded in a replicated log (like a blockchain, but without the massive computing), which can't be changed without the extreme action of going back to an earlier version of the log and replaying some but not all transactions. Access to the log and to updates is available to all, so even if the official servers hosting the log conspire to change it in this way, copies of the log as it was can exist.
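As a rough illustration, here is a minimal Python sketch of such a hash-chained log; the entry format is made up, and real replication across servers is out of scope:

import hashlib, json

class ReplicatedLog:
    """A minimal sketch of a hash-chained append-only log."""
    def __init__(self):
        self.entries = []          # list of (payload, chain_hash)
        self.head = "genesis"

    def append(self, payload: dict) -> str:
        data = json.dumps(payload, sort_keys=True)
        self.head = hashlib.sha256((self.head + data).encode()).hexdigest()
        self.entries.append((data, self.head))
        return self.head

    def verify(self) -> bool:
        """Anyone holding a copy can recompute the chain; any rewrite
        of history changes every later hash."""
        h = "genesis"
        for data, stored in self.entries:
            h = hashlib.sha256((h + data).encode()).hexdigest()
            if h != stored:
                return False
        return True

log = ReplicatedLog()
log.append({"type": "join", "name": "Alice"})
log.append({"type": "vote", "bill": 42, "value": "for"})
print(log.verify())  # True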
Participants have to be associated with public keys to sign their actions. This has to be integrated with 2-factor authentication (as in https://en.wikipedia.org/wiki/Universal_2nd_Factor).
The Internet is an essential requirement. But indeed it is essential for the XR movement as well. This is a serious single point of failure. There is an urgent need for a decentralized backup Internet. This shouldn't be too hard, as the Internet is explicitly designed for decentralized operation. I won't address this in this document, as it demands its own independent planning.

Joining

Anyone can join. To join one attends a local meeting of other members, and does the following:
  • Make a video recording in which you state your name and approximate address (postcode), say that you are a citizen of [specify country] and the world, and promise to use your membership to address the world's environmental problems.
  • Get (buy) a 2-factor authentication device.
  • Create a record containing the video, the unique id (public key) of the 2-factor device, and the person's own public key, and maybe more.
  • Sign this record with the corresponding private key (thus proving ownership of the public key).
The signed record then gets added to the replicated log, and the member has joined. Ideally automatic recognition software will quickly discover people joining more than once. I doubt if anyone will attempt it.
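Here is a minimal sketch of creating and signing such a record, assuming the PyNaCl library; the field names and placeholders are hypothetical:

# Assuming the PyNaCl library; field names and placeholders are hypothetical.
import json
from nacl.signing import SigningKey, VerifyKey

signing_key = SigningKey.generate()          # member's private key
public_key_hex = signing_key.verify_key.encode().hex()

record = {
    "video": "sha256:...",                   # hash of the joining video
    "u2f_device_id": "...",                  # public id of the 2-factor device
    "member_public_key": public_key_hex,
}
signed = signing_key.sign(json.dumps(record, sort_keys=True).encode())

# Anyone can check the signature against the public key embedded in the
# record, which is what proves ownership of that key.
VerifyKey(bytes.fromhex(public_key_hex)).verify(signed)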

Basic Operation

The basic operation of the AutoParliament is a combination of wikipedia and git. There are bills and amendments. Members can vote for or against, or abstain on, any bill, and can change their vote at any time. If you vote for a bill and for an amendment to that bill, it means you support either variant. If you vote for an amendment, but not for the original, it means you only support the amended version.
At any time there are bills with varying levels of support. Also, as we will see, there are various subgroups of members (caucuses), some self-selected and some automatically generated. Some bills will only be of interest to some caucuses, in which case percentages of those in that caucus will be of interest. External organizations can use a specific caucus (such as their members or supporters) for decision making.
Bills are accompanied by discussion areas where evidence of various sorts can be included.
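To pin down that voting rule, here is a toy tally; the data layout is made up:

# A toy tally of the rule above; the data layout is made up.
votes = {
    "alice": {"bill": True,  "amendment": True},   # supports either variant
    "bob":   {"bill": True,  "amendment": False},  # supports the original
    "carol": {"bill": False, "amendment": True},   # supports only the amended
}

support_original = sum(v["bill"] for v in votes.values())
support_amended = sum(v["amendment"] for v in votes.values())
print(support_original, support_amended)  # 2 2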

Endorsements

Each member gets 100 endorsement points and 100 anti-endorsement points to allocate to other members. The member can move them around at any time. This turns into a continuous voting system that works like this:
  1. Each person starts with 1 vote, and gets 0.01 of a vote for every endorsement point.
  2. Now eliminate all the people with the lowest vote.
  3. Then recompute the votes, with each endorsement point weighted by the endorser's current vote.
  4. And go to 2 and repeat.
This gives everyone a score, which is their highest vote before being eliminated. (One reading of this procedure is sketched in code at the end of this section.)
This is also done within caucuses to give people a ranking within the caucus.
Members specify how important their caucuses are. The bills that they will normally be invited to consider and vote on will be determined by the current leaders in the parliament and in their selected caucuses. The arguments they will most readily have access to will be those endorsed by leaders. Of course all bills and arguments are available by diligent searching.
Anti-endorsements are weighted the same way, so that an anti-endorsement from someone with a high endorsement score counts for more. People with high anti-endorsement scores will attract warning signs on their arguments.
Bills can be introduced arguing for specific people or groups to get endorsement or anti-endorsement and why.
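Here is a minimal sketch of one reading of the elimination procedure above; the endorsement data is made up, and details such as tie handling are my own guesses:

def endorsement_scores(members, points):
    """points[(i, j)] = endorsement points that member i gives member j.
    Returns each member's score: their highest vote before elimination.
    Ties are handled (my guess) by eliminating everyone on the minimum."""
    active = set(members)
    weight = {}                    # endorsers' votes from the last round
    score = {m: 0.0 for m in members}
    while active:
        # Everyone starts with 1 vote plus 0.01 per endorsement point,
        # each point weighted by the endorser's current vote.
        vote = {m: 1.0 for m in active}
        for (i, j), p in points.items():
            if i in active and j in active:
                vote[j] += 0.01 * p * weight.get(i, 1.0)
        for m in active:
            score[m] = max(score[m], vote[m])
        low = min(vote.values())            # eliminate the lowest...
        active -= {m for m in active if vote[m] == low}
        weight = vote                       # ...and reweight for next round
    return score

pts = {("a", "b"): 100, ("b", "c"): 60, ("c", "b"): 40}
print(endorsement_scores(["a", "b", "c"], pts))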

Caucuses

Members are automatically added to geographical caucuses. Groups (or individuals) can create caucuses and determine the memberships.
The system can automatically determine groupings that can then get turned into caucuses. For example we can expect that members will be divided between pro- and anti- nuclear. The system will detect such clusters by similarities in voting and in endorsements. Members of such clusters can turn them into caucuses, so that the system will automatically discover leadership in those groups.
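As a toy illustration of detecting clusters by similarity of voting, here is a sketch; the vote matrix and threshold are made up, and a real system would use proper clustering over votes and endorsements:

import numpy as np

# Members are rows, bills are columns (+1 for, -1 against, 0 abstain).
votes = np.array([
    [+1, +1, -1],   # three members voting similarly
    [+1, +1, -1],
    [+1,  0, -1],
    [-1, -1, +1],   # two members voting the opposite way
    [-1, -1, +1],
])

norms = np.linalg.norm(votes, axis=1, keepdims=True)
sim = (votes / norms) @ (votes / norms).T   # cosine similarity matrix

# Greedily group members whose similarity exceeds a threshold.
threshold, clusters, unassigned = 0.8, [], list(range(len(votes)))
while unassigned:
    seed = unassigned.pop(0)
    group = [seed] + [m for m in unassigned if sim[seed, m] > threshold]
    unassigned = [m for m in unassigned if m not in group]
    clusters.append(group)
print(clusters)  # [[0, 1, 2], [3, 4]]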

Wednesday, February 20, 2019

Women need to lead to get community action

If you want to sell community action (vaccination or climate change), don't put up a mansplainer talking science. Get an older woman with technical leadership and high status to put the message of community solidarity and express anger at the opposition.
E.g. "My grandchildren have been vaccinated. But they might still be vulnerable because vaccines sometimes fail to fully protect. This isn't a problem if everyone is vaccinated because disease can't spread without many potential victims. Anti-vaxxers make all our children vulnerable. They provide enough vulnerable children to allow an epidemic, and some vaccinated children are effected. This is not about individual health, it is about community health. Everyone needs to get behind it."
For Climate Change she might say "Managing the climate is the community's responsibility. There has to be rational evaluation of what needs to be done, and that has to come from the scientific experts looking at all the data, not just a cool day in July. The community that counts is the whole world, because we all share the same air. We need to be part of that community. People who don't get behind this shared effort are harming us all, and particularly our children."

Monday, February 18, 2019

Chemicals attacking the microbiome

The microbiome is complicated

Recent research into the health problems being experienced by bees showed that insecticides alone were not causing the problem. What made the bees sick was a combination of small amounts of insecticide with fungicide.

Why would fungicide affect bees? I think the answer is very clear from other research into the microbiome: the cocktail of living things in and on humans and other multicellular creatures such as bees.

We used to think bacteria were bad, so antibiotics must be good. Then we learnt that we are home to lots of beneficial bacteria that are damaged by antibiotics. But now we know that the microbiome is a cocktail of bacteria, fungi and viruses. And they are all involved, some more beneficially than others.

Whether we are looking at insect health (which is urgent), or the health of humans, we need to investigate the overuse of a whole range of chemicals, whatever their targets, and even if they are not intended for biochemical effects. And we need to look at the effects of combinations. Given the large (and indeed exaggerated) reaction the general population has to radiation, I believe they can be induced to demand action on this.

[update 2/5/2019: More evidence that fungicide causes health problems: https://www.diabetes.co.uk/news/2019/apr/Additive-found-in-baked-goods-linked-with-possible-type-2-diabetes-risk-91661279.html]

Monday, February 11, 2019

Racism in Tim Flannery's latest book

Tim Flannery is a great bloke, fighting hard to save the world. The last person anyone, including himself, would suspect of racism. But we all want to think well of ourselves, and this very easily extends unconsciously to wanting to think well of groups that we belong to, compared to others.

For Europeans this now embraces Neanderthals, and the hybrids that are descended from them. So in Tim Flannery's book "Europe, A Natural History" we see on page 6, at the end of the Introduction, the assertion that the rise of culture resulted from the "hybrid vigour" of the combination of humans from Africa with European Neanderthals. I will easily disprove that hybrid vigour could have anything to do with it, which shows how tempting such ideas must be, to get past the guard of an expert like Tim Flannery.

I have previously written about the genetic advantage of culture (https://grampsgrumps.blogspot.com/2015/02/what-is-culture-for.html). If the genes for culture arose in hybrid populations in Europe, the question arises of how culture then appeared in Africa. An easy answer comes along: parallel evolution. This, we remember, was the explanation the Chinese (and others) had for the rise of Homo sapiens in multiple places. But it was easily disproved by genetic analysis. And even before that, people who understand evolution knew that there is no such thing as parallel evolution: if some substantial and complicated evolutionary change occurs in multiple places at roughly the same time, it is because it is all descended from a single point. We can be certain that this is the case with culture. It arose in pure African Homo sapiens. It might have taken a slightly different path after hybridization. It is a tempting fancy for us hybrids to regard that slightly different path as superior. If it were a harmless fancy we could let it pass. But it isn't, and we have to reject it.

Friday, September 7, 2018

My Crackpot Theory

There is matter and antimatter, but luckily for us there is more matter, otherwise the matter and antimatter would annihilate each other leaving nothing but photons. Why is there more matter? I have an answer! Since I'm not an expert in the field, and haven't done due diligence on it, it is, by definition, a crackpot theory. But I like it.

The great Richard Feynman said that antimatter behaves, for computational purposes, like ordinary matter travelling backwards in time. Let's take that literally.

Our Universe is expanding from the big bang in accordance with Einstein's equations. Another solution of those equations would be a universe contracting to a big crunch. It doesn't take much imagination to picture that immediately before the big bang another universe was coming in to a big crunch. You can regard that as being an earlier part of our Universe, but I want to imagine it as separate. I'll call it the negative universe, since it is in negative time if the big bang is at zero.

Particles don't have well defined positions in our quantum universe, which lets them tunnel through apparently insurmountable barriers. So it is possible for some particles moving forward in time from the negative universe to tunnel through to our Universe. And similarly some of our particles moving backward in time (i.e. antimatter) can tunnel through to the negative universe.

So naturally we end up with an excess of matter, and the negative universe ends up with an excess of antimatter. I suggest that in the negative universe, the arrow of time defined by increasing entropy would point backwards. So folk living there would perceive that antimatter as moving forward in time, and perceive the universe as expanding.

It's such a beautiful symmetric picture, it just has to be.

It makes a prediction. In the very early universe particles don't survive long before being annihilated by antiparticles. In the current model where matter is assumed to be slightly different from antimatter, the preponderance of matter happens later. In my model it would appear earlier, so that there is net matter even when the energy level is very high and particles don't last long.

Sunday, April 29, 2018

Biquaternions and 4d Clifford algebra

You took a shortcut weeks ago in the program you're writing and now it's biting you, and you know you have to go back and rewrite stuff, but to put that off you start reading Conway and Smith's "On Quaternions and Octonions", and then you wake up in the middle of the night thinking about biquaternions. What is the Clifford algebra way of thinking about them?

In Clifford algebra, rotations are given by elements of the even sub-algebra (acting on the vectors to be rotated by the sandwich product). It forms a sub-algebra because the Clifford product adds grades (modulo contractions), so even times even stays even. Multiples of the same even element give the same rotation under the sandwich product, so the number of degrees of freedom of rotations is one less than the dimension of the even subalgebra.

The grades of the n-dimensional Clifford algebra have dimensions that follow a row of Pascal's triangle, giving a total dimension of 2ⁿ. The 0-grade is the scalars. The highest grade is one-dimensional, so its elements are called pseudo-scalars. The 1-grade is the vectors of the base vector space. Orthogonal to each vector is an (n-1)-grade element, a pseudo-vector, which behaves a lot like a vector.

So let's start with 2-d. 1-d is left as an exercise. We have a 2-d vector space. The Clifford algebra consists of: scalars (1-d); vectors (2-d); oriented area elements (1-d, the pseudo-scalar). 1+2+1=4. The square under Clifford multiplication of the unit area element is the scalar -1. That's suggestive! The even sub-algebra is the scalars plus the area elements. Yes, it is the complex numbers. Because the complex numbers and the vectors are both 2-d it is easy to get them confused.

In 3-d the Clifford algebra consists of: scalars (1-d); vectors (3-d); bivectors (pseudo-vectors, so also 3-d); oriented volume elements (1-d, the pseudo-scalar). 1+3+3+1=8. The even sub-algebra is the scalars and the bivectors. Yes, it is the quaternions. Once again there is potential confusion, this time because the vectors and the bivectors have the same dimension.

The product of n vectors is called a versor. Up to 3-d every element of the even sub-algebra is a versor. A versor times its reverse is a scalar. So it makes sense to take the sum of squares of the components of an even subalgebra element, then take the square root to get a norm. With a bit of other calculation we find that these form normed division algebras. This breaks down in 4-d, where there are bivectors (such as e1e2+e3e4) which are not versors.

In 4-d we have: scalars (1-d); vectors (4-d); bivectors (6-d); trivectors (pseudo-vectors, 4-d); oriented hypervolume (1-d, the pseudo-scalar). 1+4+6+4+1=16. The even subalgebra is the scalars, the bivectors and the pseudo-scalar. But we are told that rotations can also be represented by a pair of quaternions. Here's a way to see two quaternions in the even subalgebra:

We like Clifford algebras because so much can be done without picking distinguished directions to be basis vectors. But somehow we keep finding it convenient to specify a basis, as we will here. Let e1, e2, e3 and e4 be a basis of our 4-d vector space. Now consider just the 3-d subspace formed by e1, e2 and e3. Its even subalgebra is the quaternions, consisting of the scalars and the bivectors generated by e1e2, e2e3 and e3e1.

This leaves, from the bivector basis: e1e4, e4e2 and e3e4. Plus the pseudoscalar e1e2e3e4. Now if we define a new multiplication as the Clifford product times (or divided by) e1e2e3e4, then we get a new model of the quaternions: this time with the pseudoscalar as the scalar. So I've divided the even subalgebra into two copies of the quaternions.
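This can be checked mechanically. Below is a small pure-Python sanity check (my own throwaway code, not a library). Basis blades of Cl(4) with Euclidean signature are encoded as bitmasks, and we verify that the pseudoscalar squares to +1 and that multiplying the leftover bivectors by it lands, up to sign, back on the first quaternion's bivectors (ignoring the sign conventions needed for the exact quaternion identification):

def blade_mul(a, b):
    """Clifford product of two basis blades of Cl(4), Euclidean signature.
    Blades are bitmasks: bit k set means e(k+1) is a factor.
    Returns (sign, blade)."""
    s, t = 0, a >> 1
    while t:                            # count the transpositions needed
        s += bin(t & b).count("1")      # to merge the factors into order
        t >>= 1
    return (-1 if s & 1 else 1), a ^ b  # e_i e_i = +1, so common bits cancel

def name(blade):
    return "1" if blade == 0 else "e" + "".join(
        str(k + 1) for k in range(4) if blade >> k & 1)

I = 0b1111                              # the pseudoscalar e1e2e3e4
print(blade_mul(I, I))                  # (1, 0): I squares to +1
for b in (0b1001, 0b1010, 0b1100):      # leftover bivectors e1e4, e2e4, e3e4
    s, r = blade_mul(b, I)
    print(name(b), "* I =", ("-" if s < 0 else "") + name(r))
# Prints e14 * I = -e23, e24 * I = e13, e34 * I = -e12: multiplying by the
# pseudoscalar maps the leftover bivectors onto the first quaternion's
# bivectors, so the even subalgebra does split into two quaternion copies.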

Well I should check this out more carefully, but I'd better get back to my program that needs fixing.

Monday, June 12, 2017

Logic Programming in Functional Style

[This was also posted on the WombatLang blog, but it is self-contained and might have wider interest.]
[N.B. There is code. In https://github.com/rks987/appendTest I have hand-compiled the example below to an AST, and written a pretty-printer to check it is right. Next step is to write an interpreter for the AST. How hard can that be :-).]

Wombat was designed to be a (mostly) functional programming language with some logic programming capabilities. But it turned out that you can't be half-hearted about logic programming. However the functional roots shine through, giving Wombat a familiar look for most programmers. But behind the scenes, unification is everywhere.

Procedures are also ubiquitous in Wombat. They always have exactly one input and one output. Even things that aren't procedures can act like procedures. In normal functional programming the input is specified, and the output starts out as a hole that the result gets put into. In Wombat both the output and the input are passed to the procedure. Either can be a hole. One or both might be structures which include holes. Consider

(x,y) = f(1,2)

Here "=" causes unification. One might think that the function f will be called, it will return a pair, and unification will then happen. But this is not how Wombat works. Instead (x,y) is unified with f's output, (1,2) is unified with f's input, and execution of f then starts.

Before we look at a more interesting example, some relevant features of Wombat are:
  • An identifier is preceded by backquote when used for the first time. It starts life as a hole, and like all holes it can only be filled in once -- `x:Int; x=3 (explicit typing is optional);
  • An explicit procedure (closure) is just an expression in braces -- { x+1 } ;
  • A closure's input is $ and its output is `$. The input is commonly a tuple which is unpacked immediately, and $ is never mentioned again -- { $ = (`x,`y); x+y } ;
  • If `$ isn't explicitly unified, then it is unified with the whole expression: {$+1} means {`$=$+1}.
  • A list is given by elements in square brackets separated by spaces. The +> operator adds an element to the head of the list and is invertible.

Here is the classic list append program (using the caseP procedure, rather than the almost identical case syntactic sugar):

`append = {
   $ = (`x,`y); # 2 input lists
   caseP [
       { x=[]; y }
       { x = `hdx +> `tlx;
         hdx +> append(tlx,y) }
   ] ()
};

print( append([1 2],[3 4])); # [1 2 3 4]
[1 2 3 4] = append([1 2],print(`a)); # [3 4] -- print returns its argument
[1 2 3 4] = append(print(`b),[3 4]); # [1 2]

Consider the last line. Execution proceeds concurrently:
  • x is unified with print(`b) and y with [3 4];
    • print is called with its `$ set to the hole x, and its input set to the hole `b. Since it is going to have an effect it has to stall waiting for one or other to be filled. If there were any later effects they would also stall, even if ready to go, because of a lexical ordering requirement.
  • At the same time caseP is called with input set to unit (=()), and output set to the output of the whole procedure (i.e. [1 2 3 4]) since it is the last expression. Now caseP calls all procedures in its list expecting precisely one to succeed. In this case:
    • Closures execute in a sandbox where unifications with holes from outside are tentative and only make it out if the procedure doesn't fail. If the outside hole gets filled in while the closure is executing then the unification is made firm if it agrees with the tentative binding, or the closure fails if it doesn't.
    • So when we look at the first procedure in the caseP, it tentatively unifies x with [], then tries to unify y=[3 4] with `$=[1 2 3 4]. This fails, so that closure fails.
    • At the same time we start the more complex 2nd closure. The first line creates a relationship between the 3 holes: x, hdx and tlx. The 2nd line then unifies [1 2 3 4] with (hdx +> append(tlx,y)). This sets hdx=1 and unifies [2 3 4] with append(tlx,y). So we call append recursively with `$=[2 3 4] and $=(tlx,y).
    • The next time append is called we have `$=[3 4], and then the first closure succeeds (while the 2nd fails), so that when it returns it establishes its parent's y as [3 4], tlx=[] and hdx=2. This resolves the previous line, giving x=[2].
    • When this returns the output of print(`b) is unified with [1 2] which in turns sets b to [1 2] and allows the print to proceed.
    • If we weren't going to use b subsequently we could have just written print(_) because _ is always a new hole.

Saturday, May 6, 2017

Networks of People

When you want to build a network, such as the Internet or the national road network, you don't put in a lot of random links. Instead you have a mixture of local links forming local clusters, and long distance links between clusters. This can continue for more levels.

Now we come to the interesting observation that people divide up, corresponding roughly to the right/left divide in politics, between people who want friendship and support to extend only to their local group (or groups), and those who want to extend help and support more broadly, with some extending that to all humanity. This division is just what you need to efficiently build networks of people.

A key feature of humans is our ability to move from simple things to multi-level recursively defined things. We see that most obviously in human language compared to the simpler communications of our related species. And indeed it is notable the way that nations relate to each other in ways that seem similar to the way that humans interact.

Friday, July 29, 2016

The True Believers

Beliefs are either objective, derived ultimately from evidence, or subjective, derived from some internal revelation or from the belief that some other person has had a true internal revelation and communicated it honestly. While not wanting to denigrate people’s beliefs in subjective truths, the success of Science and the triumphs of our modern world rest entirely on objective truths. Yet the voting public has partly lost faith in the people claiming to be purveyors of objective truth. The water is muddied by people lying for fun and profit, and by Science shooting itself in the foot with a lot of overconfident deduction and speculation having little or no support.
There is an urgent need to restore the public’s faith in Science and objective truth, which are the foundations of our civilization. Can we perhaps get some intelligent supporters of objective truth to make some sacrifices that will bring the matter to the public’s attention in a good way? Take the following as an example of the sort of thing that might work. Better ideas are welcome.
The proposal is to create a Monastery (or hopefully a network of many Monasteries) dedicated to determining the objective truth, and to discovering and “excommunicating” those attempting to pervert the search for the truth. Those who had merely strayed, making bad deductions or trusting the wrong people, would be formally forgiven and “born again”.
The monks would put their wealth in a financial Trust, and wear some modest uniform. Trusted folk from outside can act as lay advisers. There are investigations, in which evidence is collected. Then there are trials in which the two sides are debated. When nobody is available to support one side, devil’s advocates are appointed. Monks clinging to views that most monks think have been completely disproved can be demoted back to applicant status and thus expelled from the monastery.
And, of course, all this is live streamed on the Internet. Viewers can participate to varying degrees based on their level. The highest levels are applicants (wanting to become monks) and lay advisers (who typically are unable to become monks for some reason). Next are trainees learning to evaluate objective truth and preparing for tests that will get them up to that higher level. Finally there are supporters. All these lower levels can make themselves available to help the monks. The general public have a ringside seat on this vigorous search for the truth and the truthful, and the key objective is to get them to understand and support it.
Finally a little riff on the desirability of the truth. The truth can’t lead us astray if properly understood. However properly understanding it is not always easy when our culture has implanted so many subjective truths so firmly in our brains.

Let’s take a simple case: bullying. You will pardon my expression of personal non-expert opinion as if it were truth; it is purely illustrative, so it doesn’t matter if wrong. Humans used to form groups of several hundred related individuals divided into families. The families have a status order, like a pecking order in chickens, and it is there for the same reason as all status hierarchies: because without it there would need to be a fight over each conflicting intention. The human situation is similar to (some species of) macaques, with family status passed from mother to daughter. Everyone needs to know their place, and one of the ways this is done is by bullying. If A bullies B and B’s relatives don’t rush in, then B knows that A and A’s family have higher status. In its natural setting bullying is only necessary when there is doubt.

Of course in our city lifestyle this all breaks down. Status is a mess, with endless struggles leading to a lot of bullying. Now I would say that we need to understand this to make good decisions about dealing with bullying (which doesn’t even do its job in our society). But a lot of people would say “You can’t say that bullying is natural. That condones it.” Actually it only condones it if you have the subjective idea that human nature is good by default and departures from the good represent some malfunction. The correct view is that human evil, such as bullying, needs to be dealt with in human ways, not by trying to find and fix a malfunction.
Still we need to accept that the general public is not going to easily accept things which contradict their firmly implanted subjective truths. Such matters need to be dealt with carefully, and avoided as much as possible.

[update: This interacts with the proposed Truth and Expertise Network (previous post) because people can just say "I trust the monastery" and get good feedback on the stuff they read.]

The Truth and Expertise Network

While specific facts and resulting deductions are at the core, in the main we are interested in identifying those who are, and those who are not, good sources of the truth. Most particularly we want to identify those who are lying for their own advantage, since those lies are much more likely to cause harm than merely mistaken beliefs. We are mainly concerned with facts that are relevant to public policy, but even there we come to issues where well-meaning folk would say “you shouldn’t say that, even if it is true”. We’ll leave such delicate considerations until the next blog post.
For those who don’t want the technical details, the general idea is this: Individuals can specify their level of belief about claims made, about the motivation for claims, and about the trustworthiness of other individuals. Software can then warn you about claims based on the claim itself, or the people making it. The software would only follow links from the people you trust (and that they trust, etc). This might need some social engineering to actually work, and that is described in the following blog post.
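The "people you trust (and that they trust, etc)" rule is just a transitive closure over signed trust links. Here is a minimal sketch, with a depth limit added on my own assumption that trust decays along a chain:

from collections import deque

def trusted_set(me, trusts, max_depth=3):
    """trusts maps a person's key to the keys they have declared trusted."""
    seen, frontier = {me}, deque([(me, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for other in trusts.get(person, []):
            if other not in seen:
                seen.add(other)
                frontier.append((other, depth + 1))
    return seen

trusts = {"me": ["alice"], "alice": ["bob"], "bob": ["mallory"]}
print(trusted_set("me", trusts, max_depth=2))  # {'me', 'alice', 'bob'}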
So the idea is that participants in the scheme will have one or more public-private keypairs. These will be used to sign assertions of various sorts, discussed below. They will be of no use unless (a) people link to those keys in various ways; and (b) the assertions are made public (at the very least to some of the linking people).
People can just make their main public key publicly available in places they are known to control. Or give them to specific people. They can also have keypairs that they don’t advertise as belonging to them, but endorse as reliable as if they belonged to some other unknown person. They can then make assertions that can’t be attributed to them, but can still be used by people who trust them and the people they trust.
I’ll list (some of) the assertions that can be made. Software running in the user’s machine, and the machines of those she trusts, and in central servers, will cooperate to provide the user with warnings of false claims, of claimants lacking the expertise they claim, and of claimants seeking to mislead. Perhaps the most important thing will be information about internal contradictions in the trust network. If your trust network supports incompatible claims then it is an indication of a problem, such as people in your trust network being overly confident about an uncertain matter, or infiltration of the trust network by incompetent or bad actors. Tracking these things down will help everybody who wants to get a good handle on the truth.
  • “My prior (belief pending future evidence) for this claim to be true is P%” where P is between 0 and 100. The claim should be a block of text, [+ optionally a wider block of text from the same source providing context], + a URL giving the location.
  • “My prior that this claim is honestly believed by the claimant is …”
  • “I believe the claimant is … [with probability] acting on behalf of … [with probability]”
  • “I trust this person to only express honestly held beliefs”, giving a public key.
  • “I believe this person is an expert on …”
  • “I trust this person to choose others who are trustworthy” (thus allowing an extended network of trust).
Systematizing all that (and more) is a tough job. It is similar to the jobs done by the IETF (Internet Engineering Task Force), and maybe we need an OTETF (Objective Truth Engineering Task Force).
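To make that concrete, here is a sketch of what the first kind of assertion might look like when signed, assuming the PyNaCl library; the field names and encoding are hypothetical, and systematizing the real formats is exactly the OTETF-style job described above:

# Assuming PyNaCl; the field names and encoding are hypothetical.
import json
from nacl.signing import SigningKey

my_key = SigningKey.generate()
assertion = {
    "kind": "prior-on-claim",
    "claim_text": "the quoted block of text",
    "context": None,                # optional wider block from the same source
    "url": "https://example.org/article",
    "prior_percent": 20,            # "my prior for this claim is 20%"
}
signed = my_key.sign(json.dumps(assertion, sort_keys=True).encode())
# Publishing the signed assertion plus the verify key lets anyone in the
# trust network check who said it and feed it to their warning software.
print(my_key.verify_key.encode().hex()[:16], len(signed))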

[update: Naturally browser plugins and other user software will make it as easy as possible for users to participate in this scheme.]

The Truth and Expertise Problem

Katherine Viner, writing in The Guardian, gives a comprehensive overview of the way social media and tailored news is disrupting the truth (https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth). It used to be that most of the population saw a common set of reasonably authoritative information from newspapers and TV. Now the real experts struggle to be heard over the cacophony, the truth is mixed up with falsehoods, and the way people get their news is highly likely to reinforce their biases.
But things are not all bad. Previously many lies got firmly established, and Viner gives the example of the Hillsborough tragedy. Now the population has learnt not to trust everything that is written down. This is a necessary step to not being misled. The problem is to get them to take the next step: to collect and evaluate the evidence like a scientist, then to think through the implications like a mathematician.
Well obviously that’s unrealistic. All of us, even the greatest experts in some field, are forced to identify experts that we trust when it comes to areas in which we lack expertise. The problem is that our faith in experts has been seriously eroded. In many ways this is a good thing. We now know that experts have put many people in prison through highly exaggerated claims about DNA and fingerprint identification. We know that many of the medical treatments that doctors have used, and advice they have given, are not supported by the evidence or results. We needed to take this step of treating experts with caution, but not go to the current extreme where expert advice is often completely ignored.
Experts need to be evaluated in some way. They won’t like that, but we need to achieve two things: making sure that the experts we trust are not abusing that trust, and finding the real experts to trust in the midst of so many competing claims. We also need a process that can be respected by the public so that they and the media are inclined to select real and honest experts for their understanding of important matters.
This is the first of 3 blog posts. The 2nd will discuss a technological solution for identifying what is true, and why, and for identifying who is an expert and who is trying to mislead. The 3rd will provide a rather wild idea for a social experiment that might get the message to the public that there is an objective truth which is worth pursuing.
[There is a relevant new book “A Survival Guide to the Misinformation Age” by David Helfand, and here’s a review: http://physicsworld.com/cws/article/indepth/2016/jul/28/between-the-lines].