Posted: Jun 11, 2011 11:43 pm
by Paul Almond
jamest wrote:
Paul Almond wrote:Incidentally, a test for MWI was proposed, but I don't advise doing it.

You cannot arouse my curiosity like that and say nothing. What's the test?!


The idea is known as "quantum suicide". You set up an experiment in which the following occurs automatically.

Some quantum event occurs with two possible results - heads or tails.
If tails occurs, you are immediately killed by an automatic mechanism, before you have a chance to realize it. For example, a gun may be set up to shoot you in the head.

You set the apparatus up so that it will do this a large number of times - let's say 100 times.

If you are alive after going through many repetitions of this, the idea is that, if MWI is false, it is very unlikely that you should be alive. On the other hand, if MWI is true there is always going to be a future in which you survive. The idea here, and it is obviously somewhat controversial, is that you never observe the futures in which you are dead, so they can be left out of your statistics: from your perspective, if MWI is true, you are supposed to have nothing to fear just before you go through all this, as you should fully expect to survive. At the end of it, your continued existence will effectively have shown you that MWI is true.
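To put a number on "very unlikely": here is a minimal sketch of the experiment under a single-world (collapse) reading, where each round is an independent 50/50 kill. The trial count and round count are illustrative choices of mine, not from the post.

```python
import random

def survives(rounds: int = 100) -> bool:
    """One simulated experimenter: alive only if every 50/50 round comes up heads."""
    return all(random.random() < 0.5 for _ in range(rounds))

# Simulate a large population of experimenters under the collapse reading.
trials = 1_000_000
survivors = sum(survives() for _ in range(trials))

# Exact survival chance per experimenter: 0.5**100, roughly 8e-31,
# so even a million simulated runs will essentially never produce a survivor.
print(survivors)
print(0.5 ** 100)
```

The point of the sketch is just the arithmetic: a survivor of 100 rounds is so improbable under a single-world reading that, on its face, being that survivor looks like strong evidence for MWI.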

One thing I should point out is that the idea that finding yourself alive is evidence that MWI is true is actually based on a Bayesian calculation. The idea is that, beforehand, you can calculate the probability of surviving if MWI is false (very low) and the probability of surviving if MWI is true (supposedly 1, if you buy into this). No matter how unlikely you think MWI is, provided you give it some non-zero chance of being true, if you survive a long enough sequence like this, the usual Bayesian calculation will push the probability of MWI being true as close to 1 as you want.
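That Bayesian calculation can be sketched directly. This assumes the controversial premise above, that P(survive | MWI) = 1 and P(survive | collapse) = 0.5 per round; the 1% prior is an arbitrary illustration.

```python
def posterior_mwi(prior: float, n: int) -> float:
    """Posterior probability of MWI after surviving n 50/50 rounds,
    assuming (controversially) that survival is guaranteed under MWI."""
    p_survive_mwi = 1.0            # assumed: you always observe yourself surviving
    p_survive_collapse = 0.5 ** n  # chance of surviving n rounds in a single world
    evidence = prior * p_survive_mwi + (1 - prior) * p_survive_collapse
    return prior * p_survive_mwi / evidence

# Even a skeptical 1% prior is driven essentially to certainty after 100 rounds.
print(posterior_mwi(0.01, 100))
```

This is exactly the move objection 4 below disputes: the formula is standard Bayes, but whether survival is a legitimate observation to condition on is the contested part.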

Of course, there are objections to this, and they will tend to be based on one or more of the following:

1. MWI is false - if MWI is false, none of this is relevant.

2. Even if MWI is true, it doesn't follow that you can just rule out the branches where you die - ruling them out relies on a particular view of human continuity. You might think that you have a 50% chance of dying each time even if MWI is true, and if you do, you will reject this.

3. Some people may take issue with how quickly your death results, thinking that the speed of it may matter.

4. Even if MWI is true, and even if you buy into the "you should expect to survive if MWI is true" idea, surviving doesn't actually tell you anything. I think that this is the case, and that the experiment tells you nothing - no matter what you assume about human continuity and whether the branches where you die are "futures" for you. Let me put it this way: if I were forced to go through this and survived 1,000,000 heads/tails events, at the end of it, whatever probability I assigned to MWI being true would not have changed at all. This may seem counter-intuitive. I think some people would reason, "This future is guaranteed to exist if MWI is true, but so unlikely to exist otherwise, that I must be in an MWI reality." I disagree. The idea that this whole business tells you anything is, in my view, an accidental misuse of the Bayesian method. I do actually have a proper argument to show this, but it is an exercise in "statistical ontology" and is a bit involved. It would be best for everyone just to take my word for it, though nobody will, as this is a site for skeptics.

In any event, I am not suggesting that anyone should do this.