Rumraket wrote:But Thommo, it absolutely doesn't matter what exactly the real probability distribution is, or the total sizes of the sets.

As long as it is 100% on the one set, and less than 100% on the other, the actual sizes of the sets are irrelevant. What matters is that the probability is 100% on the one, that means what we observe will ALWAYS be the case on that hypothesis.

Well, it rather does matter. I can only assume that you aren't familiar with the content of the link I pasted about "almost surely", which is quite understandable if you don't have a mathematical background - it's pretty counterintuitive and non-trivial if you haven't seen it before - but I'll do my best to explain if you can bear with a lengthy post. My apologies if you actually are already familiar with this idea.

Just because you've ruled out some options does not in fact mean that you have 100% in one case and less than 100% in the other. This is one of the (many) problems.

Consider the following (malformed) questions:-

What proportion of natural numbers are square numbers?

What is the chance that a natural number picked at random is a square number?

So, start listing natural numbers:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ... (... means and so on to "infinity")

and their squares

1², 2², 3², 4², 5², 6², 7², 8², 9², 10², ...

equals

1, 4, 9, 16, 25, 36, 49, 64, 81, 100, ...

Now, consider the proportion of squares in each partial sequence, up to the first n terms.

n = 1: 1/1 = 100% squares

n = 4: 2/4 = 50% squares

n = 9: 3/9 = 33% squares

n = 16: 4/16 = 25% squares

n = 25: 5/25 = 20% squares

...

(This % drops asymptotically to 0 as n tends to infinity; at n = k² it is given simply by 1/√n, since the kth square is exactly the k²th term, so there are exactly k squares among the first k² terms.)

So, even though in one sense there are "as many" square numbers as natural numbers (they have the same "cardinality", because there is a square number k² for each natural number k - a 1-1 correspondence), the proportion of squares tends to 0, and in the limit case of ALL natural numbers it is exactly 0. If we were to pick "at random" (and this is a technical error, just as in the arguments - there is no well-defined uniform distribution on infinite sets in this way) the chance we pick a square number is 0, almost surely. So whether or not we rule out picking a square, we can argue that the chance of picking a square is still 0. This is why the sizes of the sets matter - where the sets are infinite and of the same cardinality, it's possible to fudge any such example to produce any answer between 0 and 1. (And it's actually an error to assume they are sets at all in the case of "possible gods" or "possible sets of natural laws" - they aren't sets, they are proper classes.)
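The proportions above can be checked numerically. Here's a quick sketch in Python (my own illustration, not part of the original argument; the function name is invented):

```python
import math

def square_proportion(n: int) -> float:
    """Proportion of perfect squares among 1..n.

    The squares up to n are 1^2, 2^2, ..., isqrt(n)^2,
    so there are exactly isqrt(n) of them.
    """
    return math.isqrt(n) / n

# The proportion falls towards 0 as n grows (it equals 1/sqrt(n) when n = k^2):
for n in (1, 4, 9, 16, 25, 10_000, 1_000_000):
    print(n, square_proportion(n))
```

At n = 1,000,000 the proportion is already down to 0.001, heading for 0 in the limit.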

Please be advised, what I've said here commits the same errors as the argument - it is not mathematically acceptable to talk about the "probability of picking a natural number at random with a uniform distribution" in this way. I've just replicated the problem to show some consequences. It's actually quite similar to an error some theist made in this thread:

http://www.rationalskepticism.org/nonth ... 0#p1797222

He goes on for a loooooooong time about how if you have two options then the chance of each happening is 50/50 - assuming a uniform distribution when there's no basis to do so, and it leads to utter junk. Garbage in, garbage out.

Rumraket wrote:It doesn't matter if the other set is 10, 100 or infinitely infinite in total size, as long as the observed evidence is less than 100% of that size, it will have a probability below 100%. That is all we need to know.

And we don't know it.

Rumraket wrote:=100% vs <100%. Then =100% wins, regardless of how big you make those two sets, or how much less than 100% it is.

There's nothing wrong with this argument. The question comes down to "how probable is what we observe on hypothesis X vs hypothesis Y"?

If one probability is greater than the other, and it is because it is 100% on X while it's <100% on Y, then it doesn't matter whether it's actually 99.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999%, or 20%, on Y; what we can say then is that there is some probability it could be different on Y. This is not the case on X, so the evidence is more probable on X, because it is 100% expected on X.

We still don't know it. Infinite sets are counterintuitive. We can rule out infinitely many options without ever going below 100%.
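That claim - ruling out infinitely many options without ever going below 100% - can be illustrated with the same squares example: strike out every perfect square (infinitely many options in total), and the limiting proportion of what survives is still 1. A Python sketch of this, with an invented function name:

```python
import math

def nonsquare_proportion(n: int) -> float:
    """Proportion of 1..n remaining after ruling out every perfect square."""
    return (n - math.isqrt(n)) / n

# Infinitely many options (all the squares) are excluded overall,
# yet the proportion of survivors tends to 1, i.e. "100%":
for n in (10, 10_000, 10**8):
    print(n, nonsquare_proportion(n))
```

So in the limiting-frequency sense, "100% of options remain" even after an infinite exclusion.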

But this still misses the wider point that the probabilities you're comparing here don't tell us anything anyway.

Suppose there's a coin head face up on my desk (Call this event "E" = "Coin is head face up on desk"), consider two competing hypotheses:-

(A) It spontaneously appeared head face up, out of thin air, having been conjured into existence by heads up loving fairies who only ever create coins facing heads up.

(B) It was tossed and landed randomly head face up on the desk.

Notice how I've chosen these hypotheses - if A is true then there's a 100% chance the coin is head face up. If B is true then there's only a 50% chance the coin is head face up - tossed coins can land face down too!

Writing this a bit more neatly or mathematically, we can say that P(E|A) = 1 and P(E|B) = 0.5; that is, "the probability of E given A is 100%" and "the probability of E given B is 50%".

What does this tell us about whether we should prefer hypothesis A or hypothesis B? That is to say, "what does this tell us about P(A) or P(B)?"

Absolutely nothing. The probability of A is still 0: such things do not and cannot happen. The probability of B is unknown - the coin might have got there by some other mechanism, such as being placed by a human, or having been minted there by a coin press. Such things can and do happen, but we don't actually know what percentage of coins lying on desks were tossed rather than placed, although we could estimate it by conducting research and fitting a distribution.
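To see concretely why the likelihoods alone settle nothing, here is a small Bayes' rule sketch in Python (my own illustration; it assumes, purely for the arithmetic, that A and B are the only options - itself a dubious assumption, as noted above):

```python
def posterior_A(prior_A: float,
                like_E_given_A: float = 1.0,
                like_E_given_B: float = 0.5) -> float:
    """P(A|E) by Bayes' rule, pretending A and B exhaust the options."""
    prior_B = 1.0 - prior_A
    # P(E) = P(E|A)P(A) + P(E|B)P(B)
    evidence = like_E_given_A * prior_A + like_E_given_B * prior_B
    return like_E_given_A * prior_A / evidence

# Same likelihoods, completely different posteriors depending on the prior:
for prior in (0.5, 0.01, 0.0):
    print(prior, posterior_A(prior))
```

With prior P(A) = 0 the posterior is 0 no matter how favourable the likelihood ratio - exactly the fairy situation: P(E|A) = 1 cannot rescue a hypothesis whose prior is nil.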