Sorry for the delayed reply; I went into hospital shortly after my last post here and have only now been released (one gallbladder short of when I got there).
romansh wrote:Brownian motion for me is a transport phenomenon where an observable particle moves in a random way because of transfer of momentum from much smaller atoms or molecules to the observable particle.
Is this not Brownian motion? I am not suggesting that using a stochastic process to describe a deterministic process is unreasonable. Just ignoring quantum phenomena for the moment Brownian motion is chaotic ... in the sense we cannot have a Laplcian demon on hand to do our calculations.
Well, when you talk about single molecules smashing into each other, you ignore QM at your own peril... But a more apropos point here is that there is a mathematical entity called either Brownian motion or the Wiener process. That is a stochastic process. It is used (among other applications; this type of process is put to good use in other fields too) to describe Brownian motion in physics. That they are homonyms is somewhat unfortunate, but that's why I added the alternative name to my post. (I just checked Wikipedia, and it offers a disambiguation in the same way: "This article is about the physical phenomenon. For the stochastic process, see Wiener process.")
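To illustrate the mathematical object rather than the physics: the Wiener process arises as the scaling limit of a simple random walk. Here is a minimal sketch in Python (my own illustration, with made-up names, not a physics simulation):

```python
import math
import random

def wiener_approx(n_steps: int, t: float = 1.0, seed: int = 0) -> list[float]:
    """Approximate a Wiener process path on [0, t] by a scaled random walk.

    Each step is +-sqrt(dt), so W(0) = 0 and Var(W(t)) -> t as n_steps
    grows, matching the defining properties of the Wiener process.
    """
    rng = random.Random(seed)
    dt = t / n_steps
    step = math.sqrt(dt)
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.choice((-step, step))
        path.append(w)
    return path

path = wiener_approx(10_000)
print(path[0])  # 0.0 -- the process starts at the origin
```

Across many independent runs, the endpoints W(1) have sample variance close to 1, as the continuous process demands.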
romansh wrote:I think it has to be more than just statistical independence. Pseudorandom number generators will pass statistical tests (at least the straight forward ones). But I agree with you that ideally they should pass statistical muster. IMO
They won't for an arbitrarily large sample. For any given PRNG and any particular suite of statistical tests, there exists a number N such that a sample of N numbers from the PRNG will fail at least one of the tests. We can generalize this WLOG by considering only a bitstream from some PRNG with a binary seed of n bits. The seed has entropy H = n, and the generated bitstream has H = n as well (for obvious reasons). But against a particular battery of tests it will appear to have a larger entropy, here H_apparent = N. We can then define D = N - n and call it the stretch of the PRNG with regard to our test battery. There are some pretty good PRNGs with large values of D for common statistical tests. But there have also been epic failures, like RANDU, which had a large stretch for a wide range of univariate tests - but throw in autocorrelation and it's just miserable. The Mersenne Twister, which is pretty much the standard PRNG today, passes DIEHARD but not TestU01.
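To make the "fails for large enough N" point concrete, here is a toy sketch of my own (the generator and the tests are deliberately weak; nothing like RANDU or the Mersenne Twister):

```python
def lcg16(seed: int):
    """Toy full-period LCG on 16-bit state: x -> (5*x + 1) mod 2**16.

    c = 1 is odd and a - 1 = 4 is divisible by 4, so by the Hull-Dobell
    theorem the period is the full 2**16.
    """
    x = seed & 0xFFFF
    while True:
        x = (5 * x + 1) & 0xFFFF
        yield x

gen = lcg16(seed=42)
stream = [next(gen) for _ in range(2 * 2**16)]

# A naive frequency test on a short sample passes: the low bit of this LCG
# alternates, so exactly half of the first 1000 outputs are odd.
ones = sum(x & 1 for x in stream[:1000])
print(ones)  # 500

# But any sample longer than the period fails a lag test outright:
# the stream repeats itself exactly after 2**16 draws (here N <= 2**17).
print(stream[:100] == stream[2**16:2**16 + 100])  # True
```

Ironically, the perfectly balanced low bit is itself a tell: a genuinely random stream would almost never hit the expected count exactly.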
DrWho wrote:The QM probability equations with regard to the decay of an atom assign a probability to when the atom will decay, not if it will decay.
Let's get a bit more formal here. In the simplest case (without additional decay modes), the decay is described by an exponential distribution. We can look at a somewhat simpler model by checking only at discrete points in time: from some arbitrary starting time we check back after n*l, where n is an integer and l is the half-life. At n=1 the probability that the nucleus has not decayed is 0.5. At n=2 it's 0.25. Generally at n it is 2^-n. So for any finite n there is always a positive probability. In the limit of infinitely many n the probability is 0, but - as noted before - that does not imply impossibility. In fact the unsimplified model (an exponential distribution) gives a probability of 0 for the decay at any single point in time. By your logic QM would then predict that it never decays, since all the probabilities are 0. That's not how probability theory works.
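The discrete model can be checked numerically; a small sketch (names and units are mine):

```python
import math

HALF_LIFE = 1.0                    # in arbitrary time units
LAM = math.log(2) / HALF_LIFE      # decay constant of the exponential law

def survival(t: float) -> float:
    """P(nucleus has not decayed by time t) = exp(-lambda * t)."""
    return math.exp(-LAM * t)

# At t = n * half-life the survival probability is exactly 2**-n ...
for n in (1, 2, 10):
    assert abs(survival(n * HALF_LIFE) - 2.0 ** -n) < 1e-12

# ... and it stays positive for every finite n, however large:
print(survival(1000 * HALF_LIFE) > 0)   # True (about 1e-302)

# In the continuous model, the probability of decaying at any *single*
# instant t is the limit of P(t < T <= t + eps) as eps -> 0:
t, eps = 1.0, 1e-9
p_point = survival(t) - survival(t + eps)
print(p_point < 1e-8)                   # True: pointwise probability -> 0
```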
DrWho wrote:When they start speculating about the chance of whether it will decay or not, they have officially stepped outside of QM theory and are indulging in their own speculations and not distinguishing between the two.
Or they might have some passing knowledge of that particular branch of mathematics. In particular they might be able to distinguish between sizes of sets - the relevant bits here are countable sets and uncountable sets - and how this makes it important to distinguish between certain and almost certain events.
DrWho wrote:Edit: the same point can be made with respect to measuring the spin of an electron. It is uncertain (probable) that the spin will be Left or Right. It is certain that it will be one or the other. It is not a matter of chance that will have some definite spin as a consequence of the initial conditions. It is a matter of chance whether the spin will be Left or Right.
a) This is not the same point. Here you have a sample space {Left, Right}, which is countable (and in fact finite). This supports only a discrete probability distribution, and for discrete probability distributions a probability of 0 is in fact equivalent to impossibility. The decay of a radionucleus has the sample space [t_0, infinity[, which is a half-open interval on the reals and, crucially, uncountable. The probability distribution is absolutely continuous, and for absolutely continuous probability distributions a probability of 0 does not automatically mean impossible. There's some seriously problematic maths hidden in the "same point".
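The discrete/continuous distinction can be made concrete with a sketch of my own (not anything from DrWho's post):

```python
import math

# Discrete case: sample space {Left, Right}. Probabilities attach to the
# outcomes directly, so an outcome with probability 0 literally never
# occurs -- it contributes nothing to any event.
p_spin = {"Left": 0.5, "Right": 0.5}
assert math.isclose(sum(p_spin.values()), 1.0)

# Continuous case: decay time T on [t0, infinity[ with an exponential law.
# Probabilities attach to *intervals* via the CDF, not to single points.
lam = math.log(2)

def p_interval(a: float, b: float) -> float:
    """P(a < T <= b) for an exponential distribution with rate lam."""
    return math.exp(-lam * a) - math.exp(-lam * b)

# Every interval around t = 1 has positive probability ...
print(p_interval(0.9, 1.1) > 0)   # True
# ... but the single point t = 1 has probability 0, yet decay at exactly
# t = 1 is still possible:
print(p_interval(1.0, 1.0))       # 0.0
```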
b) Any well-defined random variable makes use of a triple containing a sample space, a sigma-algebra and a probability measure. The sample space contains all possible outcomes, the sigma-algebra contains the events - subsets of the sample space - and the probability measure assigns probabilities to the events. For any random variable it is clear that only values allowed by the sample space can come up. If your point is that there is determinism whenever there is a sample space, you are stretching the meaning of the term to something that encompasses every random variable. You can't play that fast and loose with maths without getting burned.
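For the spin example the whole triple can be written out explicitly; a minimal sketch of my own:

```python
from itertools import combinations

# Sample space: all possible outcomes of the spin measurement.
omega = frozenset({"Left", "Right"})

# Sigma-algebra: for a finite space, simply the power set of omega.
def powerset(s: frozenset) -> list[frozenset]:
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

events = powerset(omega)

# Probability measure: assigns a probability to every event.
P = {event: len(event) / len(omega) for event in events}  # uniform: fair "coin"

# Kolmogorov's axioms hold for this triple:
assert P[frozenset()] == 0.0 and P[omega] == 1.0        # normalization
assert all(0.0 <= p <= 1.0 for p in P.values())         # non-negativity
left, right = frozenset({"Left"}), frozenset({"Right"})
assert P[left] + P[right] == P[left | right]            # additivity (disjoint)
```

Note that having this triple is simply what it means to be a random variable; it says nothing about determinism.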