Posted: Mar 28, 2011 11:34 pm
by Shrunk
Rumraket wrote: Essentially, the error lies in claiming that Natural Selection can do no better than a random search. This is simply demonstrably wrong. Tsar tried dodging all the facts thrown at him by constantly referring to Dembski's use of the "No Free Lunch Theorem", in which Dembski argued that all simulations of evolution are "smuggling the solution in through a back door" to get around it. Even this is demonstrably wrong. This has even been pointed out to Dembski, who, upon realizing it defeats his argument, simply reasserted the claim in an even more convoluted and purely obfuscatory, semantic way. It doesn't matter to him that it's false, as long as it remains sufficiently convoluted and sciency sounding, then it's enough for the "flock" and we subsequently get people like Tsar pushing the bullshit here and elsewhere.


Jeez, you seem to have got that right. I don't read UD as a rule, and only came across this thread because it was linked on Pharyngula. But you really need hip boots, nose plugs, and a good-sized shovel if you step into that place. I mean, here's a post by one particularly verbose person, name of kairosfocus, who's absolutely besotted with his own brilliance (comment #320):

F/N: I pause to look at one of MG’s scenarios:

A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”


1 –> But, a gene DUPLICATION aint “simple”!

(Observe: “protein” implies a cellular context of functional processes that are tightly regulated and integrated, a la the infamous Biochemist’s chart of cellular metabolic reactions. Talk about a Wicken wiring diagram well past the 1,000 yes/no decisions threshold!)

2 –> As the above implies, we are now also dealing with a regulatory process — one that will have its own Wicken wiring diagram to lay out the architecture of the control loop process [cf examples here] — that controls expression of the information in the genetic code.

3 –> So, let us ask, how do we get TO a “simple . . . duplication”? Or, more specifically, first, to the functional, controlled, regulated expression of genes and their replication when a cell divides?

4 –> ANS: By having a higher order regulatory network that responds to environmental and internal states and signals in a co-ordinated fashion. Thus, once we have gene duplication, we have already had something that regulates and expresses replication, which is itself going to be FSCI-rich, if the just linked diagrams are any indication.

5 –> That implied capacity, BTW VJT, is what seems to be pushing you over the threshold of CSI when you have such a duplication.

6 –> In other words the Dembski analysis is (correctly!) picking up that if lightning strikes the same unlikely place twice, something significant is happening.

7 –> Suppose the first protein coded for takes up say 600 bits [i.e. 300 bases, or 100 AA's].

8 –> Doubling such a short protein would jump us to 1,200 functionally specific bits, and the crude, brute force “a cubit is the measure from elbow to fingertips” 1,000 bit threshold metric would pick this up as passing the FSCI threshold, on which the explanatory filter would point to design as best explanation.

9 –> In short, (i) lightning is hitting the same unlikely place twice, which implies (ii) a capacity to target that place by replicating, regulating expression, etc, and that (iii) this is so unlikely on chance plus bare mechanical necessity, that the best explanation is design. For, (iv) designers are known to do such duplications, and to set up regulatory systems that have counting loops that control how many times something is to be expressed or done: do until, do while etc.

10 –> Of course, if a counter point in the regulatory network suffers a simple mutation that triggered the double replication of the gene, that narrow outcome is well within the range of a chance event.

11 –> But that is not a pure chance event, it is within a highly complex island of function, and so we are looking at hill climbing within such an island. (Notice the centrality of this islands of function in wider config spaces concept. Where also the focus of design theory is how do we get to islands of function, not how we move around within such an island, except that hill climbing implies a deeper level of functional complexity to promote robustness and adaptability. That holds for the FSCI X-metric and it holds for the Durston et al FSC FITS metric and it holds for Dembski’s CSI hot zone metric.)

12 –> So, by broadening the context from a protein molecule of say 100 AA’s exists, which is within threshold — so, just possibly by chance on the gamut of our cosmos, it could come about by chemistry, but of course without the cell’s context, it is not functioning, it is just an unusual molecule — to one where there is a duplication in a regulatory network, we have brought into focus a much wider set of complexity, that pushes us over the FSCI threshold.

13 –> In turn, that points to design as best explanation for the SYSTEM capable of that duplication.

14 –> This reminds me of the time I worked out and had to seriously reflect on the implications of the truth table (A AND B) => Q, namely that A => Q and/or B => Q.

15 –> Deeply puzzled [this is tantamount to saying that "Socrates is a Man" and/or "Men are Mortal" would INDEPENDENTLY imply "Socrates is Mortal"], I spoke with one of our budding mathematicians over in my next door neighbour Math Dept.

16 –> But of course, he said, blowing away my previous understanding that the interaction between the two premises was key to the syllogism. If men are mortal, Socrates is mortal. If Socrates is a man, he is mortal. (That is, the Maths was implicitly capturing a deeper reality than I had spotted.)

17 –> So, I came away with a deeper respect for the math, and a higher confidence in it.

18 –> The tickler? I was led to do the particular analysis, both by truth tables and by the Boolean Algebra, because of a theological issue over interpretation of a particular verse in the NT which in effect is of form (A AND B) => Q.

19 –> So, the deeper yet lesson is that reality — on abundant experience as well as the implication of the key first principles of right reason being models of reality — is a unified whole, and if we are capturing that reality in our models, we may be surprised by deeper connexions. But, we should respect them.

20 –> So, MG’s tickler case no 1 points to a deeper set of connexions, and the crude, brute force FSCI criterion and metric comes up trumps.


I mean, seriously, WTF? How can anyone think that actually says anything meaningful?
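(For what it's worth, the only checkable bits in that wall of text are underwhelming. The arithmetic in points 7-8 is just 100 amino acids x 3 bases per codon x 2 bits per base = 600 bits, doubled to 1,200, conveniently past his 1,000-bit threshold. And the "deep" truth-table result in point 14 is an elementary tautology of propositional logic. A quick, purely illustrative Python sketch, with names of my own choosing, if anyone cares to check:

    from itertools import product

    def implies(p, q):          # material implication: p -> q
        return (not p) or q

    # Point 14's claim: ((A and B) -> Q) entails ((A -> Q) or (B -> Q)).
    # Check all eight truth assignments; the assertion never fires, so it is a tautology.
    for A, B, Q in product([True, False], repeat=3):
        assert implies(implies(A and B, Q), implies(A, Q) or implies(B, Q))

    # Points 7-8's arithmetic, assuming the usual 2 bits per DNA base and 3 bases per codon:
    protein_bits = 100 * 3 * 2              # 100 amino acids -> 300 bases -> 600 bits
    print(protein_bits, 2 * protein_bits)   # 600 1200 -- doubling crosses the 1,000-bit line

Eight rows of a truth table and one multiplication: that's the entire checkable content of twenty numbered points.)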

Poor MathGrrl is doing a great job of not getting derailed, and simply repeating "Well, that might be interesting, but could you answer my question?" Unfortunately, she's starting to get a bit disillusioned (comment #360):

markf (334),

You seem very concerned that Mathgrrl has not addressed the presence of symbols as a sign of information and therefore design. It seems a bit rough, as her challenge was for someone to provide a mathematical calculation of the CSI or information in certain cases. After all many leading ID proponents claim that CSI can be measured in bits.

If you want to introduce a different criterion for information/design that is fair enough but it doesn’t answer her challenge and it is not necessarily an evasion on her part not to answer. She has done amazingly well to respond to so many different objections on this thread and she cannot be expected to respond to every different objection, especially when it does not answer her challenge directly.


Thank you, I am indeed resisting my usual desire to respond in detail to every point. There have been several interesting topics raised in this discussion, but I am trying very hard to keep this thread focused on getting answers to the questions I posed in the original post. I must say that I’m starting to suspect that those questions will not get answered.