Posted: Jul 24, 2017 9:15 am
by GrahamH
Rumraket wrote:
GrahamH wrote:
Rumraket wrote:
GrahamH wrote:

My point is that the logic may be unsound. That doesn't need examples, and your vague recollection hardly counts, but thanks anyway.

You are right that with multiple levels of redundancy this could suppress the effects of otherwise deleterious mutations. But that's a lot of DNA wasted on "redundancy", and it doesn't explain the so-called c-value paradox.

Whether redundancy is a waste comes down to its effect on reproductive fitness, does it not? Generally, redundant systems are more reliable than non-redundant ones and could therefore potentially have higher fitness, so I don't think the redundancy can be called 'junk'. On that basis, most digital communications and storage media contain large amounts of 'junk data', because they use redundancy to achieve reliable transmission. But reliability is functional, and the 'junk' is there by design to serve a purpose. You can scratch a CD and find it still plays, but that doesn't mean the bits the scratch obliterated were 'junk'. If we take playability as a metaphor for reproductive fitness, can't we see redundancy as a potentially selectable trait?

Yes, and there are systems with redundancy known from biology, but it is a problem of scale: the math shows that natural selection simply could not maintain redundancy at the level of extra DNA observed in most organisms.

Imagine you had eleven thousand redundant copies of the same functional gene, but only one of them is ever required at any single moment.

What would happen if we speed up time? The genome is replicated, and chances are one of those copies gets a mutation. Would natural selection remove that mutation? Probably not: after all, there are 10,999 copies left to take over its function, so the mutation would be effectively invisible to selection. Next generation, the same thing happens again; another mutation creeps in. Would natural selection remove it? Probably not, because there are 10,998 remaining functional copies that could take over. And so on and so forth. You would probably end up with over 10,990 junk genes deactivated by deleterious mutations. Natural selection simply can't preserve all those copies if they don't all see active use, and they would only see use if all the others were rendered nonfunctional or significantly degraded, or if there were some extreme demand on the level of gene expression.
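That erosion can be sketched as a toy simulation. The parameters here are hypothetical illustrations (a made-up per-copy knock-out probability `mu`, scaled down from 11,000 copies to 1,100 so it runs quickly); the one assumption carried over from the argument above is that selection only intervenes when the *last* working copy would be lost, because until then the phenotype is unchanged:

```python
import random

def decay_of_redundant_copies(copies=1_100, mu=0.01, generations=2_000, seed=1):
    """Toy model: each generation, every functional copy is independently
    knocked out with probability mu. Purifying selection acts only when no
    functional copy would remain, so it rescues a single last copy but is
    blind to the loss of all the redundant ones."""
    rng = random.Random(seed)
    functional = copies
    history = [functional]
    for _ in range(generations):
        hits = sum(rng.random() < mu for _ in range(functional))
        functional = max(functional - hits, 1)  # selection preserves >= 1 copy
        history.append(functional)
    return history

h = decay_of_redundant_copies()
print(h[0], h[500], h[-1])  # redundancy erodes toward a single preserved copy
```

Because each surviving copy decays independently, the count falls roughly exponentially, and nothing stops it until only one copy is left: exactly the "invisible to selection" outcome described above.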

Now there are cases where having many copies of a gene is actually beneficial, because they all see use. One example is, IIRC, the ribosome, whose genes exist in some species in up to 700 copies (the highest number known for any functional gene). But that's only because the ribosome sees constant use: a single gene copy simply would not allow enough expression for enough new ribosomes to be biosynthesized.


I wasn't aware that the 'junk DNA' being discussed was anything like 'eleven thousand redundant copies of the same gene'. I was thinking more of a few copies of thousands of genes, or possibly different collections of genes with comparable phenotypic effects. To stretch the metaphor: we might not see how four-byte sequences encoded on a CD relate to musical notes or phrases, or how other, different bytes contribute redundant error correction, but none of those bytes are 'junk'. We could delete or alter lots of bytes and detect no impact on the sound, so that is not a valid test for junk data in this case. If a selection process were applied at the level of the sound produced, it would tend to preserve the CDs with redundancy over those without, in real-world conditions with non-zero error rates.
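The CD point can be made concrete with a much simpler scheme than the Reed-Solomon interleaving real CDs actually use: a crude triple-repetition code (all names and parameters here are my own, purely illustrative). Corrupting redundant bytes produces no detectable change in the decoded output, yet those bytes clearly are not junk:

```python
def encode(data: bytes, n: int = 3) -> bytes:
    """Repeat every byte n times -- a crude stand-in for the
    Reed-Solomon interleaving used on real CDs."""
    return bytes(b for b in data for _ in range(n))

def decode(coded: bytes, n: int = 3) -> bytes:
    """Majority vote over each group of n repeated bytes."""
    out = []
    for i in range(0, len(coded), n):
        group = coded[i:i + n]
        out.append(max(set(group), key=group.count))
    return bytes(out)

track = b"some audio samples"
coded = bytearray(encode(track))
coded[0] ^= 0xFF   # "scratch" two of the redundant bytes
coded[7] ^= 0xFF
assert decode(bytes(coded)) == track  # playback unchanged, bytes weren't junk
```

The "alter bytes, hear no difference" test fails precisely because the redundancy is doing its job, which is the disanalogy I'm pointing at: absence of phenotypic effect on deletion is not, by itself, evidence of junk.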

Thanks for your replies.
Anyway,