Posted: Apr 16, 2012 4:18 am
by Rumraket
CharlieM wrote:
Just A Theory wrote:
Second, the translation of DNA into proteins is absolutely not algorithmically incompressible due to redundancy in the third nucleotide of virtually every triplet codon. GCT is functionally the same as GCC, in fact there are 64 possible codons and only 20 amino acids plus 3 stop sequences meaning that there is a large amount of redundancy in the genetic code. It is therefore trivially easy to compress the genetic code by removing some of that redundancy.


http://www.sciencedaily.com/releases/2012/03/120328142850.htm
By measuring the rate of protein production in bacteria, the team discovered that slight genetic alterations could have a dramatic effect. This was true even for seemingly insignificant genetic changes known as "silent mutations," which swap out a single DNA letter without changing the ultimate gene product. To their surprise, the scientists found these changes can slow the protein production process to one-tenth of its normal speed or less.

As described March 28 in the journal Nature, the speed change is caused by information contained in what are known as redundant codons -- small pieces of DNA that form part of the genetic code. They were called "redundant" because they were previously thought to contain duplicative rather than unique instructions.

This new discovery challenges half a century of fundamental assumptions in biology. It may also help speed up the industrial production of proteins, which is crucial for making biofuels and biological drugs used to treat many common diseases, ranging from diabetes to cancer.

"The genetic code has been thought to be redundant, but redundant codons are clearly not identical," said Jonathan Weissman, PhD, a Howard Hughes Medical Institute Investigator in the UCSF School of Medicine Department of Cellular and Molecular Pharmacology.

"We didn't understand much about the rules," he added, but the new work suggests nature selects among redundant codons based on genetic speed as well as genetic meaning.


So the redundancy of codons is an assumption based on ignorance that has been treated as fact with very little skepticism in evidence.

Irrelevant post of the day. Do you know what is meant by code redundancy? It's an interesting article in many ways, and it opens up a whole new line of thinking about the nature of the code, but there are still multiple codons per amino acid, even if the different codons have an impact on translation speed.

The code is known to be highly robust against mistranslation, meaning there's an extremely high chance that, even with an accidental substitution during replication (the textbook case of a point mutation), the resulting codon change won't produce an amino acid change. Furthermore, during actual translation of the mRNA transcript, a non-heritable "mutation" can arise from misreads by the ribosome. Again, codon redundancy (up to six codons per amino acid) manifests itself by ensuring the correct amino acid is still incorporated.
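A minimal sketch of the point, using a small hand-picked subset of the standard RNA codon table (not the full 64-codon code): a third-position substitution in a redundant codon family leaves the amino acid unchanged.

```python
# Toy subset of the standard genetic code (RNA codons -> amino acids).
# Redundancy varies: Ala has 4 codons, Leu has 6, Met has only 1.
CODON_TABLE = {
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",  # 4-fold redundant
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu", "CUC": "Leu",
    "CUA": "Leu", "CUG": "Leu",                              # 6-fold redundant
    "AUG": "Met",                                            # no redundancy
}

def translate(codon):
    """Look up the amino acid encoded by a single RNA codon."""
    return CODON_TABLE[codon]

# A "silent" point mutation in the third position: GCU -> GCC.
# The codon changes, but the encoded amino acid does not.
assert translate("GCU") == translate("GCC") == "Ala"
```

Any of the four third-position substitutions in the GCN family is silent in this sense, which is exactly the robustness being described.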
It's even better than this, however, when we move beyond mere codon redundancy and look at the chemical and physical properties (e.g. polarity) of the amino acids relative to their cognate codons. Should a heritable substitution that changes the protein sequence, or a misread during translation, actually still happen, the code is also arranged in such a way that amino acids with similar properties tend to have similar codons, which minimizes the functional impact of the change.
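One concrete illustration of that arrangement (again a toy sketch, not a full model of code robustness): codons with U in the second position all encode hydrophobic amino acids, so even a non-silent first-position misread within that family swaps in a chemically similar residue.

```python
# Codons of the form NUU (second position = U) and the amino acids they
# encode in the standard genetic code.
SECOND_POSITION_U = {"UUU": "Phe", "CUU": "Leu", "AUU": "Ile", "GUU": "Val"}

# All four of these amino acids are hydrophobic, so a first-position
# misread (e.g. CUU -> GUU, Leu -> Val) still yields a similar residue.
HYDROPHOBIC = {"Phe", "Leu", "Ile", "Val", "Met"}

assert all(aa in HYDROPHOBIC for aa in SECOND_POSITION_U.values())
```

So even when redundancy fails to absorb an error outright, the layout of the code tends to turn it into a conservative substitution.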

Now, before we start ejaculating all over ourselves and each other with how incredibly intelligent and miraculous we think the code is, we should take a cold shower and realize the code can still be improved, and isn't actually the best possible one. Furthermore, there are a number of biochemical reasons why at least some of the code has the structure it does, which means its structure is neither pure evolutionary accident nor supervising ID-designer foresight, but the result of selection constrained by physico-chemical necessity. One rather large surprise to researchers working on the code's evolution was the finding that codes with significantly superior robustness against mistranslation lie, from an evolutionary perspective, relatively few selective steps away from the extant code on a fitness landscape of translational robustness. That means the code could be significantly improved, with a 15-20% reduction in errors, by rearranging the codons in a few places. One wonders why a supremely intelligent, supernatural designer wouldn't do this. However, if one looks at the code from an evolutionary perspective, its current structure seems to have frozen in place at a time when its usage had become ubiquitous, and further modifications to it would have required significant losses of biochemistry that had become fundamentally important to the pertinent organisms.
In that respect, the code is still regarded by many as a "frozen accident": it has stopped evolving towards what are clearly superior codes because its job is now so deeply ingrained in the fundamental metabolism of all living organisms. Evolution has no foresight, and it cannot "go back" and restart from an earlier step in code evolution to reach a new and even better code. As with everything else that evolves in nature, it's left to modify what's already there.