RatSkep Science writing Competition - Hall of Fame

All articles submitted to the competition.


Moderators: Calilasseia, ADParker


#1  Postby Mazille » Dec 03, 2010 10:17 am

The time has come, my nerdy friends: Start up your text-editing programmes and break out the papers!


This month's topic is: Debunk a popular misconception.
Do we really only use ten percent of our brain? Are humans a "Blank Slate" at birth or not? Is homoeopathy a science? I'm sure that there are scores of scientific misconceptions of varying popularity you people can think of: Go on and explain to us why they're wrong.

The competition starts as of now. Participants can post their entries in this "Submissions" thread, while everyone else is cordially invited to comment on the relative merit of the entries in the "Discussion" thread here.

Submissions can be entered until Friday, the 24th of December, which is when the voting will start. Voting will end on Friday, the 31st of December, so that all is done and dusted this year.

Articles have to meet all the criteria laid down in the rules here:

The Monthly RatSkep Science Writing Award

We have a lot of professional scientists and very well-versed laymen on the forum and so we decided to make use of those formidable intellectual resources. We challenge you to write an article about a specific topic - which will be revealed later on - and enter it into a competition for "The Monthly RatSkep Science Writing Award"!

Now, to give you an idea of how this competition is going to work:
Every month we will give you the opportunity to take part in this competition. The goal is to write the best article covering a scientific topic of your choice - although with certain constraints. For each round of the competition we will set a general topic (e.g. "Our Solar System", or "The Subatomic World"), from which you can choose any field of interest to write about. After we have announced the general topic of a new round, competitors will have three weeks' time to write their articles and enter them in the competition (see below for formal criteria), and after those three weeks users will have another week to vote for the best scientific article.

How is entering the competition and voting going to work?
    1. We will have one thread where people can post their articles and enter them in the competition. This thread will be moved from public view after each round of the competition and a new one will be opened for the next round. Only competitors may post there, and the articles will have to be approved by the staff, just like in the Formal Discussion forum.
    2. We will have another thread, where users can argue about the merits of each article that entered the competition and where they will be able to vote for their favourite article in the last week of the round via a poll. Every member will have the ability to cast three votes.
    3. There will be a third thread, where we collect all the articles that ever entered the competition. This way you will have access to a whole thread full of scientific goodness.
    4. After the general topic of a round has been announced, participants will be given three weeks to write and submit their works. Within this time the commentary-thread will be open for relevant discussions, but voting will still be disabled. After these three weeks users will be given one week to submit their votes. At the end of this period the winners will be announced and the submitted articles will enter the "Hall of Fame"-thread. Shortly after that a new round of the competition with an entirely different general topic will start.
    5. Each participant may only enter one article per round into the competition.


What are the formal criteria for the articles?

    1. Every article you enter into the competition has to be your own original work. Here on RatSkep we do not look kindly on plagiarism. It is, however, allowed to enter the competition with an article you have already posted earlier here on RatSkep, provided it meets the rest of the following criteria.
    2. Articles have to be at least 500 words long and mustn't exceed a limit of 3000 words. The maximum number of pictures and graphics is one picture per 500 words of text.
    3. Articles have to include an index of their sources, if you used any, and direct quotes have to be credited to the original author. We don't want to impose a specific quotation system on the competitors, but keep it clear, easily readable and stick to one system per article.
    4. Articles must cover either the general topic, or an appropriate sub-topic to ensure comparability of your efforts.
    5. Of course, the articles will have to stay within the limits of the FUA, as usual.


Any article that does not meet all of the above criteria will be disqualified and cannot be entered in the competition.

Why should I enter the competition?

First of all, this is the perfect opportunity to show off your superior knowledge. Furthermore, the winners will get some shiny stuff:

    1. The authors of the Top 3 articles will get a nice banner for their signatures, in gold, silver and bronze respectively. The authors of those three articles can keep these banners as long as they want since there are going to be new banners for each new round of the competition.
    2. The best entries will also be featured in a prominent spot on our shiny new front-page as soon as LIFE manages to get it up and running.
    3. And last, but not least, we might have a few surprises in store for you...



Good luck and have fun! May the best articles win. :cheers: We are looking forward to your contributions.

Re: 2nd Monthly Science writing Competition - Submissions

#2  Postby Durro » Dec 09, 2010 4:04 am

Well, one of my pet peeves when it comes to popular misconceptions is the statement that "The Theory of Evolution is only a theory". With that in mind, here's my submission.

:beer:

Durro



Evolution: Is it “Only a Theory”?

For those of you who are contemplating taking a plane flight in the near future, I'd like you to please keep one important and quite sobering fact in mind - the highly combustible, fuel laden, thin aluminium winged tube you’re thinking about boarding and streaking across the sky at speeds comparable to those of bullets, only works by using a Theory of Aerodynamics. That’s right, aerodynamics is only a theory. It’s never actually been scientifically proven that planes can fly and from the sheer number of them that crash each year - to the startled surprise of everyone concerned - it would be fair to say that clearly this theory has some serious issues.

Now if you read the above paragraph and sat there nodding sagely in agreement, then this article is for you. This short essay will endeavour to explain in fairly simple language why Evolution is best explained by a scientific Theory and why that also means in scientific terms that the Theory of Evolution is as truthful and as solid as anything you’re likely to find in the realm of science. Evolution is as real as Gravity and in many ways, is understood far better and has more evidence for it than Gravity. It's true that scientists can observe and measure many effects of gravity and make many predictions about gravitational forces but, in what may come as a surprise to you, they're not actually 100% sure what causes gravity. Even more galling, conventional concepts of gravity simply break down and don't apply at the subatomic level - apparently quarks, gluons and leptons never got the memo and go about their lives largely unaffected by gravity. However, scientists are very confident about the cause and effects of Evolution and the resultant scientific description is called "The Theory of Evolution".

The term “theory” is tossed around quite liberally these days and consequently, the strict scientific use of the word is chronically misunderstood. Intriguing characters expound colourful “theories” about who really built the pyramids or explain their pet "theory" about how Elvis faked his death and is leading a low profile life working in a 7-Eleven store in Buttplug, Tennessee. There’s even a “theory” about how shape-shifting alien lizards have secretly infiltrated governments and royal families around the globe to create a New World Order and I for one welcome our new reptilian overlords :bowdown: [/hedge betting]. However, as is the case with lawyers, 90% of all “theories” give the rest a bad name. The popular use of the word “theory” has become synonymous with “hunch”, “guess” or “idea” within the general community or worse, associated with the much derided term, “conspiracy theory”. Now while this may be fine for colloquial use or for creative flights of fantasy told by wide-eyed, dishevelled looking men with aluminium foil on top of their heads, in scientific circles the term “Theory” has an altogether different and far more substantial meaning.

In science, a Scientific Theory is not a mere guess or speculation. A Theory is a comprehensive and accurate description of natural phenomena. A Scientific Theory is supported by the best evidence available to hand and it can be used to accurately predict the outcomes of experiments and/or naturally observable events. A Scientific Theory can be verified independently by others and stands up against intense efforts to falsify it. Theories may be modified and improved as further evidence becomes available.

Examples of well known Theories include the previously mentioned aerodynamics - by the way, just so there are no misunderstandings, it’s mechanical failure, pilot error, adverse weather or other factors that cause plane crashes and not any failure of the Theory of Aerodynamics which is also a robust, accurate description of a natural phenomenon. Other Theories which are scientifically accurate, factual and evidentially supported include the Theories of Relativity, Gravity, Electromagnetism, Plate Tectonics, Nuclear Theory, Cell Theory, Germ Theory and so on. The workings of the computer monitor you’re reading this article on can be explained by Electromagnetic Theory, Germ Theory explains why you last caught a head cold, Nuclear Theory explains how atomic bombs cause so much devastation and the Theory of Evolution explains why you are how you are as a living organism.

Some people mistakenly believe that a scientific Theory goes on to become a “Law” once it’s been proven. This is fallacious, for Theories and Laws are actually different but equally accurate concepts in science. In essence, a Law describes the strict relationship between two or more variables or, to put it more simply, how something works, often with mathematical formulae involved. A Scientific Theory describes why things work and provides a comprehensive description and explanation of the mechanisms of various natural phenomena. In fact, there are sometimes both Laws and Theories which govern the same phenomenon in a complementary manner. The Theory of Gravity explains why gravity acts the way it does, while the various Laws of Gravity describe the relationships between the force, mass and separation of objects...how gravity is put into effect.

In science, a "fact" is essentially an observation or a data point. The evidenced, undeniable observation that species change over time (Evolution) is a fact. We have the fossils, we have the genetic interrelationships, and we've even witnessed it happen within our own lifetimes in both nature and the laboratory. This fact needs to be explained with reference to various laws, hypotheses and other observations - and this explanation is the Theory of Evolution. Like any other scientific Theory, it is open to scrutiny, objective assessment and experimentation. To date, nobody has been able to falsify the Theory of Evolution, and in legitimate scientific circles the Theory is uniformly accepted as the truth. It is the most accurate, most comprehensive, most evidentially supported explanation for the phenomenon of Evolution. If anyone claims that they can falsify the Theory of Evolution, there’s undoubtedly a Nobel Prize with their name on it waiting for them...but only if they can substantially back up their claims with a comprehensive alternative hypothesis, sound evidence and accurate predictions, that is! The claim that "My magic man did it!!!" does not constitute evidence, unfortunately for those who wish-think otherwise.

Scientific Theories do not get proven. "Proof" in science is the realm of mathematics and logic, where strict inviolable relationships exist and can be shown to exist without deviation, exception or variance. That the circumference of a circle equals 2 x pi x the radius of the circle can be proven. However, other areas of science are always open to new evidence to support, refine or even falsify a scientific theory. In fact, some scientists spend a great deal of time not so much trying to prove scientific theories as vigorously testing them to try and find chinks in the armour. The Theory of Gravity explains why you're not likely to spontaneously float out of your chair and hit your head on the ceiling while you are reading this, but there's always the incredibly small chance that this could be shown to be wrong (I hope you don't bump your head too hard if it does happen). And so in science, we leave open the remote possibility that even a robust Theory could be falsified whilst at the same time, acknowledging that some Theories are so strong and so unlikely to be falsified that they are simply accepted as fact. The Theory of Evolution is one of these. The jury is still out on Gravity though, so perhaps hold on to your chair just in case.

Falsification is the process which can destroy a Theory. "Falsifiable" does not mean that a Theory is incorrect or weak in any way; rather, it means that if the Theory were incorrect or inaccurate in any way, there would exist a tangible means of testing and showing this by observation and/or experiment. "Falsifiable" equates to "testable". If an accurate experiment or observation does not fit with a Theory’s predicted outcome, then the Theory may be adjusted to account for the discrepancy where possible - making the Theory more accurate in the process - or, if the discrepancy is significant, the Theory may be regarded as inaccurate. Newton's Theory of Gravity has been superseded by Einstein's improved work on gravity and relativity, but although Newton wasn't entirely accurate, his work was close enough and useful enough to send manned space rockets to the moon. A previously held Theory that is shown to be slightly inaccurate may retain some measure of usefulness in the real world, as Neil Armstrong could testify.

Evolution is both theoretically and practically able to be falsified by a number of other non-biological scientific fields. When the Theory of Evolution was proposed by Charles Darwin, DNA was completely unknown, electron microscopes were non-existent and many of the transitional fossils we are now aware of were yet to be discovered. The ages of the universe and our planet were not known, radiometric dating hadn't yet been invented and the field of plate tectonics wasn't even a twinkle in its father's eye. Any of these later scientific discoveries and data from many other co-existing fields of science could have been applied to and subsequently used to falsify various aspects of Evolution but, quite simply, none have been able to achieve this. In fact, these disparate discoveries have actually served to strengthen the evidence for the Theory of Evolution and increase our understanding of the Theory, which has been refined and improved over the last 150 years.

For example, falsification could have been and still may be achieved by finding a single fossil in the wrong geologic stratum; the layers of sediments and rocks laid down over millions of years can now be dated accurately. JBS Haldane famously quipped that the Theory of Evolution could be overturned by finding a fossilised rabbit in the Pre-Cambrian. But out of the millions of fossils unearthed around the world, not a single one (deliberate hoaxes aside) has been found in a chronologically discordant rock stratum or sediment layer. Not one. In fact, the fields of physics, chemistry, anatomy, cellular biology, molecular biology, astrophysics, geology, palaeontology, palaeobotany, plate tectonics, seismology and so on could each conceivably produce contradictory evidence that falsifies the Theory of Evolution, but the exact opposite is actually the case. All of these fields provide data, observations and scientific principles which strongly support the reality of Evolution and the accuracy of its scientific explanation, The Theory of Evolution.

There are no alternative explanations that explain biological phenomena better than Evolution. Suggestions such as the ideologically driven “intelligent design” are not falsifiable. Unfalsifiable hypotheses have no practical use, as there are no physical mechanisms to test them and they have no interaction with real life. They are simply irrelevant to science and, truth be told, to reality itself. We cannot disprove that the famous Russell's Teapot is orbiting the sun between the Earth and Mars. However, that proposition is not only unfalsifiable, but so absurd that it is not worthy of consideration for anything other than an intellectual exercise in what unfalsifiability is. ID relies on an unknowable, undetectable, alleged supernatural agent that does not interact with the material universe (if it did, its actions could be detected and supported/falsified) and so ID is an irrelevant and vacuous argument to put forward in opposition to an accurate, evidenced, well understood physical process. ID has a distinct absence of supporting evidence and, quite conversely, has been contradicted by enormous amounts of genuine evidence in favour of the Theory of Evolution. "Intelligent Design" is not a Scientific Theory - it is merely a speculative idea and, truth be told, a very poor one at that.

Without wishing to write a Biology text, just some of the evidence for The Theory of Evolution worth mentioning includes:

• The anatomical and genetic relationships between organisms
• Transitional forms between species
• Observed Evolution in nature and in laboratory settings
• Vestigial anatomical structures
• The bio-geographical spread of living creatures
• DNA similarities and functional redundancies
• Endogenous retroviral damage to DNA that can be tracked between related species with common ancestry
• Fossil records
• Observation of mutations
• Artificial selection’s similarities to natural selection

And so on. A more comprehensive list of evidence for Evolution can be found in the Talk Origins Archive at http://www.talkorigins.org/faqs/comdesc/

The evidence against Evolution is…well, nonexistent. The Theory of Evolution IS falsifiable, in that it is quite possible to find evidence to falsify it should that evidence exist. We know what to look for...but it's simply not there. No ideological opponents have been able to find any evidence or make any predictions that have falsified the Theory. When ID proponents or religious figures attack Evolution, it is with either a near complete ignorance of real science, a gross misunderstanding of real science, or by employing sheer wilful duplicity and deceit to fool the ignorant and/or the gullible. The ill-named Discovery Institute is but one religious organization deliberately engaged in a campaign of deceit and lies against The Theory of Evolution and their attempts to defend their ancient mythology in the face of modern scientific knowledge range from the bizarre to the (probably inadvertent) comedic genius.

So, the next time you hear someone state that Evolution is “only a theory”, you might ask them if they are prepared to jump off a tall building to test if Gravity is also “only a theory”. I would suggest that the answer would be no, and for good reason. Both Gravity and Evolution are self-evident and supported by voluminous evidence. They are both able to make predictions, and others can test their principles via experimentation and/or observation. But of the two, Gravity is more likely to be amended once the workings of the universe are better known and the existence of gravitons and the Higgs Boson is confirmed or excluded. Evolution is a fact. The Theory of Evolution is the truth of the matter and is here to stay. And no amount of lies, obfuscation or deception from anti-Evolution advocates will change the truth.

So, in summary, is Evolution "only a theory"? The simple answer is no. With regards to describing the mechanisms for Evolution and explaining the diversity of life on Earth, The Theory of Evolution is not "only a theory", it's the only Theory.

:cheers:

I would like to thank Mr.Samsa, who kindly proof-read an earlier draft of my essay and set me straight on a couple of critical points - particularly on the issues of facts, proof & evidence. He also helped me spell some of the big words. :dopey:

Thank you my wise friend. :cheers:


Sources & further reading :

1. http://en.wikipedia.org/wiki/Falsifiability
2. http://www.talkorigins.org/faqs/comdesc/
3. http://en.wikipedia.org/wiki/Scientific_theory
4. http://www.rationalskepticism.org/evolu ... -t402.html
5. http://en.wikipedia.org/wiki/Russell's_teapot


Edited: 10/12/2010 for a few typos and grammatical snafus, plus a few extra comments thrown in.

Re: 2nd Monthly Science writing Competition - Submissions

#3  Postby Mr.Samsa » Dec 15, 2010 1:05 am

DEBUNKING EVOLUTIONARY PSYCHOLOGY


Coyne, 2000 wrote:The latest deadweight dragging us (evolutionary biology) closer to phrenology is evolutionary psychology, or the science formerly known as sociobiology. If evolutionary biology is a soft science, then evolutionary psychology is its flabby underbelly.


Given the somewhat controversial title of this essay, it is perhaps necessary for me to preface it with a few disclaimers. Firstly, I am not a creationist and, for all intents and purposes, evolution is True™. Secondly, whenever somebody voices their scepticism over the veracity of evolutionary psychology, they are often met with the retort, “Do you not believe that the brain is a product of evolution?” with the implication that since behaviors are the product of the brain, and the brain is a product of evolution, then behaviors are the product of evolution. This logic, however, is flawed for reasons I will discuss later but I do accept that the brain is an evolved organ with implications for resulting behaviors. And thirdly, this is not a broad scale attack on evolutionary psychology – instead, my focus is on the particular approach to evolutionary psychology known as the “Santa Barbara church of psychology” (Laland and Brown, 2002).

To distinguish between the two approaches, I will follow the nomenclature used by Gray, Heaney and Fairhall (2003), where they refer to this approach as Evolutionary Psychology (EP). This approach (used by popular authors like Steven Pinker in his “How the Mind Works”) attempts to explain a wide range of human behaviors, like whether we have an evolutionary preference for green lawns, with an emphasis on the concept of a modular mind, and utilises a cartoonish view of the Pleistocene – all things considered, we have to wonder whether it should be rebranded as the “Hanna-Barbera church of psychology”.

The Selection of Adaptive Explanations


The standard tool in this area is the explanatory strategy called “reverse engineering” (Pinker, 1997). While ‘normal’ engineering attempts to design solutions to problems, Evolutionary Psychologists argue that current features of the human mind can be explained as solutions to problems presented in our Environment of Evolutionary Adaptedness (Tooby and Cosmides, 1992). For this to be a valid explanatory strategy, Gray et al. (2003) argue that three criteria must be met:

1. all traits are adaptations
2. the traits to be given an adaptive explanation can be easily characterized
3. plausible adaptive explanations are difficult to come by.

As we would expect, these assumptions are frequently violated. Given what we now know about evolutionary processes, the first is perhaps the easiest to refute as it is obvious that not all traits come about as a result of natural selection. Ignoring the more complicated issues of genetic drift, pleiotropy, and epistasis, a perfect example of why we should be sceptical of this claim is Gould and Lewontin’s (1979) concept of the spandrel which, in simple terms, is a byproduct of the selection of another trait. In this sense, asking for the adaptive explanation for some behaviors is akin to asking what the selection pressure was that caused blood to be red.

Gray et al. (2003) discuss the latter two issues in more detail, but essentially the second claim is problematic given the lack of discrete boundaries for certain traits and they use Lewontin’s (1978) example of the “chin” to demonstrate this. They also extensively dissect the third criterion but a successful rebuttal of this is perhaps exemplified by Rosen’s (1982) suggestion that the only two constraints on adaptive explanations are the inventiveness of the author and the gullibility of the audience. It is important to note that I am not suggesting that we should abandon attempts to describe behaviors using adaptive explanations, nor am I saying that all behaviors are spandrels or the result of obscure evolutionary processes, but rather I am highlighting the fact that a plausible story is not evidence in itself. This position is described by Williams (1966) thus:

The ground rule - or perhaps doctrine would be a better term - is that adaptation is a special and onerous concept that should be used only where it is really necessary.


As is clear to most evolutionary biologists, and other interested sceptical parties who are less than enamoured by the efforts of Evolutionary Psychologists, the approach described by Williams above is rarely followed and instead these scientists appear to fire off adaptive explanations with reckless abandon, with their work often consisting of nothing more than folk wisdom with a post hoc just-so story explanation. To attempt to circumvent this, Gray et al. (2003) propose two “common sense” tests: The Grandparent Test, and the Lesser-Spotted Brown Gerbil Test. The first asks us to consider, “Does this work give us any insight into human behavior and cognition beyond popular knowledge?” and the latter asks, “Would this research be publishable in major international journals if the species was a small noncharismatic mammal rather than our own?”. Although these ‘tests’ are only guides and should not be used as definitive tools for ruling out instances of research, it is interesting to note that most examples of EP found in journals fail these basic tests. However, picking examples of this kind to discuss here would be like shooting fish in a barrel, so instead I will look at research behind the evolutionary explanations for cheater detection.

Cheater Detection


Cosmides (1989) proposed the “social exchange algorithm”, which argues that for cooperation to be maintained in a society we must be able to detect cheaters – with this consistent selection pressure present, humans must have evolved a cognitive mechanism to do so. This idea began with research using the Wason Card Selection Task, which utilises a generalised “if P, then Q” rule.

[Image: the four Wason task cards, each showing either a number or a colour on its visible face]


The task is straightforward: Given the cards presented in the image above, which cards should you turn over to test the claim that if a card shows an even number on one face, then its reverse side will be red? The correct solution is that you should turn over the “8” and the “brown” card. The other cards are not logically related to the proposition.

Despite the apparent simplicity of this task, Wason (1966) found that only 10% of his subjects answered this correctly. However, the interesting twist on this logical conundrum is that when the context of the problem is framed in a way that is socially relevant, people tend to perform far better. That is, accuracy increases if we change the proposition to “if you are drinking alcohol then you must be over 18”, and change the cards above so that they read “17”, “Beer”, “22” and “Coke”, where the correct cards to turn over are “17” and “Beer” (Griggs and Cox, 1982). This means that when we replace the abstract logical notions with a real world example of the same relationship, but with the inclusion of a possible "cheater" (i.e. 17 year olds drinking beer), then we can successfully solve the task by hunting out the cheater. From this, Cosmides predicts that cheater detection is an evolved trait that should only be evoked in social exchange situations, where there is a requirement, benefit and cheater (in accordance with the assumptions of game theory). So here we have a theory that provides novel insight into human cognition and surpasses our folk wisdom, and clearly passes the Grandparent and Lesser-Spotted Brown Gerbil tests. Can we then be confident in our knowledge that this is an evolutionary adaptation? Unfortunately, not yet.
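Incidentally, the normative logic of the task itself is simple enough to sketch in a few lines of Python. To test "if P then Q", the only informative cards are those showing P (their hidden face might be not-Q) and those showing not-Q (their hidden face might be P). The function name and card labels below are illustrative, not taken from any of the cited studies:

```python
def cards_to_turn(visible_faces, shows_p, shows_not_q):
    """Return the cards that could falsify 'if P then Q' when turned over."""
    return [face for face in visible_faces if shows_p(face) or shows_not_q(face)]

# Abstract version: "if a card shows an even number, its other side is red"
picks = cards_to_turn(
    ["3", "8", "red", "brown"],
    shows_p=lambda f: f.isdigit() and int(f) % 2 == 0,  # even number visible
    shows_not_q=lambda f: f == "brown",                 # non-red colour visible
)
print(picks)  # ['8', 'brown']

# Social version: "if you are drinking alcohol, you must be over 18"
picks = cards_to_turn(
    ["17", "Beer", "22", "Coke"],
    shows_p=lambda f: f == "Beer",                      # alcohol visible
    shows_not_q=lambda f: f.isdigit() and int(f) < 18,  # underage person visible
)
print(picks)  # ['17', 'Beer']
```

Note that the two versions are formally identical: the point of the experimental literature is precisely that people fail the first framing and pass the second, even though the same rule selects the same cards in both.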

The alternative explanation suggested by Sperber, Cara and Girotto (1995) is that specific properties of the cheater detection scenario employ a more general “exception-testing” rule – this would account for the results in the cheater detection scenarios but it would not support an evolved mechanism that was responsible for cheater detection. If these “properties” could be identified and removed from the cheater detection task, and this resulted in the effect disappearing, then the empirical support for Cosmides’ theory would also disappear. To test this they developed a three-part recipe to ensure correct card selection:

1) the P-and-not-Q case is easier to mentally represent than the P-and-Q case (underage drinkers versus legal-age drinkers);
2) the P-and-not-Q case should be of more importance than the P-and-Q case (breakers of the law versus followers of the law);
3) the rule should be clear and unidirectional (there is no implication that legal-age drinkers should be drinking beer).


When looking at Cosmides’ (1989) culture-specific form of the test (where the rule was “If a man eats cassava root, then he must have a tattoo on his face” and the options “Eats cassava root”, “No tattoo”, “Eats molo nuts”, and “Tattoo” were presented - with the "cheater" being the non-tattooed man eating cassava root), Liberman and Klar (1996) noted some inconsistencies between the cheater and non-cheater scenarios. Firstly, in the non-cheating scenario there is no specific violating rule (e.g. a man with no tattoo eating cassava root); secondly, the rule for cheating is strict and exclusive, whereas the non-cheating scenario has its importance reduced through the use of qualifiers such as “usually” and “primarily”; and thirdly, the non-cheating rule is more easily interpreted as being bidirectional.

To eliminate these confounds, Liberman and Klar reversed the conditions whilst maintaining the basic cheater detection structure and found that detection of non-cheating was at 70%, whereas cheater detection was at 30% - a perfect reversal of the results found in typical cheater detection scenarios. In other words, even though there were still cheaters in the design (non-tattooed men eating cassava root), by removing the biases from the setup so that "cheater detection" was no longer the less difficult task, subjects were less likely to search for them. With the effect completely disappearing under these conditions, it becomes clear that the effect is not a result of social relevance like Cosmides suggested, but is instead simply an effect produced by experimental confounds. In other words, cheater detection is a result of the “saliency” of the cheater in these experiments, and it is this saliency that gives people the correct result, and not the presence of a “cheater”.

The Evolution of Gullibility


This failure to properly control variables in an experiment (and thus reliably establish causality) seems to be a regular feature of Evolutionary Psychology research, even when the claims meet the common sense tests suggested above. So why do these factoids spread so quickly and become cemented in popular thought? Do we have an evolved ability to be gullible about EP claims? Most people would reject such an idea, so what is it that separates “ridiculous” claims like that one from the ridiculous claims made by EP proponents?

It could be the persuasive logic and rhetoric that are often employed as support for their theories, in particular the two claims that these behavioural traits are: 1) independent of global processes, automatic and often not part of conscious thought, and 2) universal across cultures. On the surface these two arguments appear to give us good reason to believe that a behavior is a result of evolutionary processes, as both points imply that it is instinctual or innate, and even though organisms have the ability to adapt over their lifetime, there is the hidden assumption that such traits are too “complex” to have been learnt. However, these arguments do nothing to support their claims.

The first argument is countered by Gray et al. (2003) with the example of riding a bike; it is clearly a specific process that functions independently from global processes and generally we do not need to consciously operate our bodies in order to successfully ride a bike. As a demonstration of this, ask yourself what you would do if your bike started to tip to one side. Most people reply that they would lean to the opposite side to right themselves but this is incorrect as it would result in the person falling off their bike – instead, when this happens the rider will turn the handlebars which rights their centre of gravity. This meets the requirements of criterion (1), but surely nobody would think that riding a bike is an evolved trait. Part of the reason why we can easily reject such a claim is that the learning period is obvious, but when this learning phase is more subtle (like with language) we are sometimes fooled into reaching the wrong, or premature, conclusions.

Now we need to consider the second argument – that if something is universal across cultures, then it cannot possibly be learnt. Is this true? Of course not. Whilst it is necessarily true that an evolutionary behavior would be universal across cultures, it is not true that a universal behavior is an evolutionary behavior. This is because species-specific behaviors can either be a result of an innate trait, or the result of shared species-specific patterns of experience. In other words, if the environmental factor that produces a particular learning experience is present across all individuals of the species, then we would expect them all to learn the same behavior. Again I turn to Gray et al. (2003) for an example, where they point out that all humans, no matter what culture you look at, will eat soup from a bowl and not a plate. The common environmental variable here is gravity and it gives us a universal behavior – however, the literature on the evolved “eating soup from a bowl” behavior is relatively scarce in the EP journals.

Conclusion


At this point it might not be entirely clear what the popular misconception is and, arguably, I should have outlined this in the introduction, but some background on the topic was necessary. The popular misconception is that we have any significant understanding of evolved behaviors in humans. This belief is pushed out year after year in books by Pinker, Buss, Tooby and others, and it has now become more an exercise in politics than in attracting interest in science and rational thinking. Consistently these EP journals print articles discussing how women prefer the colour pink because it reminds them of red berries from the hunter-gatherer times of our ancestors (e.g. Hurlbert and Ling, 2007), ignoring the fact that the preference for pink in women is an extremely recent trend from the last few centuries (traditionally baby boys were dressed in pink and girls in blue), and ignoring the fact that hunter-gatherer roles were not separated by sex; or articles about how men are attracted to red lipstick because it looks like a vagina (e.g. Elliot and Niesta, 2008). Even the more credible claims, like cheater detection, or men being attracted to women with low waist-to-hip ratios (Singh, 1993), are plagued by poorly thought out methodological designs and an over-eagerness to ignore the relevant literature on possible learning mechanisms that could account for the data - so much so that they earn themselves the reputation of being ‘behavioral creationists’.

I started this essay with the disclaimer that this is not a broad attack on evolutionary psychology, and it is not a denial of the fact that the brain is an evolved organ, and I want to reiterate those points. The intention of this essay is to highlight the flaws and inconsistencies in the field, not to convince people to reject it wholesale, but instead to increase the scepticism surrounding this field. If a claim is made to the effect of “We evolved to do X/ prefer Y/ etc” then the question we should ask is “what research experimentally separated the learnt effects from evolved processes?”. The misconception is not that behaviors can, or have, developed in organisms as the result of evolutionary processes, but rather the belief that we can prematurely accept these conclusions based on faulty logic and an overreliance on (and misapplication of) evolutionary principles.

REFERENCES

Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187-276.

Coyne, J.A. (2000). The fairy tales of evolutionary psychology: Of vice and men. The New Republic, 3 April, pp. 27-34.

Elliot, A. J., & Niesta, D. (2008). Romantic red: Red enhances men's attraction to women. Journal of Personality and Social Psychology, 95, 1150-1164.

Gould, S.J., & Lewontin, R.C. (1979). The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist progam. Proceedings of the Royal Society of London B, 205, 581-598.

Gray, R. D., Heaney, M., & Fairhall, S. (2003). Evolutionary psychology and the challenge of adaptive explanation. In J. Fitness & K. Sterelny (Eds.), From Mating to Mentality (pp. 247-268).

Griggs, R. & Cox, R. (1982). The elusive thematic material effect in Wason’s selection task. British Journal of Psychology, 73, 407-420.

Hurlbert, A. & Ling, Y. (2007). Biological components of sex differences in color preference. Current Biology, 17, 623-625.

Laland, K.N., & Brown, G.R. (2002). Sense and nonsense: Evolutionary perspectives on human behavior. Oxford, UK: Oxford University Press.

Lewontin, R.C. (1978). Adaptation. Scientific American, 293, 212-228.

Liberman, N., & Klar, Y. (1996). Hypothesis testing in Wason’s selection task: social exchange cheating detection or task understanding. Cognition, 58, 127-156.

Pinker, S. (1997). How the mind works. London: Allen Lane.

Rosen, D.E. (1982). Teleostean interrelationships, morphological function and evolutionary inference. American Zoologist, 22, 261-273.

Singh, D. (1993). Adaptive significance of female physical attractiveness: Role of waist-to-hip ratio. Journal of Personality and Social Psychology, 65(2), 293-307.

Sperber, D., Cara, F., & Girotto, V. (1995). Relevance theory explains the selection task. Cognition, 57, 31-95.

Tooby, J., & Cosmides, L. (1992). The psychological foundations of culture. In J.H. Barkow, L. Cosmides, & J. Tooby (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 19-136). Oxford: Oxford University Press.

Wason, P. (1966). Reasoning. In, B.M. Foss (Ed.), New horizons in psychology. Harmondsworth, UK: Penguin.

Williams, G.C. (1966). Adaptation and Natural Selection. Princeton University Press, Princeton, N.J.
Last edited by Mr.Samsa on Dec 15, 2010 10:54 pm, edited 1 time in total.
Reason: Fixed typos, added some citations and tried to clarify some issues in Cheater Detection section
Mr.Samsa

Re: 2nd Monthly Science writing Competition - Submissions

#4  Postby Darwinsbulldog » Dec 16, 2010 9:32 am

My quick and dirty attempt. :-

“Hopeful Monsters” and “Living Fossils”

Creationist skepticism about evolution often revolves around macroevolution. Saltations, they claim, have a divine significance. The diversity and disparity of forms and kinds are the work of some creator-designer, for no natural mechanism is credible in their eyes.

Before we proceed, let us define evolution from Freeman & Herron (2007:p.800) :-

“Originally defined as descent with modification, or change in the characteristics of populations over time. Currently defined as changes in allele frequencies over time.”

Well, that is straightforward enough. Evolutionists talk about “innovation”, but the creationists ask: “Where does this innovation come from?”, “Where does the information come from?”, “Who was the designer of this complex life?”

How then can “living fossils” exist? Let us try to argue the creationist case. So-called “living fossils” are extant creatures that have very similar-looking ancestors in the past: the morphology does not seem to change all that much. But in a stable environment [like the sea], why do they need to change radically? A morphology that works does not need changing. Are living fossils defying evolution? Well, no, they are not. Their morphology does not change much, but what of their biochemistry or physiology? That is pretty difficult to find out from an extinct species of coelacanth. The modern definition of evolution is not violated either, because we do not know if modern coelacanths have the same genome as extinct ones. It seems unlikely that they would. Natural selection can only do so much, and even good genes can disappear over time; genetic drift makes this inevitable. But where is it written in stone that a different DNA sequence cannot produce a similar morphology? “Not uncommon” and “common” differ in length by five letters, and yet the meaning is pretty much the same. Convergence of form in similar environments does not demand a convergence in genotype. So in this case, a large change in genome results in a small change in morphology.

What then of the opposite case: that of the “hopeful monster”? Surely some radical change in morphology would demand a large genetic change? Not necessarily. Innovation in evolution may take many steps [all of which must be beneficial] to produce a large change in morphology: the vertebrate or cephalopod eye, the wing of a bird or bat. But are all such innovations made in tiny steps?

What of the turtle? This looks like a pretty radical re-design to me. Can natural selection handle this sort of radical departure, and what would the genetic changes look like? A [supposedly] radical re-design would surely require natural selection to re-write the code and make many changes all at once?

Well, actually, no. Most vertebrates have their scapulae [shoulder blades] outside their ribs, yet in turtles the shoulder girdles are inside the rib cage! One can see our god puzzling over this one. But he is a smart dude. Well, sometimes he is very smart, but the recurrent laryngeal nerve just caught him on a bad day?
But what of this evil evolutionist doctrine of natural selection? Can it redesign the turtle? Oh yes it can. Dumb as bricks, and it can do it with only a couple of changes! Normally, the genes responsible for ribs program the rib primordia to grow much faster on the outside than the inside. [Actually as cartilage, but let's not quibble; the bone comes later.] The result of this rib tissue growing faster on the outside is that the left and right ribs grow towards each other, and often meet at the sternum, thus forming the familiar rib cage.
Now what happens if the relative timing is tweaked, so that the inner and outer sides of the rib grow at more like the same rate? The result? The ribs grow more or less horizontally from the spinal column, and the shoulder blades can end up on the inside of the ribs instead of the outside. All that is required is that the outer rib tissue grows a little more slowly, or the inner rib tissue grows just a bit faster. After that, it is just a case of tweaking: dotting the "i"s and crossing the "t"s. You have a viable hopeful monster, from a tiny bit of developmental timing. This is no Michelangelo at the Sistine Chapel; this is a tiny heterochronic change, well within the reach of a random change in a Hox gene. Dumb as horse shit. If this new developmental tweak works, then hey presto, natural selection will give it the nod. And as we all know, it did. Granted, it is a bit weird-looking. But, I hope you can now see, it is not the stuff for which one would have to invoke gods, or call on the design services of a Leonardo da Vinci.

But so far I have just given you a “just so” story. I can't do much about our living fossil, but what of our weird hopeful monster, the humble turtle?

Turtles and tortoises belong to the order Chelonia. "The two main components of the shell are the dorsal carapace and the ventral plastron." (Carroll et al., 2005: p.180). Two genes, Fgf10 and Msx [which are normally associated with limb bud growth], seem to have been co-opted into the development of the ribs; the structure they pattern in the turtle embryo is called the carapacial ridge. That's it really. No hundreds or thousands of new genes required to make this huge innovation happen, just a couple of limb-patterning genes getting mixed up in rib bud growth. A couple of pathways got their wires crossed a little, causing ribs to grow straight instead of curved. Of course, god could have done it. But so could natural selection, and I don't see why we have to invoke god here. Dumb old natural selection did it again. Turtle carapaces are no more mysterious than eyes or wings. A hopeful monster triumphs again, and does it on a shoestring of a couple of genes that got a little "lost". Science is about methodological naturalism. It works. Hopeful monsters do not triumph that often, but this one did. No magic involved, as far as anyone can tell.

So what have we learned? Neither living fossils nor hopeful monsters seem to violate natural law. Living fossils don't change much morphologically but, as far as we know, can still evolve [change over time] genetically. Hopeful monsters can adopt a new bauplan, a radical departure in "design", with a couple of genes that got a bit lost, causing a change in rib geometry that left the shoulder blades inside, rather than outside, the ribs. And the designer? Natural selection, I would warrant. Assigning god as the causative agent who designed the wing of a peregrine falcon sounds real grand, but then you also have to credit him with the recurrent laryngeal nerve fiasco and piles in your bottom. Nor do weird and hopeful monsters like turtles and platypuses do you much good either, because they are not all that hard to build. Not really. No forethought, no design. Just replicators such as genes reproducing like mad, making a few errors, and the natural filter of natural selection does the rest. And time. Information does not come from some magic man; it comes from the environment. Badly copied genes trip over the "right" solution. They don't even know what they are doing. They are just polymers of deoxyribonucleic acid, for goodness' sake. And thank goodness for that. I understand that the illusion of design is strong. The wonder of it all is immense. Creationists, I know you want to give thanks to something. I do too. But credit where it is due. Unless you can give me some real evidence of your celestial watchmaker, I am going to give good old Mother Nature the credit. She is called natural selection. Is it not time you did the same?

REFERENCES:-

Carroll, S. B., Grenier, J. K., & Weatherbee, S. D. (2005). From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design. Oxford: Blackwell.

Freeman, S., & Herron, J. C. (2007). Evolutionary Analysis. New Jersey: Pearson Prentice-Hall.

FURTHER READING:-

http://en.wikipedia.org/wiki/Coelacanth

http://www.sciencemag.org/content/325/5 ... ce.1173826

http://en.wikipedia.org/wiki/Hopeful_Monster

http://en.wikipedia.org/wiki/Living_fossil

http://www.pbs.org/wgbh/evolution/index.html

#5  Postby MedGen » Dec 19, 2010 11:03 am

Not in my Genes! - A common misconception in human genetics


Setting the scene

In Andrew Niccol’s 1997 film Gattaca the protagonist sets the scene of a dystopian society ruled by genetic determinism. What makes the film so terrifying to many is the opening sequence that describes this none-too-pleasant world as “...the very near future...”, suggesting the hard reality of a world ruled, and our lives defined, by our genomes.
Vincent, Ethan Hawke’s character, we soon find out, is part of an underclass of citizens defined not by their socioeconomic status, but by the very chemical they owe their existence to. As our hero states with frightening indifference, they have discrimination down to a science.

Gattaca shocked audiences when it was released and prompted many to question the ethics of human genetics research. I aim to show why those fears are misplaced, and dispel the myth of genetic determinism, including its cousin, the common misconception known as “The Gene for X”. Hopefully along the way I will be able to shine a little light on the beautifully complex world of how our genomes shape our very form, and our fates...

I often hear or read about genetic discoveries in the media accompanied by the catchphrase “scientists have discovered the gene for [X]”. In fact, just whilst writing this I’ve heard this same old phrase trotted out with the latest research on a gene called DRD4 and an apparent link with promiscuity in human adults [1]. So apparently, and this is only according to the media I must stress, if you have a particular variant of this gene you are more likely to have problems with fidelity! Cue the many claims by less-than-faithful partners: “but my genes made me do it!” It is this kind of reporting that misleads the public about how our genomes really work. Worryingly, it is not just the public that are suckered in by this grossly simplistic view; many practising doctors have only a rudimentary understanding of how our genetic make-up really works, and are thus likely to pass on this misinformation to their patients.

The bottom line is that our genetic constitution does not define us; it influences us to varying degrees. At this juncture I must point out that this applies to all aspects of genetics, including what are called Mendelian disorders; the classically defined genetic diseases such as cystic fibrosis and sickle cell anaemia. I know what you’re thinking – “He’s lost the plot! I know that genetic mutations cause these diseases!” and of course you would be correct, however, there is a very important, if somewhat subtle detail that must be taken into account. A little background in molecular biology will hopefully shed some light on this conundrum.

Genetics 101

There is a phenomenon, or concept if you will, in molecular biology that explains how information encoded in a DNA molecule is turned into a functional protein. It is proteins that do most of the work in our cells and bodies: they turn genes on and off, they help us break down foodstuffs and extract energy from what we digest, and they control various aspects of our immune systems and how we grow as a foetus. Needless to say, they are rather important. This principle is known as the central dogma, and it states that information flows from DNA to a related molecule called RNA, before finally being translated into a sequence of amino acids, the building blocks of proteins. So we can say that the passage of information follows this sequence:

DNA->RNA->Protein

Whilst there are some important exceptions to this rule, information does not flow the opposite way from a protein to create a new DNA sequence. This is very important because if we change information in the DNA molecule that encodes the protein (i.e. a gene) we can change a part of the protein itself. Et voila! A mutation.
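As a toy illustration of that one-way flow, and of how a single-base change in the DNA changes the protein, here is a minimal Python sketch. The miniature codon table covers only the codons used in the example (the real genetic code has 64):

```python
# Toy model of the central dogma: DNA -> RNA -> protein.
# Only a handful of the 64 real codons are included, just enough for the demo.
CODONS = {"AUG": "Met", "UUU": "Phe", "UUA": "Leu", "AAA": "Lys", "UAA": "STOP"}

def transcribe(dna):
    """DNA -> RNA: thymine (T) is replaced by uracil (U)."""
    return dna.replace("T", "U")

def translate(rna):
    """RNA -> protein: read three bases (one codon) at a time until STOP."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODONS[rna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate(transcribe("ATGTTTAAATAA")))  # ['Met', 'Phe', 'Lys']
# A single-base change in the DNA (TTT -> TTA) changes the protein:
print(translate(transcribe("ATGTTAAAATAA")))  # ['Met', 'Leu', 'Lys']
```

Real translation involves ribosomes, tRNAs, reading-frame recognition and much more, but the one-way direction of information flow is the key point for what follows.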

Mutations can occur in all sorts of places in DNA, and can have all sorts of different effects: a mutation can change the shape of the protein, and thus how it works; it can stop it from interacting with other molecules; or it can give it a new function entirely. As a result a mutation can cause catastrophic damage to our cells, and our bodies, leading to a genetic disease such as cystic fibrosis or sickle cell anaemia. What is very important here is to understand that each protein does not work in isolation; it fits into a very complex machine with hundreds, often thousands, of finely tuned interacting components. Much like a watch relying on a number of different cogs turning at different rates, a cell relies on the careful regulation of interacting networks of protein molecules. The area of biology tasked with understanding these interacting networks is termed systems biology, a relatively new player on the field of molecular biology. Previously, scientists studied proteins and biological molecules in isolation (and in fact they still do), an approach called reductionism. This approach allows scientists to break apart complex problems and study them a little bit at a time. Unfortunately, it has had a knock-on effect: our interpretations are also shaped by this reductionist methodology.

So what has this got to do with genetics and genetic diseases? A few cogent examples may help to explain. There are many mutations in a particular gene, the cystic fibrosis transmembrane conductance regulator (CFTR), that cause cystic fibrosis (CF). Cystic fibrosis, like any disease, is a collection of phenotypes (symptoms) that manifest themselves together in a person because of a common cause, in this case a mutation in our gene CFTR. This is where we need to be careful; this gene does not cause CF. Specific mutations within this gene lead to the formation of multiple phenotypes that co-segregate together. That doesn’t mean that any mutation within this gene will always cause CF; in fact some mutations lead to a related, but less severe, condition called congenital bilateral absence of the vas deferens (but only in males of course!). We find a similar scenario in sickle cell anaemia. Mutations in one of the proteins that make up haemoglobin, the protein responsible for carrying oxygen in red blood cells so that it can be delivered to all of the tissues of our bodies, lead to the formation of stiff fibres in red blood cells that cause them to take on a characteristic “sickle” shape. These sickled cells do not carry oxygen to tissues and clog up small capillaries due to their inflexibility. The cells get damaged and die, and also rupture small blood vessels, the result of which is anaemia and chronic internal bleeding. This condition is not the same in everyone, though. There are a number of different factors that can affect how severe the disease becomes, some of which may also be genetic, but importantly some of which can be environmental.

In both of these scenarios mutations can be associated with the disease as a whole, but very rarely is the phenotype mapped 1:1 with the mutation; this is one of the major difficulties of molecular pathology. Each of these mutations is generally rare (the examples of CF and sickle cell anaemia are more common in certain populations for other interesting reasons that cannot be explained here), so what about the 10 million or so common variants we all carry around with us?

There are a number of mutations that are so common within the human gene pool (or any species’ gene pool for that matter) that they are instead referred to as polymorphisms (meaning “many forms”). Some of these have an effect on protein function and thus the observed physiological phenotype, whilst others may affect the regulation of a gene. Importantly, some of these may have no function at all, a fine example of truly neutral mutations.

[Figure: risk-allele frequency plotted against effect size (penetrance), from ref. 2]

The diagram above illustrates the link between how common a variant is and how large an effect it has on the phenotype. Mutations with high penetrance, those that almost always produce the trait in carriers, are the ones that cause Mendelian diseases like sickle cell anaemia and cystic fibrosis. If carrying a mutation does not guarantee that the trait is expressed, and only modestly raises its likelihood, then we say it has incomplete, or lower, penetrance.

Enter the Post-Genomics Era

In recent years a hypothesis developed within human genetics research that it might be possible to investigate the genetic component of complex characteristics: polygenic traits that are the summation of multiple interacting genetic variants, each with a small individual influence on the trait of interest. The trait may be continuous, such as adult height or intelligence, or it may be a binary outcome such as the occurrence of a disease like cancer or type-2 diabetes (also called adult-onset diabetes). Thanks to technological advances, the tool used to investigate this hypothesis is the genome-wide association study, the aim being to survey as much of the genome as possible at once to try and capture any potential influence on the trait of interest.

Thousands of these genome-wide studies have been performed in the last 5 years or so, attempting to discover the genetic component of a wide range of diseases and characteristics. The result has been a mountain of associated genetic variants, with very little known about their functional consequences. It is thought that this may partly be due to type I errors (false positive results). An alternative, more...biological explanation is that, because each variant has such a small impact on the trait or disease, changes in the way the variants are classified will be needed to highlight clearer associations.

Another major issue with this common disease-common variant hypothesis is that very little of the variation between individuals can be explained by these genetic variants alone. Some have suggested that perhaps they have very little role to play - cue the Nature vs Nurture debate. Importantly, it is how our environments and genomes interact that modulates any influence either may have on the outcome of a particular characteristic.

Anything for a quick buck?

No discovery would be complete without someone trying to make a quick buck or two from it. Genetics research is no different, and in recent years several companies have sprung up that will offer you a read-out of parts of your genome for the modest sum of $500. These companies copped a lot of flak recently from the FDA, not because of the potential problems with interpreting the results of the tests they sell, but because the results are associated with diseases and thus come under the guise of medical diagnostics, which require strict FDA regulation; i.e. the FDA wanted to police them, so it used the flimsiest possible reason for classifying them as diagnostic tools. As a result these direct-to-consumer genetic tests have had a bit of a rocky infanthood for political, not scientific, reasons.

What about the scientific reasons?

Importantly, and this has been a little overshadowed by the FDA’s heavy-handedness, what these companies offer is not so far removed from the scenario we see moments after the birth of Vincent in Gattaca. They provide discerning customers with a prediction of their health on the basis of their genetic make-up. Whoah! Hang on! Does that mean we have already entered our Gattaca-style world without realising? In short, no. At length: what these companies provide is a very crude prediction of the impact of individual variants on the customer’s health for a range of different diseases, plus a number of superficial traits, such as wet or dry earwax, the ability to taste a bitter chemical called phenylthiocarbamide (PTC), and whether you have curly hair or not (I’m not sure I need a genetic test for two of those). But hey! It’s just a bit of fun, right? Well, yes and no. Yes, because it helps to bring the complex world of human genetics into the public domain; but also no, because people who take these tests may take the results to their local GP, only to find that the GP’s knowledge of these tests is inferior to their respective Wikipedia page.

Let’s take an example. A variant within an important gene involved in the functioning of our immune systems, called IL7RA, has been associated with a slightly increased risk of developing multiple sclerosis. Now, it sounds quite serious to be at an increased risk of developing such a debilitating autoimmune condition. But no need to go and get a wheelchair quite yet; put the phone down, there’s no need to book an appointment with your GP. We need to look at the level of the risk first. In the study itself, people who carried two copies of this particular polymorphism were 1.08-fold more likely to develop MS than a population of healthy individuals. One...point...zero...eight fold. The average lifetime risk for Joe Bloggs on the street is about 0.07%, in the absence of any other information that might increase it. Carrying two copies of this variant increases that lifetime risk to, wait for it...roughly 0.076%! Even these risks are dependent on your age and your ethnicity. Not all human populations even carry around the same genetic variants. Of course, if you do end up developing MS, you can be sure that those little mutations may, not will, have had an influence on when you develop the disease, and potentially how mild or severe it is.

This example is fairly typical of most genetic variants that affect our chances of developing a particular disease or reaching a particular height. Even if we manage to discover all of the genetic influences on diseases like MS, we still won’t be able to say with 100% certainty that you will or won’t develop the disease, because the genetic component probably only makes up 30-40% of the factors that influence disease risk. For a continuous trait, likewise, the variants only explain a portion of the variation between individual people.
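A minimal sketch of the arithmetic involved, assuming, purely for illustration, a baseline lifetime MS risk of 0.07% and the 1.08 relative risk for variant carriers:

```python
# Turning a relative risk into an absolute risk.
baseline_risk = 0.0007      # assumed ~0.07% lifetime risk for the general population
relative_risk = 1.08        # carriers of two copies of the IL7RA variant

carrier_risk = baseline_risk * relative_risk
extra_risk = carrier_risk - baseline_risk

print(f"carrier risk: {carrier_risk:.4%}")   # carrier risk: 0.0756%
print(f"extra risk:   {extra_risk:.4%}")     # extra risk:   0.0056%
```

The headline "1.08-fold increased risk" translates into an extra absolute risk of only a few cases per 100,000 people, which is why relative risks quoted for rare diseases make for misleading headlines.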

Tying up Loose Ends

How does this relate to our “Gene for X” misconception? Hopefully the more astute amongst you will have noticed that I have not said that a particular gene is the cause of any disease or trait; rather, it is specific variants that we find associated with characteristics. So next time you hear that scientists have discovered the gene for coffee drinking, or some other equally inane human behaviour, take a second to recall that obscure essay you read on an internet forum once, and think about how likely it is that there is a single gene that controls whether you like your caffeine from coffee or tea.

All of our behaviours, characteristics and risks of disease are of course going to be influenced by our genetic make-up. I hope I have been able to paint a picture of how the sum of the genetic variation we carry with us plays a major role during the course of our lives, but does not define who we are.

References

[1] Garcia et al (2010) Associations between dopamine D4 Receptor gene variation with both infidelity and sexual promiscuity. PLoS ONE 5(11): e14162. doi:10.1371/journal.pone.0014162

[2] McCarthy et al (2008) Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nat Rev Genet 9, 356-369

[3] Zhang et al (2005) Two genes encoding immune-regulatory molecules (LAG3 and IL7R) confer susceptibility to multiple sclerosis. Genes Immun 6(2):145-52.

Edit: Typo correct - "differens" to "deferens"
Last edited by MedGen on Dec 19, 2010 11:42 am, edited 1 time in total.

Re: 2nd Monthly Science writing Competition - Submissions

#6  Postby Darkchilde » Dec 19, 2010 11:14 am

DEBUNKING ASTROLOGY


Abstract:
Astrology is false.

Astrology: what is it and what does it claim?

Astrology is the belief that the positions of the Sun, Moon and planets along the ecliptic at the time of a person's birth influence, via some mysterious force, that person's character, and can determine his or her future.
Astrology claims that there are 12 signs in the ecliptic, or zodiac, and that the Sun spends one month in each sign. The 12 signs and their dates, according to astrology, are as follows:

SIGN            DATES
ARIES 20 March - 20 April
TAURUS 20 April - 21 May
GEMINI 21 May - 21 June
CANCER 21 June - 22 July
LEO 22 July - 23 August
VIRGO 23 August - 23 September
LIBRA 23 September - 23 October
SCORPIO 23 October - 22 November
SAGITTARIUS 22 November - 22 December
CAPRICORN 22 December - 20 January
AQUARIUS 20 January - 18 February
PISCES 18 February - 20 March


The Mysterious Force that governs our lives, according to astrology.

But what is this mysterious force that emanates from the stars and planets, from the Moon and the Sun, and that has so much influence on our lives? If we listen to astrologers, we are looking at a force that is not affected by distance or mass.
There are four fundamental forces in the universe: the weak interaction, the strong interaction, electromagnetism and gravity. We can discount the strong and weak interactions, as they have no bearing on massive bodies at a distance; those forces are responsible for reactions and effects within atoms, like radioactive decay and nuclear fission. We are left with two forces: gravity and electromagnetism.

Let's take a first look at electromagnetism. Immediately, we run into the first problem: the planets are electrically neutral! There is no electric charge acting on us from Mars or from Jupiter, and even if there were, electrostatic effects diminish with distance. How about magnetic fields? The Sun's magnetic field is huge, and does influence the Solar System. However, not every planet has a magnetic field of the same strength, and again, the influence of a magnetic field diminishes with distance.
We have eliminated electromagnetism.

How about the last force, gravity? Again, we run into a major problem: gravity depends on mass and distance. The more massive an object is, the stronger its gravity, but the gravitational attraction grows weaker and weaker the further away we are from the object. So Mars does not exert the same gravitational influence as Jupiter does, and of course the Sun is the most massive object in the Solar System.
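As a rough illustration of just how feeble these planetary tugs are, Newton's law gives the acceleration a body induces as a = GM/r². A back-of-the-envelope sketch (the figures are approximate, and the 70 kg bystander standing half a metre away is my own hypothetical example, not from the essay):

```python
# Gravitational acceleration at Earth due to Mars at a close approach,
# versus that due to a nearby 70 kg person. Figures are approximate.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m_mars = 6.4e23          # mass of Mars, kg (approx.)
r_mars = 5.6e10          # Mars-Earth distance at a close approach, m (approx.)
m_person = 70.0          # kg
r_person = 0.5           # m

a_mars = G * m_mars / r_mars**2
a_person = G * m_person / r_person**2

print(f"Mars:   {a_mars:.2e} m/s^2")
print(f"Person: {a_person:.2e} m/s^2")
```

On these rough numbers, the person standing next to you out-pulls Mars, and both pulls are more than a hundred million times weaker than the Earth's.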

Maybe there is another mysterious force out there that influences our lives the way astrologers claim. If that were so, then scientists, especially astronomers, would have found some evidence for this force: there would be repeatable patterns during births, or during major events; there would be something that actual scientists would be able to measure. Yet even astrologers do not know what this mysterious force is!

The Constellations of the Ecliptic: 12 or 13?

Astrology is based on dividing the ecliptic into 12 equal parts. According to astrologers, the Sun and the major planets take equal times, in their apparent motions, to traverse each sign.

First of all, there are 13 constellations in the ecliptic: there is also the constellation of Ophiuchus. Have you ever known anyone whose star sign is Ophiuchus? Let's look at the 13 constellations of the ecliptic, more commonly known as the Zodiac, and the actual dates when the Sun is seen in each constellation:



SIGN            DATES
ARIES 19 April - 14 May
TAURUS 14 May - 21 June
GEMINI 21 June - 21 July
CANCER 21 July - 11 August
LEO 11 August - 17 September
VIRGO 17 September - 31 October
LIBRA 31 October - 21 November
SCORPIO 21 November - 30 November
OPHIUCHUS 30 November - 18 December
SAGITTARIUS 18 December - 21 January
CAPRICORN 21 January - 17 February
AQUARIUS 17 February - 13 March
PISCES 13 March - 20 April


Let's compare the two tables. What is the first thing that we notice? The Sun does not move through each constellation in approximately a month, as astrologers claim; the time the Sun needs to traverse each one varies from about 9 days to more than 40 days! And of course, there is a shift of at least one sign between where the Sun actually is and what the astrological table says.

This discrepancy is mainly due to a natural phenomenon called the precession of the equinoxes, which arises from the motion of the Earth's rotational axis. Because of gravitational influences, the axis traces out a cone once every 26,000 years. The astrological tables as astrologers now use them were compiled about 2,500 years ago, in Babylonia. In that time the axis has moved through about 35 degrees of its cycle, so its orientation has changed, and as a consequence so have the apparent positions of the Sun and the planets along the ecliptic.
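The size of that shift is simple arithmetic: the axis completes 360 degrees in roughly 26,000 years, accumulated over the roughly 2,500 years since the tables were drawn up:

```python
# How far has the equinox precessed since the Babylonian tables were compiled?
precession_period_years = 26_000     # full cycle of the Earth's axis (approx.)
years_elapsed = 2_500                # age of the astrological tables (approx.)

shift_degrees = 360 * years_elapsed / precession_period_years
signs_shifted = shift_degrees / 30   # each astrological sign spans 30 degrees

print(f"Shift: {shift_degrees:.1f} degrees, about {signs_shifted:.2f} signs")
```

In other words, the zodiac has slipped by a bit more than one whole sign since the tables were written, which is exactly the offset the two tables above display.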

The above shows that every calculation astrologers make is simply wrong, because they have taken into account neither actual natural phenomena, like the precession of the equinoxes, nor the fact that the signs do not each occupy an equal stretch of the ecliptic.


Some interesting facts about the Solar System, stars, constellations and more, that astrology fails to consider.

Let's get back to the Solar System. Astrology includes the 9 planets in its predictions... 9 planets? But the Solar System currently has 8 planets! Pluto is officially a dwarf planet, like so many other trans-Neptunian objects; further dwarf planets, such as Eris, have been discovered, along with candidates like Sedna. How about the asteroids between Mars and Jupiter? Shouldn't they exert an influence? How about some of the more massive moons of Jupiter, like Europa? Or Saturn's moon Titan? How about Pluto's moon Charon? The comets? The objects in the Oort Cloud? But no, astrology does not recognise those. Only the 9... err, 8 planets and one dwarf planet, plus the Sun and the Moon.

What about before the discovery of Uranus, Neptune and Pluto? What did astrologers do before those objects were known? Did they notice that something was wrong with their calculations and natal charts? No, they did not. And that is more evidence that astrology is false. Notably, Neptune was discovered precisely because of its gravitational influence on the other planets: minor wobbles in the orbit of Uranus allowed astronomers to calculate that there was another planet in the Solar System, and to infer its position and mass. (Uranus itself was first spotted through a telescope by William Herschel, and Pluto was found in a deliberate photographic search.)

Now, let's look at the night sky and the constellations. Imagine those constellations 10,000 years (ten thousand!) from now. What do you see? If you are an astrologer, you see the exact same constellations in the exact same positions as today. If you are an astronomer, you know that the constellations will no longer look the same.

The Sun and the stars in our galaxy orbit the galactic centre. That means that even 2,500 years ago the stars were not in exactly the same positions as they are today: they have shifted, due to those motions. We cannot perceive this within our lifetimes because of the enormous distances involved and the small apparent velocities of the Sun and the other stars. The Sun needs about 225-250 million years to make one full orbit around the galactic centre.

Just look at Barnard's Star, which moves fast enough that its motion can be seen from year to year:

[Image: the position of Barnard's Star shifting from year to year]
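To put a number on "quite fast": Barnard's Star's proper motion is about 10.3 arcseconds per year, the largest known. Over a human lifetime (an 80-year span is my own illustrative choice) that adds up to an easily measurable drift:

```python
# Barnard's Star has the largest known proper motion: ~10.3 arcsec/year.
proper_motion_arcsec = 10.3
years = 80                              # roughly a human lifetime

drift_arcsec = proper_motion_arcsec * years
drift_degrees = drift_arcsec / 3600     # 3600 arcseconds per degree

print(f"Drift over {years} years: {drift_degrees:.2f} degrees")
print("Moon's apparent diameter, for comparison: ~0.5 degrees")
```

Almost half the apparent width of the full Moon, from one star, in one lifetime. Most stars move far less, but over millennia those small motions add up.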

Stars also have a limited lifespan. It is huge compared to a human life, but like everything else stars die: some in huge explosions known as supernovae, some more quietly.

And so the constellations change in the night sky. Not as fast as a human life passes, but slowly through the ages; enough, though, that between the time of the Babylonians and today there are perceptible changes in the night sky.

Conclusion

There is no known force that could account for the mysterious influence of astrology. The Sun does not spend the same amount of time in each constellation. There are more objects in the Solar System than astrologers can, or do, "calculate" for. Astrology ignores the precession of the equinoxes and the actual motions of the Sun and the stars, which slowly change the very shapes of the constellations.

The conclusion is the same as the abstract: astrology is false.

References:

  1. Plait, Phil (2002) Bad Astronomy (Chapter 21: Mars is in the Seventh House, but Venus Has Left the Building: Why Astrology Doesn't Work. pp 212-220) Wiley.
  2. Sagan, Carl (1996) The Demon Haunted World: Science as a Candle in the Dark (Chapter 17: The Marriage of Skepticism and Wonder, pp 302-304) Ballantine Books, New York
  3. http://en.wikipedia.org/wiki/Astrology
  4. http://en.wikipedia.org/wiki/Zodiac
  5. http://en.wikipedia.org/wiki/Barnard%27s_Star
  6. http://www.badastronomy.com/bad/misc/astrology.html
  7. http://skepdic.com/astrolgy.html
  8. http://www.astrosociety.org/education/astro/act3/astrology3.html


#7  Postby hackenslash » Dec 19, 2010 4:23 pm

Order, Order!

Entropy in Thermodynamics, Statistical Mechanics, Evolution and Information Theory.


One of the most common misconceptions, and possibly the one most difficult to properly treat, is that evolution is a violation of the law of entropy. In this essay, I want to provide a comprehensive treatment of entropy. This will not be easy in the word limit (nor in the time I have to write this), not least because it is a concept horribly misunderstood, often because of the way it's treated in popular science books by people who really should know better. By the time I am finished, one of two things will have happened. Either I will have simplified entropy to the state where it can be easily understood, or I will have demonstrated that I don't actually understand it myself.

So what is entropy? Let's begin with what it is not, and a quote from a paper specifically dealing with entropy in evolution:

Styer 2008 wrote:Disorder is a metaphor for entropy, not a definition for entropy.

Metaphors are valuable only when they are not identical in all respects to their targets. (For example, a map of Caracas is a metaphor for the surface of the Earth at Caracas, in that the map has a similar arrangement but a dissimilar scale. If the map had the same arrangement and scale as Caracas, it would be no easier to navigate using the map than it would be to navigate by going directly to Caracas and wandering the streets.) The metaphor of disorder for entropy is valuable and thus imperfect. For example, take some ice cubes out of your freezer, smash them, toss the shards into a bowl, and then allow the ice to melt. The jumble of ice shards certainly seems more disorderly than the bowl of smooth liquid water, yet the liquid water has the greater entropy. [1]


The problem is that there are different definitions of entropy from different branches of science and, while they are all related, they're not actually equivalent. In thermodynamics, entropy is a measure of how much energy in a system is unavailable to do work. In statistical mechanics, it's a measure of uncertainty, specifically the uncertainty of a system being in a particular configuration. This can loosely be described as the number of different ways a system could be reconfigured without changing its appearance. This latter is also related to information entropy or Shannon entropy.

So, beginning with statistical mechanics: In statistical mechanics, entropy is a measure of uncertainty or probability. If a system is in an improbable configuration, it is said to be in a state of low entropy.

The classic analogy employed here is the desktop strewn with bits of paper. You can move one piece of paper without appreciably altering the appearance of the desktop. Statistically speaking, this configuration, or one of the many configurations it could have while still remaining untidy, is more probable than one in which the desktop is tidy. Thus, it is in a state of high entropy.
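The desktop intuition can be made quantitative with a toy model of my own devising (not from the essay): treat each of 20 papers as either in place or astray. A "macrostate" such as tidy or half-messy is realised by some number of microstates, and the messy macrostates vastly outnumber the tidy one:

```python
from math import comb

# Toy desktop: 20 papers, each either in place or out of place.
# A "macrostate" = how many papers are out of place; the number of
# microstates realising it is the binomial coefficient C(20, k).
n_papers = 20
tidy = comb(n_papers, 0)        # exactly one perfectly tidy arrangement
half_messy = comb(n_papers, 10) # ways to have 10 papers out of place

print(f"Tidy microstates:  {tidy}")
print(f"Messy microstates: {half_messy}")
```

With only 20 papers there are already 184,756 ways to be half-messy and exactly one way to be perfectly tidy, which is why, statistically, desktops end up untidy.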

This, of course, is where the idea of entropy as disorder actually comes from, and it does not reflect the definition of entropy employed in thermodynamics, although there is a relationship, as I shall show.

Moving on, let's deal with the definition of entropy in thermodynamics. This will require the laying of a little groundwork:

Firstly, let's look at what the Law of Entropy actually states, as even this is a source of confusion.

The Second Law of Thermodynamics states that, in general, the entropy of a system will not decrease, except by increasing the entropy of another system. [2]

This is an important point. It does not state that entropy will always increase, as many suggest, only that it will not, in general, decrease. There are no physical principles that prohibit the persistence of a thermodynamic state (except with due reference to the Uncertainty Principle, of course). Indeed, entropy in this instance can be defined as a tendency towards equilibrium, which is just such a state! Measurement of this tendency can be quite simply stated as the amount of energy in a system that is unavailable to perform work.

It is probably apposite at this point to deal with the three main classes of thermodynamic system. [3]

1. Open system: An open thermodynamic system is the easiest to understand. It is simply a system in which both energy and matter can be exchanged in both directions across the boundary of the system. Indeed, it is not stretching the point too much to state that an open system is one with no boundary. We define a boundary only because that defines the limits of the system, but from a thermodynamic perspective, the boundary is only a convenience of definition. This is distinct from the two other main classes of thermodynamic system, in which the boundary actually plays an important role in the operation of the system.

2. Closed system: A closed thermodynamic system is one in which heat and work may cross the boundary, but matter may not. This type of system is further divided based on the properties of the boundary. A closed system with an adiabatic boundary allows the exchange of work, but not heat, while a rigid boundary prevents work being done through volume change, but does allow heat exchange.

3. Isolated system: An isolated system is a theoretical construct that, apart from the universe itself, probably does not exist in reality. It is a system in which no heat, work or matter can be exchanged across the boundary in either direction. There are two important things to note from my statement of this. The first is that my usage of the word 'universe' is in line with my standard usage, and does not describe 'that which arose from the big bang', but 'that which is'. The second is that we know of no system from which gravity, for example, can be excluded, and since gravity can apparently cross all boundaries, there can be no such thing as an isolated system within our universe unless a barrier to gravity can be found, hence the statement that there is probably no such thing as an isolated system except the universe itself.

Now that that's out of the way, let's attempt a rigorous definition of entropy in a thermodynamic sense.

Entropy in a thermodynamic system can be defined a number of ways, all of which are basically just implications of a single definition. Rigorously defined, it is simply a tendency toward equilibrium. This can be interpreted in a number of ways:

1. The number of configurations of a system that are equivalent.
2. A measure of the amount of energy in a system that is unavailable for performing work.
3. The tendency of all objects with a temperature above absolute zero to radiate energy.

Now, going back to our analogy of the untidy desktop, this can now be described as an open system, because heat, matter and work can be exchanged in both directions across its boundary. As stated before, this is a system in which statistical entropy is high, due to the high probability of its configuration when measured against other possible configurations (there are more configurations of the system that are untidy than there are configurations that are tidy). In other words, and in compliance with the first of our interpretations above, it is a system which has a high number of equivalent configurations, since there are many 'untidy' configurations of the system, but only a few 'tidy' configurations. To bring the desktop to a state of lower entropy, i.e. a tidy desktop, requires the input of work. This work will increase the entropy of another system (your maid, or whoever does the tidying in your office), giving an increase in entropy overall. This, of course, ties the two definitions from different areas of science together, showing the relationship between them. They are not, of course, the same definition, but they are related. It is also the source of the idea of entropy as disorder.

In evolution, the role of the maid is played by that big shiny yellow thing in the sky. The Earth is an open system, which means that heat, work and matter can be exchanged in both directions across its boundary. The input of energy allowing a local decrease in entropy is provided in two forms, but mainly by the input of high-energy photons from the Sun. This allows photosynthesising organisms to extract carbon from the atmosphere in the form of CO2 and convert it into sugars. This is done at the expense of an increase in the entropy of the Sun. Indeed, if Earth and Sun are thought of as a single system, then a local decrease in entropy via work input in one part of the system increases the entropy of the system overall.

Now, a little word on information entropy:

In information theory, there is a parameter known as Shannon entropy, which is defined as the degree of uncertainty associated with a random variable. What this means, in real terms, is that the detail of a message can only be quantitatively ascertained when entropy is low. In other words, the entropy of a message is highest when the highest number of random variables are inherent in the transmission or reception of a message. This shows a clear relationship between Shannon entropy and the definition of entropy from statistical mechanics, where we again have the definition of entropy as uncertainty, as defined by Gibbs. A further relationship is shown when we look at the equations for entropy from statistical thermodynamics, formulated by Boltzmann and Gibbs in the 19th Century, and Shannon's treatment of entropy in information. Indeed, it was actually the similarity of the equations that led Claude Shannon to call his 'reduction in certainty' entropy in the first place!

Shannon:
H = -Σ p_i log2 p_i

Gibbs:
S = -k_B Σ p_i ln p_i

The relationship here is clear, and the motivation for Shannon's labelling this parameter 'Entropy'.
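As a toy illustration of Shannon's formula (my own example, not from the essay): a uniform distribution over eight symbols maximises uncertainty at exactly 3 bits, while a sharply peaked distribution carries almost none:

```python
from math import log2

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in bits; zero-probability terms contribute nothing."""
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [1 / 8] * 8                # eight equally likely symbols
peaked = [0.99] + [0.01 / 7] * 7     # one symbol almost certain

print(shannon_entropy(uniform))      # 3.0 bits: maximal uncertainty
print(shannon_entropy(peaked))       # a small fraction of a bit
```

Swapping log2 for the natural log and multiplying by Boltzmann's constant turns this same expression into the Gibbs entropy, which is precisely the resemblance that led Shannon to the name.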

Now, a bit on what is the most highly entropic entity that is currently known in the cosmos, namely the black hole. In what sense is it highly entropic? Let's look at those definitions again:

1. The number of configurations of a system that are equivalent: Check. This can be restated as the number of internal states a system can possess without affecting its outward appearance.
2. A measure of the amount of energy in a system that is unavailable for doing work: Check. All the mass/energy is at the singularity, rendering it unavailable.
3. The tendency of all objects with a temperature above absolute zero to radiate energy: Check. The black hole does this by a very specialised mechanism, of course. Energy cannot literally escape across the boundary, because to do so would require that it travelled at greater than the escape velocity for the black hole which is, as we all know, in excess of c. The mechanism by which it radiates is through virtual particle pair production. Where an electron/positron pair are produced, via the uncertainty principle, at the horizon of the black hole, one of several things can occur. Firstly, as elucidated by Dirac [4] the electron can have both positive charge and negative energy as solutions. Thus, a positron falling across the boundary imparts negative energy on the black hole, reducing the mass of the singularity via E=mc2, while the electron escapes, in a phenomenon known as Hawking radiation, thus causing the black hole to eventually evaporate. This is Hawking's solution to the 'information paradox', but that's a topic for another time. Secondly, as described by Feynman [5], we can have a situation in which the electron crosses the boundary backwards in time, scatters, and then escapes and radiates away forwards in time. Indeed, it could be argued that Feynman, through this mechanism, predicted Hawking radiation before Hawking did!

Now, the classic example of a low-entropy system is a collection of gas molecules gathered in one corner of a box: a highly ordered state. As the gas molecules disperse, entropy increases. But wait! As we have just seen, the black hole has all its mass/energy at the singularity, which means that the most highly entropic system in the cosmos is also one of the most highly ordered! How can this be? Simple: entropy is not disorder.
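The claim that black holes are maximally entropic can even be put into numbers via the Bekenstein-Hawking formula, S = k_B A c³ / (4Għ), where A is the area of the event horizon. A rough sketch for a black hole of one solar mass (constants rounded; the worked example is my own addition, not from the essay):

```python
from math import pi

# Bekenstein-Hawking entropy. Substituting the horizon area
# A = 16*pi*G^2*M^2/c^4 into S = k_B*A*c^3/(4*G*hbar) gives
# S = 4*pi*G*k_B*M^2 / (hbar*c).
G = 6.674e-11       # m^3 kg^-1 s^-2
k_B = 1.381e-23     # J/K
hbar = 1.055e-34    # J s
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg

S = 4 * pi * G * k_B * M_sun**2 / (hbar * c)
print(f"Entropy of a solar-mass black hole: {S:.2e} J/K")
```

The answer, around 10^54 J/K, dwarfs the entropy of the star the hole formed from, despite all that mass/energy sitting in the most "ordered" configuration imaginable.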

Finally, just a few more words from the paper by Styer:

Daniel F Styer wrote:This creationist argument also rests upon the misconception that evolution acts always to produce more complex organisms. In fact evolution acts to produce more highly adapted organisms, which might or might not be more complex than their ancestors, depending upon their environment. For example, most cave organisms and parasites are qualitatively simpler than their ancestors.


So, when you come across the canard that evolution violates the law of entropy, your first response should be to ask your opponent for a definition of entropy, as it is almost certain that a) he is using the wrong definition and/or b) that he has no understanding of what entropy actually is.



References:
[1] Daniel F Styer Evolution and Entropy: American Journal of Physics 76 (11) November 2008
[2] http://www.upscale.utoronto.ca/PVB/Harr ... hermo.html
[3] http://en.wikipedia.org/wiki/Thermodynamic_system
[4] P.A.M. Dirac, The Quantum Theory of the Electron (1928)
[5] http://www.upscale.utoronto.ca/GeneralI ... atter.html

Further reading:
Simple Nature: Benjamin Crowell
http://www.upscale.utoronto.ca/GeneralI ... tropy.html





#8  Postby natselrox » Dec 19, 2010 8:06 pm

Canon in S(cience)

Debunking Common Sense Psychology


Canon (music): In music, a canon is a contrapuntal composition that employs a melody with one or more imitations of the melody played after a given duration (e.g. quarter rest, one measure, etc.). The initial melody is called the leader (or dux), while the imitative melody, which is played in a different voice, is called the follower (or comes). The follower must imitate the leader, either as an exact replication of its rhythms and intervals or some transformation thereof.[1]



Christmas time! You’re walking through the busiest street in town. Everyone’s out shopping. You jostle past moving blobs of wool and fur, dodge those pesky little kids and suddenly you see Mr. Jones in the cake shop, bargaining as usual. Cheeky old bugger! But he’s a nice man. Everything is normal, festive, and fine. Now stop for a moment. Think of what you were doing. “I was walking on the street, you dumbo!” Of course you were. But try and describe every action of yours in detail. You were seamlessly moving past obstacles without even paying conscious attention to them. You recognised Mr. Jones at the slightest glance through the corner of your eyes, although chances are slim that you had seen him previously in the same pose from exactly the same distance and angle.[2] Common sense tells us everything was normal and as they should have been. Science will help us to frame the right questions and see the extraordinary. Shall we search for the devil in the details?

You enter a toy-shop to buy a present for your nephew. You don't know exactly what kids like these days. Uno, Pokemon, dinosaur models, books of stars... hmm... You are confused. You try to project your own youth onto the frame of your nephew, but you're not entirely convinced. After all, you've seen him jumping up and down in his room with his Wii. You look around the shop. People are busy looking at different items; there are about 25 people in the shop. Are they all buying presents for Christmas? Maybe someone is buying a present for their child's birthday. Maybe more than one of them is buying a present for a child's birthday. Maybe their children share the same birthday. But what are the odds of that happening? Pretty slim, it seems. Forget it! So finally, you decide to pick a telescope for kids. Maybe he will take an interest in astronomy. Let's buy the DVD box set of Cosmos with it as well! After you're done with the billing and gift-wrapping, the girl at the counter tells you that they have a little game for you. She shows you three boxes and tells you that one of them has a present in it while the others are empty. You have to pick one. But here's the catch: after you've picked one box, she is going to show you an empty box from the remaining two, and then you get the chance to switch to the remaining box or stick with your choice. What should you do? Common sense tells us that we either picked the prize-box or we didn't, so what difference does it make to switch? Guess what: your common sense has dumped you again. You would be wrong not to switch the box, and you were wrong in your assumption about the chances of a birthday coincidence. Once again, science will help us out.

You return home. But you can't resist the temptation to open the packaging of the telescope and try it out for yourself. Besides, you need to check that it's working. It's a clear starry night. As you set up the telescope on your balcony, you hear the faint tunes of 'Lucy in the Sky' playing from a distant house. How you love Christmas time! The starlight is coming from light-years away. Some of the stars are like our Sun; some are a lot bigger, some are much younger, and some are approaching senescence. Carl Sagan's voice resonates with the ambience; you are standing on a mote of dust trying to comprehend the nature and purpose of the universe. The astronomical dimensions seem incomprehensible. The ages of stars, the age of life on Earth: these are just numbers to us, intangible. It barely means anything to say that the values of observables are the eigenvalues of Hermitian operators acting on state vectors, yet that is the only way we can represent reality. The world, as we see it, is incredibly complex and almost impenetrable to our common-sense wisdom. As you stand contemplating these things, you suddenly realise that it's Isaac Newton's birthday. He once said, "If I have seen further it is only by standing on the shoulders of giants." Indeed, science augments our ability to understand the universe by rendering our normative, interpretive and teleological common sense redundant.

We are primarily visual animals. In fact, as Richard Gregory pointed out, "We are so familiar with seeing that it takes a leap of imagination to see that there are problems to be solved."[3] Our intuition tells us that our eyes are like cameras, passively mapping the external world onto the retina and, subsequently, the brain. This model of one-to-one mapping was the prevalent view for most of history. Indeed, until the eighteenth century the whole process of perception was thought of as a simple matter of assembling elementary sensations in an additive way, component by component, a view endorsed by the likes of John Locke and George Berkeley. The balance tilted from perception as a passive phenomenon towards a creative model through the rise of Gestalt psychology in the early twentieth century, led by the German psychologists Max Wertheimer, Kurt Koffka and Wolfgang Kohler.[4] The best way to try and appreciate the creative aspects of visual processing is through illusions.

[Image: dot patterns illustrating perceptual grouping by colour and proximity]


In the above image, the top picture presents an ambiguous pattern. In the left-hand column, dots of the same colour are grouped together, which causes the brain to see rows and columns. What's more interesting is that in the right-hand column, the perception of the dots being arranged in rows and columns is created merely by their proximity to the corresponding elements of the left-hand column.



Whenever we see something, some part of the imagery is recognised as the object while the rest is relegated to background. This is famously illustrated by the Rubin's vase,[6] where the viewer switches between seeing two faces and a vase. But the most famous work in figure-ground reversal was probably done by the Dutch artist M.C. Escher, who wrote, "Our eyes are accustomed to fixing on specific objects. The moment this happens everything around is reduced to background.… The human eye and mind cannot be busy with two things at the same moment, so there must be a quick and continual jumping from one side to the other."[5] The Kanizsa triangle illustrates how the brain fills in a triangle when none actually exists. In fact, as proposed by Koch and Crick, the illusory edges of the triangle are neurally represented in the same way as the real edges of a normal triangle.[7] The Müller-Lyer illusion shows how we use shape as an indicator of size and infer that the lines are unequal when in fact they are of the same length.[8]

In the case of afterimages, we see an image after the cessation of the original stimulus.[9] In a somewhat opposite case, motion-induced blindness makes the visible invisible by masking it with overlapping sensations.[10] A more dramatic example is flash suppression, where an angry face is projected onto one eye and a random mosaic onto the other. An observer with both eyes open cannot see the angry face, but shows fear responses in the brain: the angry face somehow fails to reach the brain pathways responsible for conscious perception.[11]

I mentioned these few illusions just to illustrate the idea that visual perception is a complex process involving both bottom-up and top-down processing, with multiple parallel pathways (conscious or otherwise) that 'bind' together to give rise to what we call 'vision'.[12] Or, as Henry David Thoreau said, "The question is not what you look at, but what you see."

What about the box in the shop? Why was it better to switch from your original one? Let's see. There are two possible scenarios at the start: either you picked the box with the prize, or you picked an empty one. Had you picked the prize box, you would lose by switching. But had you picked an empty box, then after she showed you the other empty box, switching would win you the prize. Now, the second scenario is more likely than the first, simply because there were initially two empty boxes and only one lucky box. Common sense, as we know it, would have significantly reduced your chances of winning the prize.[13]
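The argument can be checked by brute force. Here is a minimal enumeration of the game described above (always picking box 0 loses no generality, by symmetry):

```python
from fractions import Fraction

# Enumerate the three-box game: the prize is equally likely to be in
# box 0, 1 or 2; we always pick box 0; the assistant then opens an
# empty box we didn't pick.
switch_wins = stay_wins = 0
for prize in range(3):
    pick = 0
    if pick == prize:
        # Our first pick was right: staying wins, switching loses.
        stay_wins += 1
    else:
        # Our first pick was empty: the assistant must open the other
        # empty box, so the remaining box holds the prize. Switching wins.
        switch_wins += 1

print("P(win | stay)   =", Fraction(stay_wins, 3))    # 1/3
print("P(win | switch) =", Fraction(switch_wins, 3))  # 2/3
```

Switching wins in two of the three equally likely cases: exactly the two-in-three advantage the text describes.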

Now to the birthday problem: what are the odds of a shared birthday among the twenty-five people present in the shop? What we have to understand is that twenty-five people make 300 distinct pairs between them, and what you are actually doing is checking each of those 300 pairs for a match. Suddenly the odds do not seem so unlikely. In fact, a little maths will show that with only 23 people in a room, it is more likely than not that someone shares a birthday with someone else in the room. Seems counterintuitive? Work it out for yourself! [14]
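The 23-people claim is easy to check directly: multiply out the probability that all the birthdays are distinct and subtract from one. A quick sketch (ignoring leap years; `p_shared_birthday` is my own illustrative helper):

```python
from math import comb

def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(comb(25, 2))                      # 300 pairs among 25 people
print(round(p_shared_birthday(23), 4))  # 0.5073 — already better than even
```

With 22 people the probability is still below one half; person number 23 tips it over.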

Now to the comprehension of astronomical dimensions, geological time and the weird nature of the physical sciences. The caveman staring at the night sky had no way of guessing that those tiny specks of light are in fact giant balls of fire like the sun. It took us centuries of research and experimentation to frame a model of the universe that gets closer to the observed reality day by day. Along the way we have rejected millions of hypotheses and are left with something weird, something incompatible with our common sense and perhaps, in the words of Haldane, 'queerer than we can suppose'. Tiny mutations chiseling self-replicating molecules over billions of years can create sentient beings. We cannot envisage this, yet all the evidence points to it. Relativity resembles nothing in everyday experience, but it explains what we see. Trying to make it simpler by dumbing it down will not preserve it. As Einstein is said to have put it, "Make things as simple as possible, but not simpler." Quantum mechanics is absurd, but it is the closest we have ever been to the truth. Common sense is not science. In fact, one of the purposes of science is to help us see through the veil of common sense.

As the optical illusions show, naive realism is exactly what its name says: naive. Our behaviours are the result of learning through experience chiseling away at a broad, plastic canvas shaped by evolution. Our sensory inputs are highly processed and programmed to confer some sort of evolutionary and/or adaptive advantage needed for survival, and even then they are under no obligation to conform to reality. We need an objective mechanism to probe the nature of reality, to unweave the fabric from which nature weaves her tapestry. Clearly, our senses alone are not equipped to do that.

We are piss-poor at estimating the probabilities of events, as our long reliance on the supernatural documents. Unable to work out the actual figures, we ascribe natural phenomena to the supernatural. Our propensity to see miracles in ordinary events and the prevalence of superstitions across cultures both stem from this lack of understanding. In our arrogant ignorance, we feel proud to inhabit this tiny little planet in this vast universe and assume that we hold a special position in it.

In fact, if we can shed these primitive notions of ours and adopt a scientific viewpoint, we'll see that the world as seen through our common-sense psychology is nowhere close to reality. Only through the scientific method can we devise a meaningful model of the universe that bears any resemblance to the truth.

I started with a Christmas theme. Having grown up in India, I'm not at all familiar with the Christmas spirit of the predominantly Christian nations of the northern hemisphere. It may sound like nonsense, but if I were to make a wish for Christmas, I'd wish for a world where every child gets the opportunity to see the world through a scientific lens, free of dogma. Trust me, it's much more beautiful that way.

Merry Christmas!




Further Reading:

[1] Taken from http://en.wikipedia.org/wiki/Canon_(music) Pachelbel and Douglas Hofstadter somehow had sex in my mind to give birth to the idea of framing this essay loosely on the style of a canon. :shifty:

[2] Face recognition is a highly evolved mechanism in humans, with different circuits responding to various faces, and this can be probed with electrodes. In fact, a series of wonderful experiments is being done by Koch and his team, one of which (the Marilyn Monroe neuron) was wonderfully covered by Carl Zimmer here. In monkeys, neurons in the inferotemporal cortex responding to specific viewing angles of a face have also been identified (by D. Sheinberg and N. Logothetis). It is something of a dumbing-down to say that there are specific neurons for these purposes. When I asked Mo Costandi about this, he replied, "Each cell is a node in a sparsely distributed network encoding memories of Marilyn Monroe. It likely responds to other stimuli under different circumstances. Overlapping distributed networks; no grandmother (or MM) cell." The full discussion is beyond the scope here, but we can appreciate the difficulty of identifying a face under almost infinitely varied circumstances. This is painfully manifest in patients with prosopagnosia. For a touching account, read the case of Dr. P in Oliver Sacks' "The Man Who Mistook His Wife for a Hat".

[3] Richard L. Gregory; Eye and Brain ; 1966

[4] I got these primers about the origins of these schools of psychology from Kandel and Schwartz. But a better place to look at would be the Wikipedia page (http://en.wikipedia.org/wiki/Gestalt_psychology) and the links therein.

[5] Maurits Escher's paintings are works of pure genius. Do check them out! :cheers:

[6] http://en.wikipedia.org/wiki/Rubin_vase

[7] http://en.wikipedia.org/wiki/Kanizsa_triangle Francis Crick and Christof Koch proposed the activity principle which states that underlying every direct and conscious perception is an explicit representation whose neurons fire in some special manner. So, for the illusory edges of the triangle, there will be one or more groups of neurons explicitly representing the different aspects of this percept.

[8] http://en.wikipedia.org/wiki/M%C3%BCller-Lyer_illusion

[9] http://en.wikipedia.org/wiki/After_image

[10] Motion Induced Blindness is a wonderful gateway to the neural correlates of visual consciousness. These are loaded terms and are beyond the scope of discussion here. But I'll tell you to look at this video to get an idea of how we are using these glitches in the matrix to try and understand its inner workings. :grin:

[11] Read Koch's article in the Scientific American for a brief description of this wonderful experiment. Especially read the part where you'll be sexually stimulated in a Freud-esque way. ;)

[12] http://en.wikipedia.org/wiki/Binding_problem The binding problem has somewhat lost its glamour over the years but it still remains a favourite among a certain school of philosophers (or at least one that I know) :P .

[13] The wikipedia article on the Monty Hall Problem is near perfect! http://en.wikipedia.org/wiki/Monty_Hall_problem

[14] http://en.wikipedia.org/wiki/Birthday_paradox For a lively discussion on Probability, Randomness and the Birthday Paradox, listen to the BBC Podcast here. Beware though, it features Brian Cox, Tim Minchin and Alex Bellos! :grin:
natselrox
 

Re: 2nd Monthly Science writing Competition - Submissions

#9  Postby katja z » Dec 20, 2010 10:40 pm

»The purest Sillian is spoken in the region of Dunts.«

»The purest English is spoken in the Appalachians/Inverness/on the BBC.«

»The purest French is spoken in the Loire valley.«

»The purest Spanish is spoken in Salamanca/Lima.«

»The purest Portuguese is spoken in Coimbra.«

This is a small selection from the collective wisdom of the internet. The list could go on ad nauseam, until we had run out of natural languages spoken today – and given that there are about 5,000 to 6,000 of them (the exact count depends on the definition of language, which is not nearly as clear-cut as most people imagine), this might take quite some time. Before undertaking such a mammoth task, then, it is worth asking whether this kind of common-sense statement actually means anything. It would appear that common sense says yes. I will argue that it may be common, but sense it is not.

Defining the myth

»Purity«, in fact, is one of those mythical beasts which everybody sort of knows about, but on which nobody can really put a finger. Possibly for fear of leaving dirty fingerprints? Be that as it may, as irreverent, rationally inclined thinkers we must insist on prodding it a bit and seeing what happens. First, we need to ask what people might be referring to when they make judgements about language purity, even though they are often vague on this themselves. Secondly, we will turn to the reality of language use and check these judgements against it.

Some years ago, the following question was asked at the WordReference forum: »I would also like to know what region is considered to be the zone in which the 'purest' English is spoken.« When challenged to explain his meaning, the poster added: »Every single language has a place where the language is supposed to be spoken best. (I'm not saying that I agree with this statement, but it's common knowledge, and it almost always coincides with the place where the language was born.)«

The two senses of the word »pure« put forward in this explanation, »purity« as referring to quality and »purity« as referring to origin, are not unrelated, but I find it useful to try and consider them separately. The first one is straightforward enough. My Collins dictionary says that »pure« can refer to »a perfect example of its type« or something that is »produced or practiced according to a standard or form that is expected of it«. In this sense, the purest Sillian is the most perfect form of Sillian, characterised by correct grammar, appropriate vocabulary and, very importantly, good pronunciation. The opposite of »pure« in this case would be »bad«, »broken« or »corrupted«. The second sense is somewhat akin to the concept of racial purity: pure as purebred, unhybridised, unmixed: the Sillian spoken in Dunts is pure in so far as it is true to the original Sillian, to its spirit and its true nature. The opposite of »pure« in this sense would be »hybrid«, »bastardised«. In both cases, the assumption is that there is one way to speak Sillian properly; the forms of the language that most people use are deviations from or degradations of this one proper template: a linguistic treason.

I will first try to explain just why this notion is silly, and not only for my newly invented language, Sillian, by drawing on sociolinguistics and historical linguistics; in conclusion, I will point out why you do not have to be a dunce from Dunts to fall for this particular myth.

The bogeyman of good language and linguistic variation

For any language, at any point in its history, there are a number of ways of using it. In sociolinguistics, we speak of lects: regional (dialects) and social (sociolects) linguistic forms used by specific speech communities, defined by factors such as social status, age etc. Given the definition, it comes as no surprise that lects are not discrete homogeneous forms either. They could be better described as only partially-overlapping clusters of linguistic features that shade into other lects: linguistic variation is often clinal. Lects may differ in any or all of the following: pronunciation (»accent«), choice of vocabulary, grammatical features, idiomatic expressions, pragmatic conventions etc. These language varieties are constantly-changing products of the speech communities' communication needs, which may include their identity strategies.

Well and good, but what is the relation between lects and what I will, for want of a better word, continue to call a language? How do we get from what is spoken in a Yorkshire village, in inner-city Lagos, or by members of the Queen's English Society, to »English«; from the speech of youngsters in a Paris banlieue to »French«? For analysing the emergent global system of world languages, a gravitational model has been suggested, in which languages are linked among themselves into regional constellations of influence by bilingual speakers (De Swaan, in Calvet 59). The same model can be used for micro-linguistic relations: lects are interlinked through »multilectal« speakers, whose communication practices make up the dynamic network that is their language. For depending on factors such as age, gender, profession, social background and social trajectory, each speaker has their own repertoire of a number of linguistic varieties that they use in different communication contexts (as well as some unique features, their own signature style, which is referred to as idiolect). In this sense, almost no-one speaks only one language, and no two »native speakers« speak the same language; and what is typical of a certain language across its varieties is a statistical effect of concrete language practices, not an effect of an underlying One True (or Pure) Form.

What is codified in grammar books and general dictionaries is not the whole spectrum of language practices, but the standard language. From the sociolinguistic point of view, this is just one lect among others. It is not even anybody's mother tongue (even though some sociolects are much closer to it than others); it is the variety we typically acquire through formal education and use in formal contexts: when applying for a job, writing a newspaper article or a novel, presenting our work at a conference, or speaking on television. But it is hardly the most appropriate form in all communication situations. If Eliza Doolittle went for a beer with her childhood friends and insisted on using her hard-won standard English, the effect would not be one of language purity or correctness, but of artificiality – and, very probably, of arrogance and snobbery as well.

How a standard language is born is a complex story, one that, for reasons of space, I can only begin to tell. But the beginning, even simplified, is instructive. A standard language is typically based on a prestigious sociolect such as the language of a court (»the Queen's English«, »le français du roi«), or of other influential social groups, typically city dwellers. It begins as just one way of speaking among others in a continuum of mutually intelligible dialects, but as it gathers influence, it begins to influence this continuum. Cities and/or courts, as centres of economic activity, political power and social prestige, act as gravitation centres for populations; and their dominant linguistic forms, therefore, act as gravitation centres for their plethora of lects, becoming the standard of »good« language practice, with far-reaching consequences for linguistic practices and representations (that is, the ways people think about different language varieties) within their spheres of influence. Depending on the geographic distribution of such centres, and on the identity politics of populations, such a dialectal continuum can be cut up into several chunks which are then labelled as distinct »languages«. Take, for instance, the Western-Iberian dialectal continuum, which shades into Portuguese on one end and Castilian on the other (and which is itself part of a wider continuum of Western Romance languages): nowadays, several other languages (Galician, Mirandese, Leonese, Asturian) have been recognised in the region, and the emergence of their new standards has restructured the perception of which dialect »belongs to« (and conversely, »deviates from«) which language. These are extremely difficult questions if you insist on the notion of languages as discrete entities, each with its own ideal type – but they are artefacts of a flawed perspective.
In reality, there is no one inherently »correct« way of cutting up the continuum of real-life linguistic practices into »languages«, only more or less practical ones. But this more precise model of language variation means that we have just thrown any notion of pure Sillian out of the window.

We are left with the conclusion that it is extralinguistic factors that determine which central, prestigious variety the speakers of a particular dialect will gravitate towards, subject to shifting social and political landscapes. It is these factors, then, that account for the integration of speech communities into a wider sociolinguistic community which comes to be dominated by a common standard variety. It is also these extralinguistic factors that influence language representations. There is a considerable body of research showing that social attitudes towards particular social groups are translated into judgements of the correctness and/or aesthetic qualities of their language (cf. Preston, and Giles & Niedzielski). We learn how to think about the various social dialects at the same time as we learn to use them – or even before and independently of that: many people who are unable to use the variety socially accepted as the standard, or »best«, form of their language will still use it (or their idea of it) as the yardstick to evaluate their own linguistic performance, a situation referred to as linguistic insecurity.

The spectre of authenticity and language evolution

By now it should be at least partly clear that the supposed original purity of Sillian – which I will refer to as »authenticity« – cannot fare any better under critical scrutiny than the purity of its form. I have already touched upon the subject of language evolution, but now is the time to take a more decided plunge into the subject.

On the face of it, language evolution seems straightforward enough. Modern English comes from Old English (with a messy episode of Norman French influence that complicates the story) and that, in turn, comes from Proto-Germanic. French and Spanish and the rest of the Romance languages come from Latin. (The FSM knows where Sillian comes from.) In fact, it is not as simple as that; if it were, many a historical linguist would be out of a job. To start with, there is no single point in space and time at which a new language is born. The sudden emergence of languages in recorded history is an artefact of sparse evidence. This is not surprising, since spoken language does not fossilise, and until very recently the use of writing (itself a relatively recent invention) was extremely limited. This also meant that only the most privileged language varieties ever made it into writing: the written record, far from reflecting the whole variability of language use, is heavily biased.

The metaphor of a branching tree that has often been used in historical linguistics is misleading. It disregards the basic fact that has already been stated: in any language, at any point in its history, there are a number of ways of using it. These linguistic practices keep changing as populations come into contact with each other, blend, shift to another lect (of the same language or of another one, in so far as the distinction makes sense), selecting or rejecting competing linguistic features in the process and thus gradually restructuring their language (cf. Mufwene). When an innovation appears in the written language, we can be almost certain of two things: that this feature had existed for some time in the language in competition with other features, spreading slowly and gradually becoming accepted in the prestigious dialect; and conversely, that the inclusion of this innovation into the standard language does not mean it has won acceptance in all of the lects within its sphere of influence.

So the evolution of any »one« language is not unilinear. This is not even true of its standard dialect. Thus, within the range of linguistic forms that we call Old English there were at least four prestigious varieties, corresponding to the centres of independent kingdoms. West Saxon became the dominant written form following the political unification under Alfred the Great, but lost this status after the Norman Conquest. The new standard written form which emerged in the 15th century, the so-called Chancery Standard, was not directly related to the late OE standard; rather, it was based on the language spoken in London (and specifically, at the royal court, where the use of English had begun to predominate at the time of King Henry V).

What, then, might be considered the historically pure origin of English, its authentic original form? Shakespeare is often credited with writing the »purest« English, but is that a logical choice? His Early Modern English, after all, is the result of a strong admixture of Norman French. Surely the Late West Saxon, which was used to record Beowulf, is the purer form? Or even some earlier stage in the continuous process of linguistic change, one uninfluenced by the Old Norse – but which? Or maybe we should look, not (only) further back in time, but (also) elsewhere, to nonstandard varieties, many of which have conserved traits of Old English that have disappeared from the standard dialect? Thus, when trying to define a »truly authentic« form of a language, we get caught in a truly nightmarish form of endless recursion. With language, it is not only turtles all the way down; it is also turtles all the way sideways.

Once again, the seemingly self-evident judgement of linguistic purity is based on purely extralinguistic factors. The two most important ones in this case are cultural prestige and the constructed national history. This is why no-one today would seriously argue that proto-Indo-European was the golden age of their language – as opposed to the Elizabethan era. But the emergence of modern nations as imagined communities (Benedict Anderson), and the role played in this process by the teaching of standard language and of literary canon, makes for a longer and more complicated story than can be told here.

Conclusion

Both senses of the word »purity« that I have identified rest on the notion that languages are discrete homogeneous entities. The continued hold of this notion is not simply the result of linguists' failure to communicate their findings to the general public (as pointed out by Bauer and Trudgill). For one thing, a large part of 20th-century linguistics built on just this assumption: this is true both of Saussurean linguistics, which treats language as a relational system, and of Chomskian linguistics, whose subject is »the ideal speaker-listener in a completely homogeneous speech-community« (Chomsky 3). These abstract approaches do have their uses (Haugen called this model a »useful fiction«; Calvet 10). They cannot, however, even begin to account for the complexities (or messiness, if you prefer) of real-world communication in natural languages; you might as well try to explain evolutionary adaptation without taking into account the ecology of a given species. What they get wrong is the very simple fact that »language« is an abstraction extracted from human communication practices – and not vice versa (cf. Calvet 6).

It is, however, true that for most people, their early encounter with prescriptive grammar in school remains their only contact with any kind of a formal approach to language – despite the fact that modern linguistics is overwhelmingly descriptive (that is, it attempts to describe how language functions, not legislate on how it should function). The prescriptivism informing language teaching in school naturally feeds into the purity myth: children are taught the correct way to use the language, with the implication that what they have learned to speak natively and continue to use in informal everyday communication is an imperfect form of this language, fallen from the grace of its original purity. The use of words such as »corrupted« to describe nonstandard dialects only serves to strengthen this perception.

And finally, it must be admitted that the kind of statements quoted at the beginning do refer to something very real – to an important aspect of the speakers' social reality. It is true that in every linguistic community, there are language varieties which are socially more desirable than others, for as we have seen, language representations are intimately linked with social attitudes. This is what makes myths about language so incredibly hard to shake off. On one level, they may be based on a misconception. But on another, as social constructs, they are part of the social and linguistic reality. As such, linguistic representations and attitudes have very real effects indeed, both on the speakers' (self-)perception and (self-)evaluation, and on linguistic and education policies.

References:

Bauer, Laurie and Peter Trudgill (eds). Language Myths. ePenguin, 1998. N. pag. E-book.

Calvet, Louis-Jean. Towards an ecology of world languages (tr. Andrew Brown). Cambridge, UK, and Malden, US: Polity, 2006. Print.

Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge: The MIT Press, 1965. Print.

»The Cradle of English«. WordReference. Web. 16 Dec. 2010.

»Dialect Continuum«. Wikipedia. Web. 16 Dec. 2010.

Giles, Howard and Nancy Niedzielski. »Italian is Beautiful, German is Ugly«. Language Myths. (Bauer and Trudgill, eds). E-book.

»Middle English.« Wikipedia. Web. 20 Dec. 2010.

Mufwene, Salikoko. »Population Movements and Contacts in Language Evolution«. Journal of Language Contact, THEMA 1 (2007). 63–92. Web. 3 Dec. 2010.

Preston, Dennis R. »They Speak Really Bad English Down South and in New York City«. Language Myths (Bauer and Trudgill, eds.). E-book.

»Pure«. Collins Cobuild English Dictionary. London: HarperCollins, 1995. Print.

»Variety (Linguistics)«. Wikipedia. Web. 16 Dec. 2010.

Re: 2nd Monthly Science writing Competition - Submissions

#10  Postby twistor59 » Dec 22, 2010 5:13 pm

Winging it


The myth I want to talk about is something that I was taught in a physics class in high school many years ago. Being a science geek at the time, I read a lot of popular books and saw this myth repeated in print more than once. The myth goes like this:

“In order to ensure lift is created, an aircraft wing is curved at the top and flat at the bottom. This means that the air has further to travel over the top and hence has to travel faster than the air flowing past the bottom. The air travelling faster causes a reduced pressure at the top of the wing due to Bernoulli’s principle and hence lift is created.”

I will present two counterexamples to refute this explanation, and then will discuss the contents of the myth and highlight the right and wrong statements.

Firstly the counterexamples. Figure 1 shows a picture taken at RIAT (the annual airshow at RAF Fairford) of a pair of F16s from the US Thunderbirds display team. Immediately we see a problem with the given explanation. If it were true, an aircraft flying upside down should experience negative lift, since the curved surface is now on the bottom. This would be problematic for the display team!

F16.jpg
Figure 1


Figure 2 shows a cheap little balsa wood model (it’s actually a glider, but I remember building models like that powered with an elastic band and propeller). The problem here is that the wing cross section is not aerofoil shaped – it’s flat on both top and bottom, so Bernoulli is not around to help us. (Incidentally, being a Brit I will use British terminology, i.e. aeroplanes and aerofoils rather than airplanes and airfoils).

Balsa.jpg
Figure 2


The Bernoulli Principle – Cause and Effect

Firstly let’s examine the feasibility of using the Bernoulli principle to explain lift. The idea is that the fast-moving air over the upper surface causes a reduction in pressure there which “sucks” the wing upwards. In the aerodynamic case, Bernoulli’s equation basically says that

P + (1/2)ρV² = constant along a streamline


where P is the air pressure, ρ is its density and V is the velocity of the flow. Note that, although this equation is applied along a streamline, it can be extended to compare pressures between streamlines if we make the assumption that the flow is irrotational and the height difference between the streamlines is small [1]. From this equation, we can see that if the velocity of the flow over the wing is higher than the velocity of the flow under the wing, then the pressure over the wing must be lower than the pressure under it.

It is important to see that this equation merely states an inverse relationship between the two quantities, i.e. regions of low pressure are correlated with regions of fast fluid flow and vice versa. It does NOT say “fast fluid flow causes low pressure” or “low pressure causes fast fluid flow”.
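To get a feel for the magnitudes involved, here is a quick numerical sketch of that pressure relation. The flow speeds, wing area and density below are made-up illustrative values, not measurements from any real aircraft:

```python
RHO = 1.225  # sea-level air density, kg/m^3

def pressure_difference(v_over, v_under, rho=RHO):
    """From P + (1/2)*rho*V^2 = constant:
    P_under - P_over = (1/2)*rho*(v_over^2 - v_under^2)."""
    return 0.5 * rho * (v_over**2 - v_under**2)

# Hypothetical flow speeds: 70 m/s over-wing, 60 m/s under-wing.
dp = pressure_difference(70.0, 60.0)
print(f"pressure difference: {dp:.0f} Pa")                 # 796 Pa
print(f"force on a 20 m^2 wing: {dp * 20 / 1000:.0f} kN")  # 16 kN
```

Note that, in keeping with the cause-and-effect caveat above, this only relates the two quantities; it says nothing about which causes which.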


Fallacious Origin of the Faster Over-wing Flow

Perhaps the greatest fallacy in the popular explanation of lift is the following statement, a desperate attempt to account for the fact that the over-wing flow is faster than the under-wing flow:
“The air flowing over-wing has a greater distance to travel compared to the air flowing under-wing, and hence must flow faster.”

The incorrect assumption here is that the pockets of air which got separated at the leading edge must somehow “meet up” again at the trailing edge. A picture of what really happens is shown in figure 3.

flow.jpg
Figure 3


Here the small coloured lines represent imaginary puffs of smoke injected by a vertical array of nozzles to the left of the leading edge. The puffs go off at regular time intervals, and each puff lasts a fixed duration. So at one instant the nozzles produce a band of red puffs. With no wing in the way, each band would stay together forever. Looking at the earlier puffs in figure 3, the orange ones have just hit the wing and started to separate top and bottom. In the green and blue cases, you can see that the top flow has moved much further along the wing than the corresponding-coloured bottom flow. It is clear that the top and bottom halves remain separated – they never meet again – so the requirement to meet up after travelling the longer over-wing distance cannot be the explanation for the faster over-wing flow. Note there is a nice animation of this flow in the Wikipedia entry [2].


True Origin of Lift

There are two contributions to the lifting force:

(i) The pressure differential between the upper and lower wing surfaces.
(ii) The reaction force caused by the fact that the air leaving the wing has a downward component to its momentum.

Looking at these in turn:

(i) Imagine a small cube of air passing along an over-wing streamline. Since the streamline is curved, the cube will exhibit a “desire” to move in the direction normal (and outwards) to the wing’s upper surface (just as a bucket of water whirled on a rope pulls outwards, away from the centre of its circular path). This effect causes a reduction in the over-wing pressure.

Conversely the pressure under-wing is increased due to the compression of the air resulting from the forward wing motion combined with the angle of attack.

The difference in pressure above and below the wing gives rise to a net force which contributes to the lift.

(ii) The airflow splits into two halves – one passes over-wing and one under. If there is a non-zero angle of attack, or if the angle of attack is zero but the wing cross-section has the classic curved aerofoil shape, the air leaving the wing will have gained a downward component to its momentum. Newton’s third law implies that the wing will, in turn, experience an upthrust, contributing to the lifting force.
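Contribution (ii) can be put into rough numbers using the momentum-flux form of Newton’s laws: lift equals the mass of air deflected per second times the downward velocity it acquires. The figures below (airspeed, the “capture area” of air the wing effectively turns, and the downwash angle) are invented purely for illustration:

```python
from math import radians, sin

RHO = 1.225  # sea-level air density, kg/m^3

def reaction_lift(airspeed, capture_area, downwash_angle_deg, rho=RHO):
    """Upward reaction = (mass flow of deflected air) x (downward speed gained)."""
    mass_flow = rho * airspeed * capture_area             # kg of air per second
    v_down = airspeed * sin(radians(downwash_angle_deg))  # downward velocity component
    return mass_flow * v_down                             # newtons

# Hypothetical: 60 m/s airspeed, 30 m^2 of captured airflow, 5 degrees of downwash.
print(f"{reaction_lift(60.0, 30.0, 5.0) / 1000:.1f} kN")  # 11.5 kN
```

Even these modest made-up numbers give a force in the right ballpark for a small aircraft, which is why deflecting air downwards is no mere footnote to the Bernoulli story.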


The Counterexamples

Now that we know the true cause of lift, we can see how the counterexamples (flat wing cross section or inverted flight) fly.
Provided there is an angle of attack and the aircraft is propelled forward, the pressure differential between the upper and lower surfaces will exist regardless of the wing cross section (within reason!). Similarly, the angle of attack will ensure that the over-wing air has a downward vertical momentum component as it leaves the trailing edge, thus invoking the upthrust. Instrumental in this is the fact that the streamline closest to the upper side of the wing will try to follow the contour of the wing surface and hence will emerge downwards if the wing is angled upwards.


Why the Aerofoil?

We have seen that the factors contributing to the generation of lift do NOT require the aerofoil cross section. Why, then, is this cross section almost universally employed in wing design? The answer is efficiency. Although not absolutely necessary for lift, the aerofoil makes for a more efficient wing design.

To see this, consider a flat wing being propelled forwards. The angle of attack results in a downward momentum component of air flowing off the trailing edge and hence gives a contribution to lift. Increasing the angle of attack increases this lift, but there is a penalty. On the underside of the wing, the pressure build-up is greater with higher angles of attack. This pressure, of course, acts horizontally on the underside of the wing as well as vertically. The horizontal component of the force resulting from this pressure is experienced as drag, acting in the opposite direction to the driving force.

For normal (i.e. not takeoff or landing) flight, it is advantageous to reduce this drag as much as possible. To do this, we wish to reduce the angle of attack, but maintain a sufficiently high downward vertical momentum component of air flowing over the upper surface. The classic aerofoil design is efficient at achieving these objectives.
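The lift-versus-drag trade-off described above can be made concrete with a simplified flat-plate model: if the pressure force acts normal to the plate, tilting the plate by the angle of attack splits that force into a vertical (lift) and a horizontal (drag) component, with drag/lift = tan(angle). The force magnitude below is an illustrative assumption; in reality it also grows with angle of attack.

```python
# Hedged sketch of the drag penalty for a flat wing: the pressure force is
# taken as normal to the plate, so the angle of attack trades lift for drag.
import math

F = 10000.0  # magnitude of the pressure force on the underside, N (assumed)

for alpha_deg in (2, 5, 10, 15):
    a = math.radians(alpha_deg)
    lift = F * math.cos(a)   # vertical component supports the aircraft
    drag = F * math.sin(a)   # horizontal component opposes the thrust
    print(f"{alpha_deg:2d} deg: lift {lift:7.0f} N, drag {drag:6.0f} N")
```

The ratio drag/lift grows as tan(alpha), which is why cruising flight favours small angles of attack and leaves the aerofoil shape to do the work.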


Stalling

Air flowing over the wing surface has a tendency to follow the contours of the surface [3]. To understand this, consider the fact that air has a small viscosity. The air layer “in contact” with the surface has a velocity relative to the surface of zero. As you look further out away from the surface this relative velocity will be higher, i.e. there is a velocity gradient. The consequence of this is that the flow will tend to curve towards the surface. This is known as the Coanda effect [4].
The layer of air immediately adjacent to the surface is known as the boundary layer. Under certain circumstances, the boundary layer separates from the surface. When this happens, the lift contribution from the upper wing vanishes and the aircraft stalls.


Summary

• The generation of lift from an aircraft wing is a complex process, which can only be comprehensively described using the principles and equations of fluid mechanics.
• There are two contributions to the lifting force – the downward momentum of air leaving the trailing wing edge, and the pressure difference between the underside and top side of the wing.
• The popular “equal transit time” explanation of the faster over-wing flow velocity (the claim that the air above must remain correlated with the air below and rejoin it at the trailing edge) is incorrect.
• In researching this issue using internet sources, great care must be taken to select sources which do not display any unreasonable bias. Amongst the popularisers, there is sometimes a tendency to endorse one ingredient in the explanation and refute all the others. To avoid this distortion, I’ve drawn my material from reputable authorities, such as NASA and Fermilab physicists.


References

[1] http://wright.nasa.gov/airplane/bern.html NASA Description of Bernoulli Equation

[2] http://en.wikipedia.org/wiki/Lift_%28force%29 Wikipedia Entry on Lift

[3] http://home.comcast.net/~clipper-108/lift.htm Anderson and Eberhardt “A Physical Description of Flight”

[4] http://en.wikipedia.org/wiki/Coanda_effect Wikipedia Entry on Coanda Effect



Re: 2nd Monthly Science writing Competition - Submissions

#11  Postby theropod » Dec 23, 2010 9:58 pm

Popular misconception:

"All Prehistoric Beasts were Dinosaurs, and They Were All Huge"

In popular culture there exists a notion that any large animal that existed in the distant past was a dinosaur, and that all dinosaurs were huge lumbering giants. This group of misconceptions has been extended to cover anything from the Permian pelycosaur Dimetrodon to the Pleistocene woolly mammoths. Pterosaurs, as well as marine reptiles such as ichthyosaurs and mosasaurs, have also been lumped into the false dinosaur group. This erroneous idea is the direct result of a lack of proper education, combined with the efforts of those holding an agenda to discredit the hard science of vertebrate paleontology. Books aimed at children often depict these other creatures as dinosaurs, and the notion sticks with people for the rest of their lives.

In order to understand why this is a problem we should first examine the morphological features that either include or exclude a creature from the group of organisms classified as dinosaurs. While it would be interesting and educational to examine the evolutionary pathways that led to the emergence of dinosaurs, that aspect of the matter is not the focus of this effort. Dinosaurs exhibit a specific set of features that distinguish them from all other creatures that have ever lived. While dinosaurs share a great many characteristics with other animals, they also have exclusive skeletal elements found only in dinosaurs. Most notable of these features is the pelvic structure. At some point in the evolution of dinosaurs a split occurred wherein two great groups of dinosaurs emerged: the "bird-hipped" Ornithischians and the "lizard-hipped" Saurischians.

The common names of dinosaurs don't help the lay person gain any better understanding of what constitutes a dinosaur. Usually when such people think about dinosaurs they see one term or the other and come to the false conclusion that the dinosaurs with "lizard hips", for example, are somehow just big lizards. Even the term dinosaur translates to "terrible lizard". Partly this confusion arises from the fact that the earliest paleontologists didn't really understand the fossils they were finding and describing. There is no need to place blame here, as without a base upon which they could build they were forced to work from what was already known. One of the first described dinosaurs, Iguanodon bernissartensis, was named the way it was because the material looked, to those early workers, like a large version of known extant lizards. In hindsight these early efforts were clearly in error, but those workers didn't have the luxury of our current knowledge base.

The structural morphology which supported the weight of dinosaurs during locomotion is the key feature that sets them apart from all other creatures. In all other reptiles, including lizards, the legs are splayed apart in a bowing fashion, whereas in dinosaurs the legs reside directly under the body like pillars. Both the Ornithischians and the Saurischians display this characteristic, but differ in the supporting pubic skeletal elements. In the Saurischian, or "lizard-hipped", dinosaurs the pubis is oriented more toward the front of the animal, while in the Ornithischian, or "bird-hipped", dinosaurs the arrangement of this feature is modified to favor an orientation toward the rear.

Among the Saurischians resides one of the more commonly known groups of dinosaurs, the theropods. This group includes the giant Tyrannosaurus rex, which is not the largest theropod ever to exist, as well as the diminutive Microraptor zhaoianus, which had feathers on all four limbs and was a possible insectivore, although the type specimen contained ingested small mammalian skeletal elements that were discovered during preparation. The very largest land animals ever known, the sauropods, also fall within this classification. Evidence of the evolutionary relationship between Saurischians and extant birds is quite robust, and quite interesting, but again is not the focus of this subject.

Among the Ornithischians the horned Ceratopsia, armored Ankylosauria and well known duck billed Hadrosauridae all reside. Most, if not all, of these dinosaurs were herbivores.

When one considers the classification of a creature to determine whether it lies within Dinosauria one must also take into account the temporal setting in which the fossil was deposited. While many of the organisms incorrectly lumped into Dinosauria were extant over the span of time when true dinosaurs lived, this alone is an insufficient qualification. The pterosaurs and marine reptiles shared the environment with dinosaurs, but they shared none of the key skeletal elements and in fact differ greatly from the accepted standards cited earlier. It is also interesting that these animals suffered the same fate as the dinosaurs they lived alongside when the end of the Cretaceous period came. This temporal restriction automatically eliminates those creatures that preceded dinosaurs and those that arose after dinosaurs became extinct. Excluding birds, which evidence strongly suggests are a highly derived line of theropods, no good evidence exists that any non-avian dinosaur survived the Cretaceous-Tertiary boundary. Obviously the mega-mammals that arose after the K-T event fail to meet the temporal requirements, as well as lacking the skeletal construction true dinosaurs display.

In conclusion, it becomes obvious that the dinosaurs were a highly specific group of creatures that were very successful over a very long span of time, and that lumping any prehistoric animal into this grouping is a mistake. While a great deal of progress has been made in educating the general public, a continuing effort must be undertaken to ensure that children, and interested adults, are not allowed to think of dinosaurs as anything other than the very specific creatures that lived and died so long ago.

References:

Weishampel, D.B., P. Dodson, and H. Osmólska (eds.). 1992. The Dinosauria. University of California Press, Berkeley.

Carpenter, K. and P. J. Currie, eds. 1990. Dinosaur Systematics: Perspectives and Approaches. Cambridge University Press, Cambridge.

Re: 2nd Monthly Science writing Competition - Submissions

#12  Postby palindnilap » Dec 24, 2010 2:47 pm

Usage: Section 1 sets out the essentials of the whole article. Each of the next three sections is then standalone and can (hopefully) be skipped without damaging understanding.

"See, I was right"
Debunking the outcome bias through playing backgammon


A pattern caught your attention while you were reading your local newspaper. First you turned to the report of yesterday's football, in which Blue beat Red 1-0 in a rather lucky way. Red played fine but didn't manage to score, hitting the bar three times instead. Blue, on the other hand, produced almost no play but was awarded a very controversial penalty by the referee. Although the journalist objectively acknowledges Blue's sheer good luck, he also heaps high praise on Blue's trainer for his tactical choice of going all out for defense; Red's trainer, on the other hand, is criticized for having selected the attacker who happened to be involved in all three of Red's near misses. But hadn't the journalist said beforehand that Red deserved to win?

Well, one doesn't need to be the brightest bulb in the box to become a sports columnist, or so you think while turning to the local column, which features the trial of a drunk driver who has killed a kid on a pedestrian crossing. You rejoice at first about the two-year prison sentence and the new-found severity in dealing with irresponsible driving. But you are then reminded of a case one year ago, when another drunk driver narrowly avoided a similar accident because a passerby saw it coming and managed to push the kid aside at the last second. That driver only had his driving licence suspended for six months. Aren't you beginning to ask yourself whether there isn't something fishy in the way we assess our own acts after their consequences have been revealed?

1. The outcome bias in a backgammon endgame

The answer is: there is something fishy about it indeed, fishy enough to have been labeled the outcome bias by psychologists. The outcome bias is the tendency to "judge a past decision by its ultimate outcome instead of based on the quality of the decision at the time it was made, given what was known at that time" [6]. As your newspaper showcased, that bias is extremely common, and it deeply affects our ethical judgments and our legal system [1].

The clearest way to debunk the outcome bias is with the help of games that mix skill and chance. Besides being fun, those are probably the best available mind tools for rational choice, since they set players in a form of competition for rationality.

Backgammon will be my game of choice. If you have never played it, don't panic! To understand the examples you will only need to know the basic rules of checker movement and the use of the doubling cube; this page does a pretty good job of explaining those ("Movement of the Checkers", rules 1-3, "Doubling", paragraphs 1-2).

[Image: outcome_bias_position_1.jpg]

Here is the last roll in a $10 money game, both sides being almost done bringing their checkers home (they end their race at the right of the board). As White you are a huge favourite, since you will win unless you roll very small, e.g. a 2-1. Seeing that, you offer the doubling cube to Bigfish. Then a classic scenario occurs: Bigfish accepts the cube, you tell him that he should have declined and lost only $10, you roll 2-1 and curse, he wins $20 and tells you the infamous "See, I was right to accept".

Outcome bias, you think1, but what can you do about it? After all, it is not obvious that Bigfish really made a mistake; sure, he was an underdog to win the game, but on the other hand, compared with the alternative of giving up $10, losing $20 instead is not as bad as winning $20 would be sweet. In order to find out what the rational decision would have been, we need a simple probabilistic computation.

First of all, out of 36 rolls there are 7 that fail to win the game: 1-1, 2-1, 1-2, 3-1, 1-3, 3-2 and 2-3 (2-2 is fine since four twos can be played). When Bigfish accepts the cube, he will win $20 if you roll one of those, and lose $20 if you roll any of the 29 other rolls. Suppose that you play that position 36 times and that Bigfish accepts the cube every time. On average he will win $140 (7 × $20) and lose $580 (29 × $20); he will end up $440 poorer. On the other hand, if Bigfish declines the cube every time, he will lose only $360 (36 × $10). That is why accepting the cube was indeed a poor decision. One can even quantify how poor: accepting the cube costs exactly $80 / 36 = $2.22!
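The arithmetic above can be checked in a few lines. This is just the text's own computation restated; nothing new is assumed.

```python
# Expected-value check for the cube decision: 7 of 36 rolls lose for White,
# the stake is $10, and the cube doubles it to $20.
losing_rolls = 7     # 1-1, 2-1, 1-2, 3-1, 1-3, 3-2, 2-3
total_rolls = 36
stake = 10

# Bigfish accepts: wins $20 on a losing roll, loses $20 on the other 29.
ev_accept = losing_rolls * 2 * stake - (total_rolls - losing_rolls) * 2 * stake
# Bigfish declines: loses $10 every single time.
ev_decline = -total_rolls * stake

print(ev_accept)    # -440 over 36 games
print(ev_decline)   # -360 over 36 games

cost_per_game = (ev_decline - ev_accept) / total_rolls
print(round(cost_per_game, 2))   # 2.22 dollars thrown away per accept
```

Running the numbers this way makes the $2.22-per-decision price tag hard to argue with, whatever the dice did on any particular night.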

You might protest that making the above mistake is not "exactly" the same as losing $2.22. Nevertheless, how similar it is follows from the law of large numbers [5], which in a nutshell states that luck almost always cancels out in the long run. So while the present decision may not cost Bigfish $2.22 (indeed it hasn't!), if you play him long enough, you will win a lot of money if he continues to make such errors (and if you don't!).

The above is meant to shatter two common fallacies: “I was right because I won” and “It doesn’t matter since luck will be deciding it anyway”. It should also clarify how professional players manage to make money so consistently in games like backgammon. One reason why they make more money than lesser players is that they have learned to compute rather than relying on outcomes only.

2. Bias or Adaptation ?

Building on numerous monetary examples similar to our backgammonish one, die-hard economists suggest that we should rate every decision by discarding outcomes completely and computing mathematical expectations as we did above. But have we really risen so high above our biological evolution? Die-hard evolutionary psychologists, on the other hand, argue that the "bias" label has often been applied too hastily, and that it might only be cultural norms that depict us as acting against our actual interests. Since our psychology has been shaped by natural selection, which has a tremendous record of being smarter than we are, claiming that we know better could be a sin of vanity.

One more concrete objection is that the structure of the real world tends to be less stable and predictable than that of experimental settings. Another objection targets the general requirement for experimental subjects to dismiss irrelevant factors (e.g. the likely social status of the experimenter), although in real life these factors can easily be relevant. In the concise words of Steven Pinker, "outside of school, of course, it never makes sense to ignore what you know" [3].

Can such objections apply to the outcome bias? Definitely! The first point is that reality might not always be what you think it is. For instance, people who discard outcomes completely would never grow suspicious of a cheater who performs a little too well. And what about social factors? In all societies, people who have enjoyed a lot of good outcomes tend to move towards the high end of the power scale, and it might be an adaptive strategy to let might make right. Maybe flattering winners is just erring on the side of caution.

Nevertheless, Pinker's nice aphorism is factually wrong. Backgammon is far from the only domain with countless cases where it makes sense to ignore something you know. For instance, when a merchant is trying to sell you a very expensive car, it makes perfect sense for you to ignore the social cues given by his likeable behaviour, which suggests a reciprocity that is definitely not called for.

As for the outcome bias, it cannot be only a good thing. Consider the classic story of a trader (or entrepreneur) who has quickly (and luckily) become rich. If he concludes that making money is easy, he is at a high risk of losing everything again with recklessly optimistic actions. Surely that must be maladaptive even in the strictest biological sense.

The bottom line is that neither the die-hard economist nor the die-hard evolutionary psychologist is right (for a more in-depth discussion, see [4]). Indeed, the last section will show that a subtler form of outcome bias could be hidden behind the tendency of scientific minds confronted with such questions to gravitate towards one of the two extreme solutions, even when the correct answer lies somewhere in between...

3. Beyond the outcome bias with more backgammon

The conclusion of the previous section might be a bit disappointing, since we haven't learnt much about how to overcome the outcome bias without becoming outcome-blind. The uplifting message of the present section is that there exist ways of keeping the best of both worlds. One of them is called variance reduction and will again be exemplified by backgammon.

Don’t go straight to Wikipedia and be put off by the barbaric formulae of variance reduction for statisticians. The root concept is actually rather simple. The idea is to start with the actual outcome, but to mitigate it with every factor that we can identify as pure luck2. When applied correctly, that idea has the virtue of shortening “the long run”, and can give reliable results from a much smaller sample of trials.

In backgammon, variance reduction has been successfully applied to the computer simulations that are currently the best way of assessing a particular position (starting from the given position, the program plays a ton of games against itself and records the results). According to expert David Montgomery, using variance reduction in such simulations makes each game worth about twenty-five games information-wise [2].

But that stuff is for computer nerds, you think. Not necessarily! Here is how the idea of variance reduction can be applied "by hand" in a backgammon game by a clever player.

[Image: outcome_bias_position_2.jpg]

In the above position you have White and quite a nice blockade against Lesserfish's lonely back checker. If you manage to extend that blockade to six consecutive points, the checker will be definitively trapped and the game will be almost won. So all you need is to add the point labeled "10" to your blockade, a task on which you can work in relative safety.

You offer the doubling cube, and Lesserfish accepts it much to your surprise. By virtue of a computation similar to our first example (but omitting some subtleties), she needed about 25% winning chances in order to correctly accept your cube; it seems clear to you that she will win far less often than that. You then proceed to win the game, which would very much confirm your initial assessment if you were prone to the outcome bias.
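The 25% figure can be derived with the same kind of computation as in Section 1, under the standard simplification that recubes and gammons are ignored (the "subtleties" the text sets aside): declining the cube costs 1 unit for sure, while accepting wins 2 units with probability p and loses 2 units otherwise. The break-even p makes the two options equal.

```python
# Break-even "take point" for a money cube, ignoring recubes and gammons:
# solve 2p - 2(1 - p) = -1 for p.  Fractions keep the arithmetic exact.
from fractions import Fraction

p = Fraction(1, 4)                  # candidate take point (25%)
ev_accept = 2 * p - 2 * (1 - p)     # expected units when accepting the cube
ev_decline = Fraction(-1)           # certain loss of 1 unit when declining

assert ev_accept == ev_decline      # both equal -1 unit exactly at p = 25%
print(float(p))                     # 0.25
```

Any winning chance above 25% makes accepting the better option, which is exactly what will matter once the variance-reduced estimate is in hand.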

But during the course of the game, you have been attentive enough to notice some turns where your opponent had one or more escaping rolls, when you were slightly lucky that she didn't roll them. For instance, on her first roll Lesserfish could already have escaped your blockade with a 6-1 or 1-6 (a 5.5% chance), after which she would have been a slight favourite to find her way home and win outright (say 3% of won games). The same happened one move later. Then some moves later, she could escape with a 6 (30% escaping chances, about 20% immediate wins). That means that over the course of the game, you have been lucky for an amount of about 26%, making 29% if we add 3% of remaining chances for Lesserfish after you have completed your blockade. In the other dice rolls you didn't notice anything especially lucky or unlucky.

So your quick and dirty variance reduction corrects your result (100%, since you won) by 29%, telling you that, luck set apart, you won only "71% of a game" (that sounds absurd, but such is your variance-reduced outcome). Now Lesserfish's decision doesn't seem so unlikely to be correct after all. And indeed, a computer simulation (with variance reduction, of course!) shows that she had as many as 27% winning chances, making her right to accept your doubling cube!
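The hand bookkeeping above amounts to a tiny ledger: start from the raw outcome and subtract each identified slice of luck. The percentages are the article's own estimates, simply restated.

```python
# Quick-and-dirty variance reduction: raw outcome minus identified luck.
raw_outcome = 1.00   # you won the game (100% of a game)
luck = [
    0.03,   # first roll: 6-1/1-6 escape missed (5.5% chance, ~3% equity)
    0.03,   # the same opportunity missed one move later
    0.20,   # later on: any 6 escapes (30% chance, ~20% immediate wins)
    0.03,   # residual chances left after the blockade is completed
]
adjusted = raw_outcome - sum(luck)
print(round(adjusted, 2))   # 0.71 -- you won "71% of a game"
```

Recording 0.71 rather than 1.00 in your mental database is precisely what makes a single game worth more information, as the simulation literature cited in [2] suggests.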

Naturally the above example is contrived and the one-game approximation will generally not be that precise. But in view of the likely future recurrences of such a position, you would be much better off if you intuitively learned to register variance reduced outcomes into your mental database, rather than actual outcomes.

4. The outcome bias strikes back with false dichotomies

For a final twist, let me show in a (hopefully) somewhat original setting how sticky the outcome bias can be when it comes in disguise, even to someone well aware of it, and who should really know better (me). Here is a personal experience with the Swiss health care system, tweaked for simplicity in a way that preserves the cognitive illusion.

Every year, like all Swiss people, I have to choose among the following three variants.

Variant 1: No deductible, $1000 premium
Variant 2: $1000 deductible, $400 premium
Variant 3: $2000 deductible, no premium (if only that were true!)

For instance, with variant 2 and $1500 of medical expense, my insurance would cover only the last $500 of those $1500. So my yearly cost would be $1000 plus $400 (the premium, which I have to pay in any case), for a grand total of $1400.

The system looked simple enough for me to approach it the economist's way. I drew the following graph in order to express my yearly cost as a function of my medical expense, for each of the three variants.

[Image: outcome_bias_graphique.JPG]


A quick look at the graph told me that whatever my medical expense, variant 2 could never be the best choice. For high medical expense, variant 1 (no deductible) fared better, while for low medical expense, variant 3 (no premium) was the way to go. For intermediate medical expenses such as $1000, variant 2 was even the worst of the three. That made clear that only a sucker would choose variant 2. The rational solution was to figure out my expected medical expense, and to choose variant 1 or 3, depending on whether that expected expense was below or above $1000. Elementary, my dear Watson?

In fact, tempting though it was, and though I have since read the exact same advice in two different magazine articles, the above reasoning was dead wrong, as I realized later and as can be shown by introducing an explicit uncertainty into the medical expense.

Suppose that you hate doctors and drugs, but that due to some condition you sometimes get a serious seizure, for which you must be urgently hospitalized. Every year there is approximately a 50% chance that you get a seizure and incur $2000 of medical expense; otherwise you will have zero medical expense. Let us see what you can expect to pay under each variant.

Variant 1, no deductible: $1000 (=premium)
Variant 2, $1000 deductible: $400 (=premium) + 50% (= probability) * $1000 (=expense – deductible) = $900
Variant 3, $2000 deductible: 50% * $2000 (=expense) = $1000
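The three expectations above follow one simple rule: you always pay the premium, plus your expenses up to the deductible. A short computation, using only the 50/50 seizure model just stated, reproduces the numbers.

```python
# Expected yearly cost under the 50/50 seizure model from the text:
# expense is $2000 with probability 0.5, otherwise $0.
variants = {
    "Variant 1 (no deductible, $1000 premium)": (0, 1000),
    "Variant 2 ($1000 deductible, $400 premium)": (1000, 400),
    "Variant 3 ($2000 deductible, no premium)": (2000, 0),
}

def expected_cost(deductible, premium, p_seizure=0.5, expense=2000):
    # The premium is paid in any case; expenses are paid up to the deductible.
    out_of_pocket = min(expense, deductible)
    return premium + p_seizure * out_of_pocket

for name, (deductible, premium) in variants.items():
    print(name, expected_cost(deductible, premium))
# Variant 2 comes out cheapest at $900, despite never being best ex post.
```

Sweeping p_seizure from 0 to 1 also shows when each variant wins: variant 3 for low risk, variant 1 for high risk, and variant 2 over a band in the middle.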

Surprise: in that case the variant "for suckers" is your best bargain!3 Let us unfold my "rational" line of thought again in order to understand where it went wrong. At first, I circumvented the outcome bias by making sure to consider all possibilities of medical expense. But after having drawn the graph, I acted as if I could know my medical expense after all. In doing so, I managed to commit the outcome fallacy on a hypothetical outcome!

The above example may seem artificial because it arises from a set of arbitrary rules. But the exact same situation can happen when considering yet to be validated scientific theories. Suppose that you have three competing scientific theories A, B and C that await a common experimental test, which unfortunately requires a more advanced technology than we presently have. If the result of the test comes “true”, then A is more likely than B, which is more likely than C. If the result of the test comes “false”, then C is more likely than B, which is more likely than A.

When thinking about which theory might be the best one, many scientists may gravitate towards one of the extremes A and C and consider B an inferior theory, even though the experiment in question has not been done. But as our insurance policy computation has shown, it is a very real possibility that, in the state of our present knowledge, B is indeed the most likely theory!

Summary

  • The outcome bias is adaptive in many situations, but very maladaptive in other, real-world situations. It can and should be overridden in those latter cases.
  • A well-balanced escape is to mitigate outcomes with the help of variance reduction.
  • Nevertheless, it is unlikely that any amount of cognitive weaponry can make a deeply ingrained bias like the outcome bias disappear completely. Cognitive biases that we are aware of can still creep in when the settings are more concealed.
  • Don’t disparage “middling” solutions! A choice that never yields the best possible outcome in any case can very well be the best choice overall.
Notes

1 In our example one could rightly point out another possible bias kicking in: the self-serving bias, whereby our own decisions seem superior to us just by virtue of being ours. But players like our Bigfish are also known for beating themselves up after they have correctly taken a big risk and subsequently been crushed because the risk didn't pay off. In that latter case, the outcome bias clearly supersedes the self-serving bias.
2 Of course, the identification of such factors is more easily said than done.
3 The sad news is that if you correctly solve the insurance problem, you will hardly be able to brag about it. Every year, the outcome will be suboptimal. If you haven't had any seizure, then you would have been better off with the $2000 deductible; if you have had one, then you "should" have opted for no deductible, as your spouse will certainly say.

References

[1] Gino, F., Moore, D.A., Bazerman, M.H., No harm, no foul: The outcome bias in ethical judgments, Harvard Business School, 2008
[2] Montgomery, David, Variance Reduction, http://www.bkgm.com/articles/GOL/Feb00/var.htm
[3] Pinker, Steven, How the Mind Works, Norton, 1997
[4] Stanovich, Keith, The Robot's Rebellion, University of Chicago Press, 2004
[5] Wikipedia, The law of large numbers, http://en.wikipedia.org/wiki/Law_of_large_numbers
[6] Wikipedia, Outcome bias, http://en.wikipedia.org/wiki/Outcome_bias

Re: 2nd Monthly Science writing Competition - Submissions

#13  Postby Mazille » Dec 24, 2010 3:45 pm


!
MODNOTE
Thanks to you folks for your contributions. :cheers:

Submissions closed.

3rd Monthly Science Writing Competition - Submissions

#14  Postby Mazille » Feb 15, 2011 9:39 am

Whip out the textbooks, start up your favourite search engine and get going! The new round is upon us and we want you to type your fingers bloody.

This month's topic is: "What area of scientific research do you think will prove to be the most important by the end of this century, and why?"

Take a look at the cutting edge of scientific research and tell us which field looks the most promising to you. AI research, personalized genome-based medicine, or the neurosciences? Give us your speculations, based on current facts and rational reasoning.

The competition starts as of now. Participants can post their entries in this "Submissions" thread, while everyone else is cordially invited to comment on the relative merit of the entries in the "Discussion" thread here.


!
MODNOTE
Attention! The deadline has been extended!

Submissions can be entered until Tuesday, 29th of March which is when the voting will start. Voting will end on Tuesday, 5th of April.

Articles have to meet all the criteria laid down in the rules here:

The Monthly RatSkep Science Writing Award

We have a lot of professional scientists and very well-versed laymen on the forum and so we decided to make use of those formidable intellectual resources. We challenge you to write an article about a specific topic - which will be revealed later on - and enter it into a competition for "The Monthly RatSkep Science Writing Award"!

Now, to give you an idea of how this competition is going to work:
Every month we will give you the opportunity to take part in this competition. The goal is to write the best article covering a scientific topic of your choice - although with certain restraints. For each round of the competition we will set a general topic (e.g. "Our Solar System", or "The Subatomic World"), from which you can choose any field of interest to write about. After we have announced the general topic of a new round, competitors will have three weeks' time to write their articles and enter them in the competition (see below for formal criteria), and after those three weeks users will have another week to vote for the best scientific article.

How is entering the competition and voting going to work?

1. We will have one thread where people can post their articles and enter them in the competition. This thread will be moved from public view after each round of the competition and a new one will be opened for the next round. Only competitors may post there, and the articles will have to be approved by the staff, just like in the Formal Discussion forum.
2. We will have another thread, where users can argue about the merits of each article that entered the competition and where they will be able to vote for their favourite article in the last week of the round via a poll. Every member will have the ability to cast three votes.
3. There will be a third thread, where we collect all the articles that ever entered the competition. This way you will have access to a whole thread full of scientific goodness.
4. After the general topic of a round has been announced, participants will be given three weeks to write and submit their works. Within this time the commentary-thread will be open for relevant discussions, but voting will still be disabled. After these three weeks users will be given one week to submit their votes. At the end of this period the winners will be announced and the submitted articles will enter the "Hall of Fame"-thread. Shortly after that a new round of the competition with an entirely different general topic will start.
5. Each participant may only enter one article per round into the competition.



What are the formal criteria for the articles?

1. Every article you enter into the competition has to be your own original work. Here on RatSkep we do not look kindly on plagiarism. It is, however, allowed to enter the competition with an article you have already posted earlier here on RatSkep, provided it meets the rest of the following criteria.
2. Articles have to be at least 500 words long and must not exceed 3,000 words. The maximum number of pictures and graphics is one picture per 500 words of text.
3. Articles have to include an index of their sources, if you used any, and direct quotes have to be credited to the original author. We don't want to impose a specific quotation system on the competitors, but keep it clear, easily readable and stick to one system per article.
4. Articles must cover either the general topic, or an appropriate sub-topic to ensure comparability of your efforts.
5. Of course, the articles will have to be within the limits of the FUA, as usual.



Any article that does not meet all of the above criteria will be disqualified and cannot be entered in the competition.

Why should I enter the competition?

First of all, this is the perfect opportunity to show off your superior knowledge. Furthermore, the winners will get some shiny stuff:

1. The authors of the Top 3 articles will get a nice banner for their signatures, in gold, silver and bronze respectively. The authors of those three articles can keep these banners as long as they want since there are going to be new banners for each new round of the competition.
2. The best entries will also be featured in a prominent spot on our shiny new front-page as soon as LIFE manages to get it up and running.
3. And last, but not least, we might have a few surprises in store for you...




Good luck and have fun! May the best articles win. :cheers: We are looking forward to your contributions.

Re: 3rd Monthly Science Writing Competition - Submissions

#15  Postby Primate » Mar 29, 2011 7:44 pm

The Power of Perspective

The pace of science is astounding. Today we have jumbo jets, iPhones, flat-screen TVs, and myriad other technological doodads hailed as the latest-and-greatest of the modern world, yet people are alive today who lived before man had learned to fly. Who in this not-so-distant past could have foreseen the truly awesome technology of today, which becomes old news and outdated practically the same year as its release? A century may seem like a long time, and one speaks of events such as the sinking of the Titanic or man's first venture into flight as belonging to a bygone era when the world was black and white, and people were just learning to dance. It is an enlightening jolt when one takes in the fact that there are people alive today who witnessed the turn of the 20th century. Could these supercentenarians have ever predicted, even in their most rational reverie, the state of the world they would eventually witness in their own biological antiquity? And could they have pondered, in a brief snapshot of their youth, which vein of science would prove the most fruitful or important before they took their final bow and the curtains closed?

Image

Importance. How does one even begin to address or contextualize the backbone of what is important? The meaning of the term is above all a personal one, and is therefore difficult to pin down without sufficient introspection. One may identify importance on many levels, ranging from the truly personal to the global, and everywhere in between. Another dimension to consider when contemplating importance is time: what is important now versus what will be important in the future? The constraints of the essay topic, however, confine the temporal dimension to a single century. And since a century is very roughly one human lifetime, and societal change rarely occurs but over successive generations, what is societally and globally important may only come to fruition on timescales exceeding one hundred years. How, then, can one care intimately about what is important if one will not live to see the fruits of one's labor?

The only solution to this problem is to find something that should be immensely important to the individual, whose benefits one can reap in one's own lifetime, and that will have positive corollaries that pervade society as a whole. A philosophical enlightenment brought about by hard science, then, seems a prime candidate, for nothing (I feel) is more important than an enlightened mind, and the added benefit of an enlightened society, or world, should not need explicit delineation. The scientific field most capable of bringing about this personal and global enlightenment is human population genetics and phylogeography, and more importantly, how it deals with the question of human origins and evolution. Evolution has been called a "universal acid" because of the way it figuratively eats through one's most cherished beliefs and because it seems to permeate all veins of life and worldly perspective. Human population genetics has the potential to do away with many outdated and dangerous ways of thinking, such as racism, sexism, and religion, to name but a few.

Since it is my goal not only to persuade, but to educate, I would like to elucidate and walk through some of the science I feel is of the type needed to enlighten minds and potentially change the world. Our discussion will deal with the science of human origins, and more specifically a comparison of the Out of Africa and Multiregional hypotheses pertaining to these origins.

Phylogeography, the branch of science that studies the geographic distribution of gene lineages, and therefore the past migratory patterns of species, is grounded on the sturdy foundation of population genetics. The methods of this field have only somewhat recently been applied to our own species, Homo sapiens, and have had some success at settling a debate about human origins that has long plagued the minds of anthropologists and evolutionary biologists alike. What has been known for some time via the fossil record is that Homo erectus, a hominid thought to be a precursor to modern H. sapiens, was extant in Africa about one million years ago (1 Mya), and that what are referred to as "archaic" Homo sapiens (such as H. rhodesiensis and H. neanderthalensis) came on the scene approximately 300,000 years ago (300 Kya). It was also known that these archaic forms of H. sapiens emerged from Africa and thence became widely distributed throughout Asia and Europe. Now, the question is: did these archaic forms of H. sapiens evolve in situ into modern H. sapiens (Homo sapiens sapiens), or was there another migratory wave out of Africa that competitively excluded the archaic forms into extinction? Each hypothesis makes its own set of predictions about what one should find at the genetic level, and those predictions will now be discussed.

The first hypothesis, that archaic forms evolved in situ into modern humans is known as the Multiregional Hypothesis. This model holds that gene flow, though somewhat restricted, is responsible for the spread of modern traits. Furthermore, if modern humans evolved from their respective archaic forms, and their distribution was so vast as to include Africa and Eurasia, then one would expect genetic drift to be significantly limited, which would in turn suggest that substantial genetic differences would have been able to accumulate between populations. The Multiregional Hypothesis predicts that differences found at the genetic level should, in principle, be traceable to differences that evolved between populations of H. erectus, and archaic H. sapiens in Africa, dating back nearly 1 Mya. Also, if I may restate, this model predicts substantial genetic differences between human populations, and also that, since modern man had evolved from archaic forms, modern man should also carry a large subset of archaic DNA. How do the predictions of the Out of Africa Hypothesis (also called the Replacement Hypothesis) differ?

The Out of Africa Hypothesis holds that, while archaic forms spread out of Africa to colonize Eurasia, modern sapiens evolved in Africa, and this modern form then embarked on a second excursion from the motherland into Eurasia, thereby driving archaic sapiens extinct by competitive exclusion without interbreeding. That is, this modern form was a biological species. The predictions of this model are drastically different from those of the Multiregional one in that, for one, we should not expect to find any, or at least very little, Neanderthal DNA in our modern genome. Also, if Africa was the incubator for modern man, we would expect to see the most genetic diversity among African populations, and less and less diversity in populations farther removed from Africa. The reason for this is that each new population founded away from Africa must have consisted of but a small subset of the parent population as a whole; in other words, the founding of a new population represents a genetic bottleneck, reducing the genetic variation with every subsequent colonization farther and farther away from Africa.
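The logic of serial bottlenecks can be made concrete with a toy calculation (my own sketch, not part of the original essay): a standard population-genetic expectation is that a founder event of N individuals reduces expected heterozygosity, a common measure of genetic diversity, by a factor of (1 - 1/(2N)). The starting diversity, founder count, and number of colonization "hops" below are purely illustrative assumptions.

```python
# Toy model of the serial founder effect: each new colony is founded by
# `founders` individuals, and each founder event multiplies expected
# heterozygosity by (1 - 1/(2N)).
def heterozygosity_after_hops(h0, founders, hops):
    h = h0
    for _ in range(hops):
        h *= 1.0 - 1.0 / (2.0 * founders)
    return h

# A lineage hopping away from the source population, 50 founders per hop:
for hops in (0, 5, 10, 20):
    print(hops, round(heterozygosity_after_hops(0.30, 50, hops), 4))
```

Diversity declines geometrically with each hop away from the source population, which is the qualitative pattern described here for populations farther and farther from Africa.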

Image

So, which model is consistent with the facts? In order to make the data clearer, let me first explain a common way to measure genetic differentiation between populations. Scientists can use a value known as FST, which is equal to the proportion of variance due to differences between populations rather than within populations. It can be expressed mathematically as FST = 1 - (Hw / Hb), where Hw is the heterozygosity between chromosomes of individuals within the same population, and Hb is the heterozygosity between chromosomes of individuals from different populations. It is not overly important that you firmly grasp this concept mathematically; it is quite sufficient to know that low FST values indicate that most genetic differences are found among individuals within the same population, rather than between populations.
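The formula is simple enough to illustrate directly; here is a minimal sketch with invented heterozygosity values (the numbers are my assumptions, chosen only to show how the statistic behaves):

```python
def fst(h_within, h_between):
    """F_ST = 1 - (Hw / Hb): the proportion of genetic variance due to
    differences between populations rather than within them."""
    return 1.0 - h_within / h_between

# Nearby populations: individuals from different populations are barely
# more different than individuals from the same one -> low F_ST.
print(round(fst(0.28, 0.30), 3))  # 0.067
# More distant populations: Hb exceeds Hw by more -> higher F_ST.
print(round(fst(0.22, 0.30), 3))  # 0.267
```

When Hw approaches Hb, FST approaches zero, which is why a smooth rise of FST with geographic distance is such a telling signature.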

Granted, the differentiation in allele frequencies among humans is very low, yet, if the Out of Africa Hypothesis is correct, we should nevertheless expect our values of FST to increase rather smoothly with physical distance between populations. This trend, as predicted, is exactly what we find.

Image

Given the above data, we should then predict, if Africa (more precisely East Africa) was the birthplace of modern H. sapiens, a decrease in genetic diversity as populations are farther and farther removed from Africa. This, too, is exactly what the data indicates.

Image

As you may have guessed, the Out of Africa Hypothesis is currently favored over the Multiregional Hypothesis, and for good reasons. However, there are some data that indicate that some nuclear genes have lineages nearly 2 million years old and seem to have spread not from Africa, but from Asia to the rest of the world including Africa. Odd bit of data indeed, but, despite the few bumps in the road, they do not seem to be deep enough to topple the Out of Africa Hypothesis from its current favored position. The next hundred years, however, will surely illuminate the big picture of Homo sapiens and our phylogeographic patterns.

All of this talk of human origins, genetic diversity, and FST values may be well and good, but why should anyone care about it? Sure, this kind of knowledge won't equip you with the tools necessary for everyday survival (unless, that is, you make a living teaching this sort of stuff), but the value and importance of ideas are not, or at least should not be, measured solely by whether they are conducive to the immediate acquisition of money or material goods. Rather, they should also be measured by the way they enlighten the mind and sow the seeds necessary for further enlightenment. These types of ideas are of tremendous importance above and beyond that of mere gratifying indulgence. Above all, this kind of enlightenment will not be the sort of pseudoenlightenment one gains from religious creeds, since this knowledge is firmly grounded in science.

Pseudoenlightenment, as I have called it, may indeed have benefits to an individual and all those part of the in-group, but ideas—memes—have a tendency to spread like brushfire, and if these ideas are not grounded in evidentiary support, prejudice is bound to run rampant, and society as a whole suffers. In fact, and in large part, racism, homophobia, sexism, and anti-choice movements are the products of a pseudoenlightenment, and the fact that they stem from belief systems based on faith makes them impenetrable to reason. If bad, unfounded ideas can spread so fast (take Mormonism and Scientology for example), shouldn’t well-grounded, good ideas spread all the more quickly? And is there any aspect of society that wouldn’t benefit from a firm grasp of our evolutionary history as a species? Shouldn’t racism wilt with the knowledge that we all originated from a common motherland? And shouldn’t nearly all other aspects of ignorant societal bias equally disintegrate in the wake of a heightened understanding of evolutionary explanations for behavior and a firm grasp of biological processes, unobstructed by the lens of preconceived and miasmic notions of a soul?

Human origin is but a small tile in the mosaic of scientific inquiry, yet each tile is a unique sparkling gem of insight, a tiny aspect of the masterpiece that is reality. If the trend of scientific advance in the past is any indication of what we are to expect in the future, such as the 66 years it took from humankind's first flight at Kitty Hawk to putting a human on the moon, imagine what the science of phylogeography and population genetics, two areas of research that have only recently taken flight, have in store for us by the end of the century. I venture no guess as to where this branch of science will be in 100 years' time, but I do know that, given the track record of science, our past will be illuminated and we will be in a position of greater enlightenment. Furthermore, if these ideas become viral, the future of humankind will be an environment where faith is no longer a virtue, and understanding reigns supreme.



WORKS CITED:

Avise, J. 2000. Phylogeography. Harvard UP, Cambridge, MA.

Coop, Graham. 2011. Introduction to Evolution (EVE 100). University of California, Davis. Winter quarter PowerPoint lecture.

Finlayson, C. 2005. Biogeography and evolution of the genus Homo. Trends Ecol. Evol. 20:457-463.

Futuyma, Douglas. 1998. Evolutionary Biology, 3rd edition. Sunderland, MA: Sinauer Associates Inc.
---. 2009. Evolution. Sunderland, MA: Sinauer Associates Inc.

Garrigan, D., and M. F. Hammer. 2006. Reconstructing human origins in the genomic era. Nature Rev. Genet. 7:669-680.

Green, R. E., et al. 2006. Analysis of one million base pairs of Neanderthal DNA. Nature 444:330-336.

Horai, S., et al. 1995. Recent African origin of humans revealed by complete sequences of hominid mitochondrial DNAs. Proc. Natl. Acad. Sci. USA 92:532-536.

Ruvolo, M., et al. 1993. Mitochondrial COII sequences and modern human origins. Mol. Biol. Evol. 10:1115-1135.

Takahata, N. 1995. A genetic perspective on the origin and history of humans. Annu. Rev. Ecol. Syst. 26:343-372.

Templeton, A. R. 2007. Genetics and recent human evolution. Evolution 61:1507-1519.

Vigilant, L., et al. 1991. African populations and the evolution of human mitochondrial DNA. Science 253:1503-1507.

Wolpoff, M. 1989. Multiregional evolution: The fossil alternative to Eden. In P. Mellars and C. Stringer (eds), The Human Revolution, pp. 62-108. Princeton UP, Princeton, NJ.


Re: 3rd Monthly Science Writing Competition - Submissions

#16  Postby hackenslash » Apr 04, 2011 10:38 pm

Wow, Man! That's Really Heavy
The Higgs Particle And Its Possible Implications


I usually tend to stay away from idle speculations, but this topic looks like it might be fun, so I want to focus on what could possibly be one of the areas of greatest progress in science in the next century, along with what could be some of its implications.

First, as always, a little background.

Image


The above image is a picture of the standard model of particle physics. In it, we can see all the particles, grouped into the three families of fermions along with the bosons. The fermions, or matter particles, include the quarks, of which protons and neutrons are composed, and the leptons, comprising the electron, the muon, the tau and the various species of neutrino.

Then we have the bosons, which are the messenger particles of the forces. The bosons consist of the photon, mediator of the electromagnetic force, the gluon, mediator of the strong nuclear force, and the W and Z bosons, mediators of the weak nuclear force. Bosons are divided into two groups, the vector bosons, with spin 1, and the scalar bosons, with spin 0. All the known bosons to date are vector bosons.

Each particle can be described by three parameters, namely mass (given in electron volts (eV); all masses in the table should be divided by c² for their true values, so that the mass of the electron is 0.511 MeV/c²), charge and spin. The latter is not to be confused with angular momentum in the classical sense; it is more like a description of what particles look like from different angles. For example, a spin 1 particle requires a full revolution before its appearance is the same from a given vantage point, a spin 2 particle requires only half a revolution, and so on. Hawking presents the following analogy in A Brief History of Time:

Image


We can think of the ace as a spin 1 particle, in that it requires 1 complete revolution before it looks the same from a given perspective. The queen, however, only requires half a revolution and it looks the same. When we get to particles of spin ½, we get the extremely counter-intuitive notion that a particle must go through 2 full revolutions before it 'looks' the same [1].
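The card analogy boils down to a simple reciprocal rule: a particle of spin s looks the same after 1/s revolutions. A throwaway sketch of this mnemonic (my own illustration, not from the source text):

```python
# Revolutions a particle of a given spin needs before it 'looks the same'
# from one vantage point, per Hawking's playing-card analogy.
def revolutions_to_look_same(spin):
    return 1.0 / spin

for name, spin in [("queen card / spin 2", 2.0),
                   ("ace card / spin 1", 1.0),
                   ("electron / spin 1/2", 0.5)]:
    print(name, "->", revolutions_to_look_same(spin), "revolution(s)")
```

The spin ½ case gives 2.0 revolutions, which is the counter-intuitive result the paragraph above describes.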

There are, however, some things missing from the above table. We have no messenger particle for gravity, and no explanation for mass! But how can this be? The standard model of particle physics is supposed to be an attempt to explain everything.

Well, there are several proposed solutions, some of them arising out of other areas, such as the prediction of the graviton arising from Quantum Field Theory. The graviton would be the gravitational boson, or mediator of the gravitational force. It is postulated to be massless, due to the unlimited range of the gravitational field, and to have spin 2, due to the way it must interact with the stress-energy tensor, a quantity that describes the flux and density of energy and momentum in spacetime.

However, I want to focus on the standard model's own postulated resolution to this problem, namely the Higgs boson and the Higgs mechanism.

The Higgs boson is postulated to be the first example of a scalar boson, which means that it has spin 0. It is also postulated to be a massive particle, like the W and Z particles that mediate the weak nuclear force. Its mass is not specified, but is postulated to fall within a range of between 100 and 1,000 times the mass of a proton (the proton mass being 0.938 GeV/c²; masses of particles are often discussed in terms of multiples or fractions of the proton mass) [2].

So, just what is mass, and how does it work under this proposal? In its most basic treatment, mass can be defined as 'resistance to acceleration'. This definition doesn't hold in all cases without qualification, not least because acceleration due to gravity is actually a function of mass, but it will suffice for our purposes here. We should also properly define acceleration as 'change in velocity'. This includes speeding up, slowing down and changing direction, all of which come under the umbrella of velocity, a vector quantity.

So how does the Higgs mechanism work? This question was famously put by William Waldegrave, Conservative Science Minister under John Major (the boy who ran away from the circus to be an accountant), who was concerned about research being conducted at taxpayers' expense when nobody understood what it was about. He launched a challenge to particle physicists in the UK to explain, in layman's terms, what the Higgs boson was and how the Higgs mechanism worked. One of the responses, given by David Miller, professor of physics and astronomy at University College London, is now one of the most often-quoted analogies in science, ranking with the 'light-clock and train' analogy for Special Relativity. I'll reproduce it here in full.

Prof David J. Miller wrote: 1. The Higgs Mechanism
Imagine a cocktail party of political party workers who are uniformly distributed across the floor, all talking to their nearest neighbours. The ex-Prime- Minister enters and crosses the room. All of the workers in her neighbourhood are strongly attracted to her and cluster round her. As she moves she attracts the people she comes close to, while the ones she has left return to their even spacing. Because of the knot of people always clustered around her she acquires a greater mass than normal, that is, she has more momentum for the same speed of movement across the room. Once moving she is harder to stop, and once stopped she is harder to get moving again because the clustering process has to be restarted. In three dimensions, and with the complications of relativity, this is the Higgs mechanism. In order to give particles mass, a background field is invented which becomes locally distorted whenever a particle moves through it. The distortion - the clustering of the field around the particle - generates the particle's mass. The idea comes directly from the Physics of Solids. Instead of a field spread throughout all space a solid contains a lattice of positively charged crystal atoms. When an electron moves through the lattice the atoms are attracted to it, causing the electron's effective mass to be as much as 40 times bigger than the mass of a free electron. The postulated Higgs field in the vacuum is a sort of hypothetical lattice which fills our Universe. We need it because otherwise we cannot explain why the Z and W particles which carry the Weak Interactions are so heavy while the photon which carries Electromagnetic forces is massless.

2. The Higgs Boson.
Now consider a rumour passing through our room full of uniformly spread political workers. Those near the door hear of it first and cluster together to get the details, then they turn and move closer to their next neighbours who want to know about it too. A wave of clustering passes through the room. It may spread out to all the corners, or it may form a compact bunch which carries the news along a line of workers from the door to some dignitary at the other side of the room. Since the information is carried by clusters of people, and since it was clustering which gave extra mass to the ex-Prime Minister, then the rumour-carrying clusters also have mass. The Higgs boson is predicted to be just such a clustering in the Higgs field. We will find it much easier to believe that the field exists, and that the mechanism for giving other particles mass is true, if we actually see the Higgs particle itself. Again, there are analogies in the Physics of Solids. A crystal lattice can carry waves of clustering without needing an electron to move and attract the atoms. These waves can behave as if they are particles. They are called phonons, and they too are bosons. There could be a Higgs mechanism, and a Higgs field throughout our Universe, without there being a Higgs boson. The next generation of colliders will sort this out.


The last line is a teaser for what's happening now. The primary function of the construction of the Large Hadron Collider in Geneva is an attempt to isolate this elusive particle. Given its mass range, and the relationship between the mass of a particle and the energy required to detect it, the lower end of the postulated mass range at around 93.8 GeV/c² is within reach of the newly upgraded Tevatron at Fermilab [3]. If it falls within any of the rest of the postulated range up to about 938 GeV/c² (being 1,000 times the proton mass, in case this figure seems arbitrary), then it should be well within the energy range of the LHC to detect it when it gets up to full power doing physics at 14 TeV. Indeed, the LHC has plenty of headroom in this regard!
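Since the postulated mass range is quoted in multiples of the proton mass, converting it to collider energy units is simple arithmetic; a small sketch, assuming the 0.938 GeV/c² proton rest mass used in the text:

```python
PROTON_MASS_GEV = 0.938  # proton rest mass in GeV/c^2

def proton_multiples_to_gev(multiple):
    """Convert a mass quoted as a multiple of the proton mass to GeV/c^2."""
    return multiple * PROTON_MASS_GEV

print(round(proton_multiples_to_gev(100), 1))   # lower end of the quoted range
print(round(proton_multiples_to_gev(1000), 1))  # upper end of the quoted range
```

The range thus runs from roughly 94 GeV/c² to just under 1 TeV/c², which is what makes the LHC's design energy comfortably sufficient on this simple accounting.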


Righty, then! Enough of the hard science, and on to some wild speculation!

What does all of this mean in terms of impact on humanity? Well, we can look to history for some ideas in this regard. Let's take a look at two areas in which our understanding of fields and their constituent particles has impacted us thus far:

The first to look at is the work of Faraday and then Maxwell (among others) on the unification of electricity and magnetism. Their work led to some pretty stunning inventions that today we take for granted, although none of this was foreseen when they were conducting their experiments.

It began with Faraday conducting experiments with the newly discovered electricity, in the time-honoured tradition of mucking about with it and seeing what happened. He noticed that when he ran pulses of electric current through a wire, the needle of a compass jumped about in time with the pulses. He also noticed that an electric current flowed through a coil of wire when a magnet was pushed through it [4]. Among my friends in music, I often cite this as the defining contribution to modern music, because it is this relationship elucidated by Faraday's experiments that led to the musical revolution of the 20th century, in the form of the electromagnetic guitar pickup, without which we would never have heard of Les Paul or Leo Fender, and Stijndeloose's treasured Shure SM58s would be nothing more than a pipe dream. In the words of 'that smiley faced fuck knuckle' (according to Campermon's missus):

Brian Cox wrote:These two simple phenomena, which now go by the name of electromagnetic induction, are the basis for generating electricity in all of the world’s power stations and all of the electric motors we use every day, from the pump in your fridge to the “eject” mechanism in your DVD player. Faraday’s contribution to the growth of the industrial world is incalculable. [5]


Maxwell's contribution was, of course, to formalise these ideas mathematically, and to give rigour to the concept of 'fields'. His field equations are one of the foundations of modern physics. Einstein said of Maxwell's work:

The Wild Haired Brainy One wrote:The special theory of relativity owes its origins to Maxwell's equations of the electromagnetic field.


And on the centenary of Maxwell's birth:

Since Maxwell's time, physical reality has been thought of as represented by continuous fields, and not capable of any mechanical interpretation. This change in the conception of reality is the most profound and the most fruitful that physics has experienced since the time of Newton. [6]


A minor but interesting digression at this point is apposite, because it demonstrates the regard Einstein had for Maxwell's work in the area of fields, and how it related to his own work. When, in 1919, an unknown German mathematician named Theodor Kaluza began studying General Relativity, he noticed something odd. When he solved Einstein's General Relativity equations in five dimensions, Maxwell's equations fell out in the solution! He wrote to Einstein, and Einstein encouraged him to publish, which he duly did in 1921 [7]. This was the first time that extra dimensions had been mooted, an idea that wasn't seriously discussed again until the advent of string theory in the late 1960s (with the exception of Klein's work in 1926).

Anyhoo, all this talk of Einstein brings me to my next example, namely Einstein's work on the photoelectric effect.

The story begins with Edmond Becquerel, whose 1839 observations of the effects of light on electrolytic cells (a related phenomenon, known as the photovoltaic effect) demonstrated a strong relationship between light and the electronic properties of materials [8]. This work was further built on by Smith (1873), and others, until the first observation of the photoelectric effect itself by Heinrich Hertz in 1887 [9]. Many contributed to the field until, in 1905, Einstein published a landmark paper describing how the photoelectric effect was caused by the absorption of photons [10] (he called them light quanta), an idea first put forward by Max Planck in 1901 in a paper describing his law of black-body radiation [11]. It was for this work that Einstein was awarded the Nobel Prize in 1921.

From this work stemmed such direct uses as photoelectric cells, as employed in solar panels, photoelectron spectroscopy, image sensors, night-vision binoculars and the gold-leaf electroscope. Indirectly, it led to the formulation of quantum mechanics, the applications of which we've probably only just begun to touch on, not least because it has radically altered our conception of how reality operates. The most obvious direct application of quantum-mechanical principles is in computing, in that the operation of microchips relies on a phenomenon known as electron tunnelling, a quantum-mechanical effect. Leo Esaki's exploitation of tunnelling in the tunnel diode in 1957, while working for the company that eventually became Sony, won him a shared Nobel Prize for physics in 1973.


So, back to the Higgs, and what its discovery might mean:

If we take into account the technologies above growing out of our understanding and manipulation of particles in other fields, we can begin to see what might arise from being able to understand the Higgs and how it operates. The Higgs is postulated to give rise to mass in other particles. So what might we be able to do if we can understand the mechanics of mass?

Well, the first and most obvious thing to speculate about is anti-grav technology. If we can learn to manipulate the mass of objects, and since weight is simply the effect of gravity on mass, we can see that one of the possible applications of understanding of the Higgs mechanism is manipulation of the Higgs field to the degree that we can literally defy gravity. This will mean safer air travel, because we won't have to do all that mucking about with aerodynamics and the employment of highly combustible fuels in order to get off the ground. It will also mean quicker air travel for several reasons, one of which I will come to, but one of which should be immediately obvious, in that anti-gravity technology will give us the ability to get much higher in the atmosphere where atmospheric friction is greatly reduced, because we won't have to worry about the requirement for sufficient atmospheric density to gain lift.

It also has some interesting corollary effects, not least in applications that deal with energy consumption in achieving escape velocity from Earth's gravity well. Extrapolating this even further, and thinking about the relativistic implications of mass in space travel, if we can manipulate the Higgs field, it may be that we can achieve velocities through space at significant fractions of c, or even travel AT c. The biggest obstacle to achieving light-speed travel is simply mass. As a body with mass approaches light-speed, its energy increases to such a degree that its effective mass becomes almost infinite very close to light-speed. This, of course, means that an infinite amount of energy is required to accelerate a massive body the rest of the way to c. The ability to manipulate the interaction of a massive body with the Higgs field may mean that we can, to all intents and purposes, make massive bodies massless. Since all bodies without mass necessarily travel at c, time stops, and travel becomes possible to the far reaches of the cosmos, although it should be noted, for all those alien visitation enthusiasts, that we wouldn't be returning to Earth within the lifetime of the human race, because relativistic time dilation effects relate to inertial frames, and although we could, from the perspective of the traveller, reach any point in the cosmos instantaneously, the amount of time that would pass on Earth would be a different matter. It does, however, open the door to colonisation in a reasonable time frame.

It would also allow us to do some other pretty interesting things. One of the things that's been talked about for a long time is a base on the moon. How, you might ask, does this relate? Well, one of the problems faced by any engineer working on the moon is simply operating in a reduced gravity field, and fabrication of modules is among those problems. If we could reasonably manipulate mass, we could build entire buildings on Earth under normal working conditions, make them essentially weightless, and transport them to the moon for minimal fuel expenditure. Fuel expenditure and its relationship to mass is the single biggest barrier to our moving out into space. Indeed, most of the consideration in any space mission is weight.

There is a maximum velocity that can be achieved by any rocket. Konstantin Tsiolkovsky, in 1903, derived an equation that deals with this.

Δv = ve ln(m0 / m1)


Where m0 is the initial mass, including propellant, m1 is the final mass, ve is the effective exhaust velocity and Δv is the maximum change in velocity, excluding external influences [12].
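As a quick numerical sanity check, the equation is easy to play with (a sketch; the masses and exhaust velocity below are illustrative, not taken from any real rocket):

```python
import math

def delta_v(m0, m1, ve):
    """Tsiolkovsky rocket equation: the maximum change in velocity
    for initial mass m0 (including propellant), final mass m1 and
    effective exhaust velocity ve, ignoring external influences."""
    return ve * math.log(m0 / m1)

# Illustrative figures: a 100-tonne rocket that is 90% propellant,
# with an exhaust velocity of 4,400 m/s (roughly that of a
# hydrogen/oxygen engine).
dv = delta_v(m0=100_000, m1=10_000, ve=4400)
print(round(dv))  # 10131 m/s
```

Note how Δv depends on the mass ratio only through a logarithm, which is exactly why mass is such a tyrant: doubling Δv requires squaring the mass ratio.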

Given the importance of mass in this equation, it is clear that, even with conventional propellants, the ability to manipulate mass radically affects these relationships. Given that even the fuel could be rendered effectively massless (assuming that our understanding of the Higgs field doesn't render this point moot), we could reasonably carry ridiculous amounts of fuel, and use that fuel more efficiently.

I hope that the above gives some pause to those who suggest that the LHC and similar experiments are a waste of money.

1. A Brief History Of Time - Hawking (1988)
2. The Fabric of the Cosmos - Greene (2004)
3. Higgs boson decays to CP-odd scalars at the Fermilab Tevatron and beyond - Dobrescu et al (2001)
4. Experimental Researches in Electricity - M. Faraday - Proceedings of the Royal Society (1854)
5. Why Does E=mc2 - Cox and Forshaw (2009)
6. Clerk Maxwell Foundation
7. Zum Unitätsproblem der Physik - T. Kaluza - Sitzungsberichte Preussische Akademie der Wissenschaften 96, 69 (1921)
8. Milestones of Solar Conversion and Photovoltaics - V. Petrova-Koch (2009)
9. Hertz - Annalen der Physik (1887)
10. On a Heuristic Viewpoint Concerning the Production and Transformation of Light - A. Einstein - Annalen der Physik (1905)
11. On the Law of Distribution of Energy in the Normal Spectrum - M. Planck - Annalen der Physik (1901)
12. One hundred and fifty years of a dreamer and fifty years of realization of his dream: Konstantin Eduardovitch Tsiolkovsky and the Sputnik 1 - Bhupati Chakrabarti (2007)
hackenslash

Re: 3rd Monthly Science Writing Competition - Submissions

#17  Postby Mr.Samsa » Apr 10, 2011 7:19 am

RADICAL BEHAVIOURISM AND SOCIETY


B. F. Skinner argued that the science of psychology is the study of behaviour and not the psyche or mind, and in doing so he challenged the lay understanding of human nature by asserting that hypothetical concepts like the mind cannot be used as explanations for behaviour as they themselves are behaviours that require causal explanations. So radical behaviourism is the philosophical approach that underpins the science of behaviour (Skinner, 1974) and it is an extension of methodological behaviourism which aimed to turn psychology into a natural science.

Because this metascientific position claims that behaviour is caused by its relation to the environment through a process of selection by consequences, the question of free will and determinism is necessarily raised. That is, if a science of human nature were possible and that science were able to predict and control our choices, then how truly free can our will be? The three main variations of determinism discussed in this essay are hard determinism, soft determinism and libertarianism – with free will being compatible only with the latter two categories. The importance of examining the implications that radical behaviourism has for free will is not merely academic; rather, the question of its existence significantly impacts the way we should develop and run society.

Radical Behaviourism


The notion that psychology should rely on objective measures, as opposed to the subjective approach of introspection, was outlined in John B. Watson's "Psychology as the behaviorist views it" (Watson, 1913). It was here that he extended Darwin's theory of evolution, arguing that since there is continuity between species, research on humans should be applicable to animals and vice versa, thus creating a general science of behaviour instead of the narrower anthropocentric model of pre-Watson psychology. He hoped that by rejecting hypothetical constructs such as references to the mind or consciousness (and avoiding anthropomorphism) he could pioneer a truly objective approach to understanding behaviour.

Mazur (2002) argues that the integral point of contention between Watson’s methodological behaviourism and Skinner’s radical behaviourism is that whilst Watson was critical of using unobservable events as psychological data, Skinner was more concerned with the inappropriate use of unobservable events in psychological theories. In other words, although Skinner accepted the methodological advances made by Watson, he did not feel it was necessary to reject unobservable (“private”) events but simply acknowledged that these events were also behaviours that needed explaining.

These intervening variables, Skinner believed, were not necessary in order to understand behaviour. For example, take the situation where a rat is deprived of water and then placed in an operant chamber where it can work for water: we would expect to find a relationship between the level of deprivation and the rate of lever pressing. Instead of claiming that the rate of lever pressing was a result of "thirst" caused by the hours of water deprivation, Skinner argued that the variable "thirst" adds nothing to our ability to predict the rat's behaviour, because our rule/equation works equally well without it (with the bonus of not needing to appeal to an unobservable event). The problem with hypothetical constructs is that we can easily deceive ourselves into thinking that we have uncovered the causal relation when in fact all we have done is create a plausible description of what preceded the behaviour, halting any inquiry into what the actual cause is – in effect, we have simply devised a "just-so" story (an ad hoc fallacy).
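Skinner's point can be made almost mechanically. In this toy sketch (the linear relationship and all its numbers are invented purely for illustration), routing the prediction through a "thirst" variable changes nothing:

```python
# Suppose, purely for illustration, that lever pressing grows
# linearly with hours of water deprivation.

def press_rate(hours_deprived):
    """Predicted lever presses per minute, straight from the
    observable variable (illustrative coefficient)."""
    return 2.0 * hours_deprived

def thirst(hours_deprived):
    """The intervening variable is just a relabelling of the
    same observable relationship."""
    return 2.0 * hours_deprived

def press_rate_via_thirst(hours_deprived):
    return thirst(hours_deprived)

# Both "theories" make identical predictions for every input, so
# the hypothetical construct carries no extra predictive power.
for h in (0, 4, 12, 23):
    assert press_rate(h) == press_rate_via_thirst(h)
```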

Gilbert Ryle described the use of intervening variables as an explanation for behaviour as a category mistake (Ryle, 1984). There are numerous category mistakes that one can make, such as mistakenly assigning an instance to the wrong category (for example, naming “carrot” as an example of “fruits”) – but the one that is pertinent to radical behaviourism is the use of a category label as an example of a specific instance of that category. Continuing with the example above, this would be like saying “vegetables” or, even worse, “fruits” as an example of “fruits”. “Vegetables” is clearly a category mistake as it is not an example of fruits as well as being a category label in itself, however, “fruits” is an even greater logical error as it is not only a category label but it is the very label of the group we are attempting to name instances of.

In terms of behaviours, this would be like changing the game to naming instances of intelligent behaviour. The game may begin well, with the players suggesting things such as chess playing, mathematical ability, literacy, and other various skills, but then suppose someone suggests “intelligence” as an example of intelligent behaviour. The mistake here is one of mentalism; that is, assuming that there is some underlying quality beneath the intelligent behaviours called “intelligence”, rather than understanding that “intelligence” is a label that describes these behaviours. This means that if someone attempted to understand a person’s chess playing behaviour by claiming it is caused by their intelligence, then this would be a category mistake – it adds nothing to our understanding and so cannot be used to make further predictions, and worst of all, it is circular as the intelligent behaviour is being “explained” by naming the category it belongs to.

In essence, the behaviourist position is that the study of human behaviour should progress in the same way all other sciences have: by moving away from speculation about possible internal states and towards describing observable events. We now realise that the Aristotelian approach of attributing the acceleration of falling objects to the increasing jubilance the object feels as it nears home is no longer a valid inference, and radical behaviourism argues that we should do the same in psychology.

Determinism and Free Will


The idea that there can be a science of behavior implies that behavior, like any scientific subject matter, is orderly, can be explained, with the right knowledge can be predicted, and with the right means can be controlled. This is determinism, the notion that behavior is determined solely by heredity and environment. (Baum, 2005, p.12)


The basic definition of determinism is presented by Baum above – if a behaviour can be predicted to the degree that it can be controlled by altering some of the variables, then that behaviour is described as determined. Free will, in contrast, is the position that despite genetic inheritance and learning histories, a person who behaved one way could still have chosen to behave another way. Sappington (1990) categorises the determinism/free will debate into three positions: hard determinism, soft determinism and libertarianism. Hard determinism, the idea that human behaviour is controlled entirely by factors external to the person, is contrasted here with soft determinism, which argues that free will is not necessarily incompatible with deterministic factors, as the person will always choose what they 'want' even if those desires are ultimately determined by external causes themselves. The soft determinist position avoids the free will issue by redefining the concept of freedom, and differs from the traditional libertarian definition of free will – that humans are exempt from natural laws, as the choices they make are not determined by external factors, and so people are viewed as active agents in the world.

Radical behaviourism is a hard determinist philosophy, as Skinner argues that human behaviour is a result of our phylogeny (evolutionary history) and ontogeny (environmental history), and it does not include a variable that can accommodate free will (Skinner, 1966). Although it may seem presumptuous for Skinner to assume that our behaviour is the result of genetic and environmental factors, this assumption is necessary not from a philosophical standpoint but from a scientific one. If we can describe and predict human choices without reference to a mind or free will, then such a notion would be extraneous to our understanding, and it would be more parsimonious to exclude it until such time as evidence arises in favour of free will, or until it becomes necessary to include factors other than the external ones.

The soft determinist position is characterised by Daniel Dennett (1984), who defined free will as the deliberation that occurs before a behaviour. This form of free will is compatible with determinism because the deliberation itself forms part of the causal chain that extends backward in time. This is in conflict with the libertarian position, which claims that free will is separate from any causality – a position that seems unfalsifiable, as any action, whether predictable or not, could be argued to have been a result of free will. A more intriguing argument for free will, however, is one that arises from chaos theory, which basically states that even though complex physical systems may be composed of identifiably deterministic parts, the system as a whole may be inherently unpredictable in some respects (Duke, 1994).

This approach essentially frames the concept of free will as ‘unpredictability’, which may superficially satisfy our definition of free will but Baum (2005) suggests that this unpredictability cannot be used as evidence of free will because there are many natural systems that we cannot perfectly predict but this does not mean that the weather, for example, has free will. The logical error, pointed out by Baum, is that although free will implies some degree of unpredictability, the converse is not equally true; that is, unpredictability does not imply free will.

Why is it important to consider the concept of free will when discussing the place of radical behaviourism in society? The first reason is that for a science of human behaviour to be possible we need to accept (conditionally, at least) that human behaviour is predictable, and to be able to reject claims that human behaviour cannot be understood because of the unpredictable nature of free will. The second is that in order to construct a society that runs efficiently, we must implement laws and practices that are compatible with the causal laws of human behaviour.

Responsibility and Justice


As discussed above, the idea of free will ultimately seems to be founded in a form of ignorance – that is, ignorance of the underlying contingencies. When we learn of a politician who has accepted a bribe, we are no longer confident that his actions are a result of his own free will. Equally, if we learn that an artist had supportive parents and a committed teacher, then we are less likely to wonder where his creative abilities came from. Skinner (1971) argued that the mentalistic concepts of credit and blame being awarded to certain actions, like the ones presented above, prevent us from solving important societal problems.

Baum (2005) notes that the concept of responsibility generally applies to the cause of events; for example, "The bad wiring was responsible for the fire" is equivalent to saying "The bad wiring caused the fire". So if we were to suggest that a person is responsible for the fire, then we are saying that this person caused the fire. The difference between these two explanations, however, is that we can readily accept the idea that the bad wiring was the result of environmental effects (such as weathering), but when we replace the wiring with a person, the causal chain seems to end there – that is, the person caused the fire because they wanted to. There seems to be a distinction made between objects and people, which becomes apparent when we consider the same situation as above, but this time with the person being threatened at gunpoint to start the fire. We may feel compelled to say that this person no longer has a choice, but the only thing that has changed is that the contingencies behind his actions are now obvious. In the original example, we may discover that the person had a traumatic childhood or that he is a pyromaniac – as we discover more and more about his history, the less likely we are to attribute his actions to his free will and the more likely we are to recognise the environmental factors that resulted in the fire.

So the notions of credit and blame are essentially just different ways of talking about causes, with approval or disapproval attached, respectively. When we are caught doing something we know we should not be doing, we try to assign blame to an environmental factor, but we tend to shy away from illuminating the environmental factors that resulted in us doing something good, so that the credit is attributed to ourselves. Baum (2005) suggests that the reason for this disparity is not difficult to uncover – if we assign blame to the environment we can avoid punishment, and if we resist credit being assigned to the environment then we can receive reinforcement in the form of social praise or even monetary rewards. Credit and blame, and ultimately responsibility as a whole, are just subjective ways of dividing up the causes of our behaviours.

This concept seems, superficially at least, a fairly simple one to grasp, so why has it not been adopted into societal practices already? Why do we still base our society around the notion of free agents? Staddon (2001) suggests that the reason is that if the criminal behaviour of a person is perfectly predictable, then punishment for such actions seems unjust, as it is not possible for the person to behave any other way than his heredity and environment dictate. This concept, however, is not as controversial as it may seem, as defence attorneys frequently suggest 'mitigating factors' that resulted in the defendant committing the crime. Staddon presents such a case where two brothers, accused of murdering their father, were defended in court through testimony that suggested a history of child abuse (this trial ended in a hung jury, but in the next trial they were convicted and sentenced to life imprisonment).

The obvious question that needs to be asked here, however, is what do we mean by punishment? If by punishment we mean the moral condemnation of an action, then we might be inclined to conclude that it is in fact unreasonable for us to judge a criminal, as his actions were determined by genetic and environmental factors. This appears to be the traditional usage of the term punishment, and the judicial system seems to be based on the concept of retribution. From a behavioural perspective, though, this is not what we mean by punishment. Baum (2005) argues that a greater recognition of extenuating circumstances will necessarily lead to a more practical system, one that is concerned with either changing the behaviour or, failing such an option, creating measures that ensure the safety of the public by removing the incorrigible criminal from society. (This will be discussed in more detail in the design of culture section.)

Values and Morals


As an extension of responsibility and justice, we have to consider what impact the determinist position of radical behaviourism has on values and morals. Most religions will argue that our morality and values are inherent to us and come from god, but as an explanation of human behaviour this falls short, as it is unfalsifiable and offers no predictive power. Lewis (1960) asserted that science can only describe how we do behave but not how we should behave. C. S. Lewis starts with the assumption that god exists and from there argues that science cannot tell us what god finds good or bad, and this much is true insofar as it is impossible for science to understand the whims and preferences of an entity for which there is no solid evidence. Science can, however, make judgements based on what people find good or bad by assuming moral relativism. Moral relativism is the antithesis of absolute morality, as it contends that values differ across people, places, times and contexts. The standard argument against relativism is that if there is no absolute standard of what is right and wrong, how are we to decide who is correct when definitions of right and wrong conflict?

Baum (2005), among others, argues that social conventions dictate who is correct. Good and bad are decided by the group, and this convention is then translated to the individual's situation. The question then becomes: how does the group decide what is to be considered good and bad, right and wrong? The answer from Skinner (1971) was, in a nutshell, that things considered good are positive reinforcers and things considered bad are punishers. Money is good, ill health is bad – the associations between the behaviours/events and the verbal behaviour are made as a function of the consequences they yield. This approach is not only plausible but can also account for what Lewis (1960) terms "The law of human nature". His argument begins by appealing to a situation that is common to all of us: when a conflict arises we tend to hear somebody make claims such as "That isn't fair, I was here first", or "How would you like it if someone treated you like that?" and so on. To this he says:

Now what interests me about all these remarks is that the man who makes them is not merely saying that the other man’s behaviour does not happen to please him. He is appealing to some kind of standard of behaviour which he expects the other man to know about. And the other man very seldom replies: “To hell with your standard.” Nearly always he tries to make out that what he has been doing does not really go against the standard, or that if it does there is some special excuse... It looks, in fact very much as if both parties had in mind some kind of Law or Rule of fair play or decent behaviour or morality or whatever you like to call it, about which they really agreed. And they have. (Lewis, 1960, p. 17)


Although Lewis takes this as evidence that there is an absolute morality, Skinner argues that this is simply the product of learning – when a child is playing with another child and shares his toys, then this behaviour is likely to be followed by various forms of reinforcement such as social praise from the parents as well as possible reciprocal reinforcement in the form of a toy being shared in exchange.

Design of culture


If we were to assume that radical behaviourism is the philosophy upon which society should be formed, then what exactly does that mean? Essentially, it is an extension of the scientific movement, the science of human behaviour – the idea that systems such as governments and prisons should be based on evidential claims and should be able to readily change in the light of new findings. The justice and penal system, as mentioned earlier, should not be based on mentalistic notions of retribution and "punishment"; instead it should be based on changing problem behaviours and creating a more functional society. For example, there are a number of experimental studies looking at the effectiveness of punishment that have not been utilised in societal institutions, partially due to the impracticality of some, but also due to a refusal to accept that human behaviour is predictable and controllable, to some degree at least.

In particular, we need to consider factors such as the manner of introduction and the immediacy of the punishment implemented, to assess whether ours are the most effective methods we can use. Azrin and Holz (1966) suggest that if the goal is to obtain a large, permanent decrease in a problem behaviour then the punisher must be delivered immediately and at its full intensity, to avoid habituation to successive mild punishers. Azrin, Holz, & Hake (1963) demonstrated that an immediate large shock of 80 volts following a pigeon's keypeck was sufficient to result in a complete suppression of that behaviour; however, if that intensity was not used from the outset and the punishment began at lower intensities that gradually increased, then the pigeons would continue to respond even when the intensity was raised to 130 volts. This is an issue with the current system, where infringements are usually punished with warnings or minor fines that increase as the number of infringements increases.

The immediacy of punishment is also a key factor in suppressing behaviour. Baron, Kaufman, and Fazzini (1969) found an orderly relationship between punishment delay and response rate when looking at rats responding on a Sidman avoidance task. The delay between a response and punishment was varied between 0 and 60 seconds, and they found that the more immediate the punishment, the greater the decrease in responding. The reason for this is probably the creation of effective contiguity between the response and the consequence – the less delayed the punishment, the less likely the consequence will be attributed to an intervening, but irrelevant, response. The implications this holds for things such as speeding fines should be readily apparent: the greater the delay between the act of speeding and receiving the ticket (then having to pay it), the less effective the speeding ticket is in actually changing behaviour. Instead, a much more immediate form of punishment is necessary – although possibly impractical, it may be prudent to install ticket devices in cars that will automatically print out a ticket the instant the speeding camera records an infringement.
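As a rough illustration of such a delay-of-punishment gradient (the exponential form and the rate constant here are assumptions chosen for the sketch, not Baron et al.'s fitted data):

```python
import math

def suppression(delay_s, k=0.05):
    """Illustrative delay-of-punishment gradient: the fraction by
    which responding is suppressed, assumed to fall off
    exponentially with the delay (in seconds) between response
    and punisher. Both the form and k are invented for
    illustration."""
    return math.exp(-k * delay_s)

# Immediate punishment suppresses fully; a one-minute delay
# leaves the behaviour largely intact.
for delay in (0, 10, 30, 60):
    print(delay, round(suppression(delay), 2))
```

The qualitative point is the monotonic decline: each added second of delay erodes the contiguity between response and consequence.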

Punishment is not the only tool of the behaviour analyst, though, and reinforcement is also a powerful manipulator of behaviour. In education settings, for example, a study by Thomas, Presland, Grant and Glynn (1978) suggested that teachers tend to use very low levels of approval (approximately once every five minutes) and about three times as many disapproving statements. These findings lead us to question not only what the reinforcement contingencies are for the children in the classroom, but also what environmental factors are influencing the teachers' behaviour. The Thomas et al. study argued that the attention given to off-task students acted both as reinforcement for the child to continue being off-task and as a reinforcer for the teacher, as their reprimand resulted in momentary on-task behaviour.

Conclusion


By understanding the deterministic nature of human behaviour and accepting that it can be predicted and controlled, we move closer toward building a more functional society – from restructuring the penal system to more effectively suppress problem behaviours, to creating a more productive learning environment that benefits both the student and the teacher. The philosophy of radical behaviourism is not necessarily about pushing operant techniques on society, but rather it is the notion that human behaviour should be understood in the same way we understand the rest of our world and through experimentation and science we can build on our knowledge to improve our way of life.

REFERENCES

Azrin, N. H., & Holz, W. (1966). Punishment. In W. K. Honig (Ed.), Operant behavior: Areas of research and application. Englewood Cliffs, NJ: Prentice-Hall.

Azrin, N. H., Holz, W., & Hake, D. F. (1963). Fixed-ratio punishment. Journal of the Experimental Analysis of Behavior, 6, 141-148.

Baron, A., Kaufman, A., & Fazzini, D. (1969). Density and delay of punishment of free-operant avoidance. Journal of the Experimental Analysis of Behavior, 12, 1029-1037.

Baum, W. M. (2005). Understanding Behaviorism: Behavior, Culture, and Evolution (2nd ed.). Oxford: Blackwell Publishing.

Duke, M. P. (1994). Chaos theory and psychology: Seven propositions. Genetic, Social, and General Psychology Monographs, 120, 267-286.

Lewis, C. S. (1960). Mere Christianity. New York: Macmillan.

Mazur, J. E. (2002). Learning and Behavior (5th ed.). New Jersey: Prentice Hall.

Ryle, G. (1984). The Concept of Mind. Chicago: University of Chicago Press.

Sappington, A. A. (1990). Recent Psychological Approaches to the Free Will Versus Determinism Issue. Psychological Bulletin, 108, 19-29.

Skinner, B. F. (1974). About Behaviorism. New York: Random House.

Skinner, B. F. (1971). Beyond Freedom and Dignity. Middlesex: Penguin Books Ltd.

Skinner, B. F. (1966, September 9). The Phylogeny and Ontogeny of Behavior. Science, pp. 1205-1213.

Staddon, J. E. (2001). The New Behaviorism: Mind, Mechanism, and Society. Philadelphia: Psychology Press.

Thomas, J., Presland, I., Grant, M., & Glynn, T. (1978). Natural rates of teacher approval and disapproval in grade 7 classrooms. Journal of Applied Behavior Analysis, 11, 91-94.

Re: 3rd Monthly Science Writing Competition - Submissions

#18  Postby palindnilap » Apr 10, 2011 8:36 am

(Not yet) understanding complex systems

A complex system is composed of a large set of interactors called agents, whose individual behaviours are not necessarily complex in themselves, but who interact repeatedly enough for the behaviour of the whole system to be hard to predict. Complex systems abound; here are some examples (in brackets, the agents of the system):

  • A cell's metabolism (biochemicals)
  • The immune system (immune cells)
  • The brain (neurons)
  • An ant colony (ants)
  • An ecosystem (living beings)
  • A language (speakers)
  • The global economy (people)
The questions we ask about such systems are generally not about predicting what happens to any specific agent, but rather about predicting what happens to some observed properties of the whole system, called emergent properties. Immune responses, emotions, biodiversity or the Wall Street index are examples of interesting emergent properties of the above-mentioned systems. Nevertheless, emergence is an often misunderstood concept, and in order to convince the reader that it is not just a politically correct term for "woo", I will devote the next section to a vivid illustration of it.

1. Emergence in the Game of Life

Conway's Game of Life is a special case of a cellular automaton, specified by a set of rules operating on configurations of the cells of a 2-dimensional grid. Every cell is in one of two states, Off or On, or more vividly, dead or alive. The rules specifying what happens to an individual cell are as follows:

  • If a dead (empty) cell has exactly three living neighbours (diagonal neighbours included), it becomes alive (a birth occurs).
  • If a living cell has strictly less than two living neighbors, it dies (death by isolation).
  • If a living cell has strictly more than three living neighbors, it also dies (death by congestion).
  • Otherwise, the cell stays in the same state it was before.
The "game" is to observe what happens to a given configuration when those rules are applied simultaneously and repeatedly to all the cells in the grid. Here is a first example (The images are borrowed from the Wikipedia Game of Life page, whose link I recommend to follow for pictures and animations) :

[Image: a 2×2 square of four living cells]

As the reader can check, every living cell has exactly three living neighbours, and no dead cell has more than two, which means that nothing happens and the configuration persists forever. This is our first, trivial emergent pattern, which we will name the "block".

[Image: a row of three living cells, alternating with a column of three]

That one is already a little more fun. At each turn two cells die and two other cells are born, which produces an alternating pattern. Although it is not static like our first pattern, I hope that you have no difficulty in naming this pattern the "blinker".

[Image: a five-cell pattern shown over four successive generations]

Here is where the fun really begins: this pattern looks like it takes four steps to reappear unaffected, except that it has "moved down". But what has moved down? Certainly not any cell: they have all stayed in their places, some coming to life and others dying. Nevertheless, I will play the "emergence" card and christen this pattern the "glider", wherever it appears on the grid.
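For readers who want to see the glider "move" for themselves, the rules above fit in a few lines of code (a minimal sketch, representing a configuration as a set of live-cell coordinates on an unbounded grid):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of
    (x, y) coordinates of living cells."""
    # Count, for every cell adjacent to a living cell, how many
    # living neighbours it has.
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Births (exactly 3 neighbours) and survivals (2 or 3).
    return {c for c, n in neigh.items()
            if n == 3 or (n == 2 and c in live)}

# The standard glider: after four generations the very same shape
# reappears, shifted one cell diagonally -- even though no
# individual cell has moved.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```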

As you can imagine, some people have taken the game further:
An eater can eat a glider in four generations. Whatever is being consumed, the basic process is the same. A bridge forms between the eater and its prey. In the next generation, the bridge region dies from overpopulation, taking a bit out of both eater and prey. The eater then repairs itself. The glider usually cannot. If the remainder of the prey dies out as with the glider, the prey is consumed. [Poundstone 1985, quoted in Dennett 2003]

Is this just bullshit? Consider now that by using gliders, eaters, guns, and many other sophisticated patterns, Conway and others have managed to build a gigantic configuration that acts like a universal computer [Chapman 2002]. That is a pretty impressive achievement, one that would have been impossible without accepting that, for instance, "gliders" really exist!

2. Features

The bad news is that complex systems do not seem to share many common features. Here are two of them, the first pretty basic and the second more sophisticated.

Oscillations

A typical feature of a complex system is a sustained oscillation that is endogenous, i.e. not explained by external inputs. The Lotka-Volterra predator-prey model is the prime example, where the origin of such oscillations is very well understood. When predators are numerous, prey becomes scarce, predators starve and after some time die off, and the prey can flourish again and fuel a new generation of predators... and the cycle of life makes another merry-go-round. By analogy, when struggling to explain similar oscillations in, say, the global economy, it makes sense to look for such feedback loops.
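Such a feedback loop is easy to reproduce numerically. Below is a toy forward-Euler simulation of the Lotka-Volterra equations; the coefficients and starting populations are illustrative choices of mine, not fitted to any real ecosystem. The prey count keeps rising and falling with no external input driving it:

```python
# Toy Lotka-Volterra simulation by forward Euler integration.
# All coefficients are illustrative, not fitted to any real population.
def simulate(prey=10.0, pred=5.0, a=1.1, b=0.4, c=0.4, d=0.1,
             dt=0.001, steps=60000):
    history = []
    for _ in range(steps):
        prey += dt * (a * prey - b * prey * pred)  # prey: growth minus predation
        pred += dt * (d * prey * pred - c * pred)  # predators: feeding minus starvation
        history.append((prey, pred))
    return history

prey_values = [p for p, _ in simulate()]
# The prey population keeps swinging around its equilibrium value (c/d = 4)
# instead of settling down: an endogenous oscillation.
print(min(prey_values), max(prey_values))
```

(Forward Euler slowly inflates the orbit, so for serious work one would reach for a proper ODE solver; for seeing the oscillation it is quite enough.)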

Power laws

Image

Look at the picture above. The blue line shows how many earthquakes of each magnitude have happened in the last century. Note that both axes are logarithmic, since the Richter index is itself logarithmic (an earthquake of magnitude 6 is 10 times stronger than one of magnitude 5, and 100 times stronger than one of magnitude 4, etc.). On those logarithmic axes you see a rather nice straight line (indeed, maybe just a bit too nice to be completely true).

That would already qualify as a curiosity if it concerned only earthquakes. But now replace earthquakes with people, and magnitude with fortune: you again get roughly a straight line. Or take English words ranked by their occurrences in written text [Zipf 1935]. Or market days, ranked by the variations of the price of cotton [Mandelbrot 1997]. Straight lines again. A distribution like the above is said to follow a power law, and power laws are pretty ubiquitous in complex systems. In fact, they are sometimes said to be a signature of complex systems.

Technical stuff for statisticians, you might think. But in fact there is an essential qualitative difference between a distribution following a power law and the classical Gaussian bell curve. In the latter case, exceptional values can be discarded because they are rare enough to affect the mean only infinitesimally. Not so with the power law: however exceptional the values you consider, their relative weight always remains on a par with that of the less exceptional values. As a consequence, the usual statistical guarantees, such as the law of large numbers, can fail for a power-law distribution with a heavy enough tail.
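To make the contrast concrete, here is a quick numerical sketch of my own: draw a large heavy-tailed sample (Pareto, with tail exponent 1.1, an arbitrary choice on my part) and a Gaussian sample of the same size, and compare how much of the grand total the single largest observation carries.

```python
import random

random.seed(0)
N = 100_000

# Heavy-tailed sample: Pareto with tail exponent alpha = 1.1 (finite mean,
# infinite variance) versus a thin-tailed Gaussian sample of the same size.
pareto = [random.paretovariate(1.1) for _ in range(N)]
gauss = [abs(random.gauss(0.0, 1.0)) for _ in range(N)]

def top_share(xs):
    """Fraction of the grand total contributed by the single largest value."""
    return max(xs) / sum(xs)

print(f"Pareto top share:   {top_share(pareto):.4%}")
print(f"Gaussian top share: {top_share(gauss):.4%}")
# In the Gaussian sample the biggest observation is a negligible sliver of the
# total; in the Pareto sample one single observation carries a sizeable share.
```

That one observation out of a hundred thousand can dominate the sum is exactly why averaging away the "outliers" is the wrong move for power-law data.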

3. Tools

How are we to tame complex systems? The obvious empiricist answer would be that since we can't predict the exact outcome of a complex system, we should approach it statistically, by computing means, correlations and variances. But we have just seen why it won't work that way. In the case of complex systems following power laws, what we usually call statistical noise just doesn't average out. Except in some gentle cases, doing bell-curve statistics might do more harm than good. [Taleb 2007]

Since the mathematics seems to be of little help, what we are left with is computer simulation. The obvious approach is to model the behavior and interactions of agents and to let the system unfold. That is what is done most of the time, but the problem is that the behavior and interactions of the agents can always be modeled in many different ways, which yield qualitatively very different results.

In contrast, the genetic algorithm [Holland 1975] is a very efficient tool when we don't know everything about the structure of our system (and we usually don't), but we know or suspect that it has been under some selection pressure. It mimics evolution by making successful agents interbreed through the crossing-over of their "genetic" code. Although it seems at first glance to produce a lot of overhead, the genetic algorithm has proved surprisingly efficient at finding good (though not optimal) solutions to hard problems [Mitchell 2009].

4. Biology

Talking about a complex systems approach in biology sounds a bit like a tautology. As the reader will have noticed, the complex systems approach intends to build on the tremendous explanatory power that the theory of evolution has brought to what might be the most complex of all of nature's phenomena: life.

So what can complex systems give back to biology? In a widely cited work, complexity guru Stuart Kauffman has argued that evolution is not playing solo, and that non-evolutionary self-organizing processes have had an essential role to play in the genesis, as well as in the flourishing, of life on Earth [Kauffman 1993]. While the significance of Kauffman's findings remains quite controversial, there is one fact in the theory of evolution that came as a relative surprise at the time, and that the theory of complex systems would clearly have predicted had it already been laid down.

That fact is Eldredge and Gould's observation of punctuated equilibrium [Eldredge 1972]. What they discovered in the fossil record was that the rate of evolution was not more or less constant modulo statistical noise, as had been somewhat lazily conjectured. On the contrary, much of biological diversity has appeared in sudden bursts, the most famous of them being the Cambrian explosion. But that is exactly what one would have expected under the assumption that speciation events follow a power law.

5. Earth Science

I have already mentioned the power law found in the distribution of seismic events across magnitudes. The fact that the same distribution has been found in simulations of avalanches in a pile of sand suggests a common mechanism of accumulation rather than a simple coincidence. [Bak 1996]

But the full-blown complex systems approach began with Lovelock and Margulis' Gaia Hypothesis [Lovelock 1974]. Immensely controversial at first, the hypothesis took some adjustments, and a simple rebranding under the more politically correct "Earth System Science", in order to get recognized as a promising avenue for research.

The idea is that in normal times the global conditions of the Earth system (temperatures, chemical compositions) vary much less than they would in the absence of life. That is due to various homeostatic (regulating) feedback loops between the ecosystem and its environment. In addition to specific feedback loops that have been showcased, more and more realistic computer models have been designed to show how such welcome feedback loops can appear among organisms subject to natural selection, even though none of the individual organisms directly increases its fitness by its regulating effect on its environment. [Downing 2004]

6. Economics

Economics seems to be badly in need of a complex systems approach. This is because economics has a problem. For the sake of solving the maths involved, its mainstream theories have traditionally worked under a bunch of less-than-realistic assumptions. The rationale was that working with a decent approximation of reality should yield a decent approximation of how the economy works. But chaos theory strongly disagrees with that last assumption, as has been known for quite some time, thanks for example to the Second Best Theorem [Lipsey 1956] and the famous butterfly effect [Lorenz 1963].

So what do complex systems have to offer? I have already mentioned some quite important power laws (about stock markets and the distribution of wealth), and what follows is the outcome of a nice simulation producing endogenous oscillations in the stock market [Farmer 2004].

The agents of that model are technical traders. A technical trader is someone who doesn't care about the news (external inputs) and concentrates on finding and exploiting patterns in the market itself. According to the classical theories, the exploitation of such patterns is needed in order to correct the errors of the market and to push it back towards equilibrium. In that view, Farmer's simulation first ran exactly as expected, but after the market's initial inefficiencies had more or less levelled out, wild oscillations began to appear, apparently due to each technical trader trying to exploit the information signal created by the others. Any resemblance to actual persons or facts is, of course, purely coincidental...

7. Psychology and artificial intelligence

The brain is a staggeringly complex system, and it would be quite fatuous to claim spectacular successes for complex systems in that field yet. Nevertheless, the computer modeling of the mind, aka artificial intelligence, has long been the typical approach of cognitive psychology, as in Marvin Minsky's "Society of Mind" model, a typical complex system of relatively dumb agents exhibiting complex emergent behaviors. [Minsky 1987]

But the techniques of complex systems have touched the modeling of behaviorist artificial intelligence as well. That is a surprising twist since the behaviorist black-box approach is in many ways the converse of the complex systems approach.

The idea of behaviorist AI is that we shouldn't make any assumption about how the brain is made, but should design an AI system that learns by reinforcement, as the real brain is known to learn. Funnily enough, while a head-on approach to behaviorist AI seems to lead to intractable performance problems [Tsotsos 1995], a much more efficient route to behaviorist AI is given by... neural networks evolved through the genetic algorithm! [Miikkulainen 2007]

On a more speculative note, consciousness is a prime candidate for an emergent phenomenon, since it seems to consist in integrating a lot of low-level processes into an experience that feels like one single impression. An elegant theory by the late Francis Crick (yes, the DNA guy) and Christof Koch, backed by at least some neurological data, suggests that consciousness could be an emergent pattern of coalescing brain waves (i.e. patterns of synchronized neuron firing). [Crick 2003]

8. Fundamental Physics

Complex systems as the key to fundamental physics? Have I lost my mind? To be sure, the following section verges on pure science fiction. If it exceeds your tolerance for speculation, then consider it a pretty thought experiment.

The complex systems approach has permeated physics through the study of phase transitions and of thermodynamically open systems, but here I will be going with a much more ambitious idea called "Pancomputationalism" or "Digital Physics".

Digital Physics works under the assumption that, although we experience it as continuous, the world is in fact digital at its finest scale, usually conjectured to be the Planck scale, some 20 orders of magnitude below the world of fundamental particles. That means that the world resembles - or is, if you are a theist - a simulation by a gigantic super-parallel computer, with some abstract computation taking place simultaneously at every point of the Planck-scale world. In that model, all the things we know of the world, including space, time and the particles of quantum physics, would be emergent patterns of that ur-computation.

My competence at discussing the foundations of physics will rapidly hit a wall if I go further into the details of Digital Physics. What I really want to show here is how a very nice computer simulation is able to evolve emergent particles (of a kind). [Das 1994]

Image

The picture above represents the run of a particular one-dimensional cellular automaton (see section 1). Each horizontal line represents the state of the automaton at one moment, with the vertical axis representing time running from top to bottom.

That cellular automaton has been evolved through the genetic algorithm to solve a problem called the "majority rule". Out of a random initial configuration of white and black cells (at the top), it produces a string of black cells at the bottom because the initial configuration had more black cells than white ones. The automaton is effectively performing a large-scale vote using only interactions between direct neighbours. If you think that was an easy problem, well, I can tell you it wasn't.
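To see why it is hard, consider the naive rule that first comes to mind: each cell simply adopts the local majority of itself and its two immediate neighbours. A quick sketch (mine; the ring length, density and seed are arbitrary) shows that this rule freezes into a patchwork of stripes instead of reaching a global verdict, which is why a cleverer, evolved rule like the one in [Das 1994] is needed:

```python
import random

random.seed(1)
N = 99  # odd ring length, so a strict global majority always exists

def local_majority_step(cells):
    """Each cell adopts the majority colour of itself and its two neighbours."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

initial = [1 if random.random() < 0.55 else 0 for _ in range(N)]
cells = initial
for _ in range(200):
    cells = local_majority_step(cells)

# Any run of two or more equal cells is frozen under this rule, so the grid
# gets stuck in alternating black and white blocks: the local vote never
# propagates into a global answer of all-black or all-white.
print(sum(initial), sum(cells), N)
```

The evolved automaton gets around this by letting information travel across the grid as the moving frontiers discussed below, rather than by voting locally.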

Image

But the most interesting thing is how the automaton does it. Here is the same picture again, with the emphasis on the frontiers between the homogeneous zones. With some stretch of the imagination, the diagram could be interpreted as the space-time diagram of real physical particles moving and colliding in a one-dimensional space.

Of course, nobody is even remotely suggesting that the example above reflects what real-life particles are made of. But if we push the analogy, we could at least imagine that real-life particles could also be emergent phenomena of an evolved digital-physics model computing at a finer scale. And conveniently, physicist Lee Smolin has already proposed a theory of the universe as having evolved under a selection pressure (namely, for the maximization of the number of black holes). [Smolin 1999]

9. The future

The main question about the complex systems approach is: will it work? Will it go beyond a mere description of coincidences?

Complex systems theory is not the first craze about systems that the 20th century has seen. Wiener's cybernetics, control theory and general systems theory all tried to treat complex systems as objects of cross-disciplinary study. While those approaches have had some applications in engineering or management, not much of what we would call real scientific knowledge has come out of them.

So why should we think that the 21st century will be different? The obvious answer is Moore's law and the tremendous boost in computing power that we are experiencing. Recall that, by their very structure, with a myriad of agents making decisions at the same time, complex systems are very demanding computation-wise, even more so on the serial von Neumann architecture that modern computers are only slowly beginning to depart from.

Of course, running the simulation is one thing, and getting a theory out of the results is quite another. Though it is being put to very good use by today's scientists, our fantastic computing power is quite a new actor in science, and I doubt that we already have all the epistemological tools needed to extract everything we can from those heaps of numbers.

So the future of complex systems seems very uncertain from here. Which shouldn't surprise us, since the scientific world is itself a very complex system, so that any projection of what researchers in complex systems will really achieve is bound to be doubtful at best.

On the other hand, given the ubiquity and the importance of complex systems for humanity, even modest successes could lead to applications well beyond our wildest imaginings. And remember the power laws: the potential scope and impact of forthcoming discoveries could be huge enough for our wild speculations to deserve consideration, even if our assessment of their probability is not that high.

Bibliography

[Bak 1996] Bak, P., How Nature Works, Copernicus, 1996

[Chapman 2002] Life's Universal Computer, http://www.igblan.free-online.co.uk/igblan/ca/

[Crick 2003] Crick, F., Koch, C., A Framework for Consciousness, Nature Neuroscience, 2003

[Das 1994] Das, R., Mitchell, M., Crutchfield, J.P., A Genetic Algorithm Discovers Particle-based Computation in Cellular Automata, in Davidor et al., Parallel Problem Solving from Nature, Springer, 1994

[Dennett 2003] Dennett, D., Freedom Evolves, Penguin Books, 2003

[Downing 2004] Downing, K., Gaia in the Machine, in S.H. Schneider et al., Scientists Debate Gaia, The MIT Press, 2004

[Eldredge 1972] Eldredge, N., Gould, S.J., Punctuated Equilibria: an Alternative to Phyletic Gradualism, in T.J.M. Schopf, ed., Models in Paleobiology, Freeman Cooper, 1972.

[Farmer 2004] Farmer, D., Gillemot, L., Lillo, F., Mike, S., Sen, A., What Really Causes Large Price Changes?, Quantitative Finance, 2004

[Kauffman 1993] Kauffman, S., The Origins of Order, Oxford University Press, 1993

[Lipsey 1956] Lipsey, R.G., Lancaster, K., The General Theory of Second Best, The Review of Economic Studies, 1956-1957

[Lorenz 1963] Lorenz, E.N., Deterministic Nonperiodic Flow, Journal of the Atmospheric Sciences, 1963

[Lovelock 1974] Lovelock, J.E., Margulis, L., Atmospheric Homeostasis by and for the Biosphere - The Gaia Hypothesis, Tellus, 1974

[Mandelbrot 1997] Mandelbrot, B., Fractals and Scaling in Finance, Springer, 1997

[Miikkulainen 2007] Miikkulainen, R., Evolving Neural Networks, Proceedings of the GECCO conference 2007

[Minsky 1987] Minsky, M., The Society of Mind, Simon and Schuster, 1987

[Mitchell 2009] Mitchell, M., Complexity: a Guided Tour, Oxford University Press, 2009

[Smolin 1999] Smolin, L., The Life of the Cosmos, Oxford University Press, 1999

[Taleb 2007] Taleb, N.N., The Black Swan, Penguin, 2007

[Tsotsos 1995] Tsotsos, J.K., Behaviorist Intelligence and the Scaling Problem, Artificial Intelligence, 1995

[Zipf 1935] Zipf, G.K., The Psychobiology of Language. Houghton-Mifflin, 1935
Last edited by palindnilap on Apr 11, 2011 8:29 am, edited 2 times in total.

Re: 3rd Monthly Science Writing Competition - Submissions

#19  Postby Latimeria » Apr 10, 2011 6:09 pm

Augmentation: The Availability and Acceptance of Germinal Choice Technologies

“Almost certainly, at some point a combination of scientific knowledge, technology, reduced risks, increased benefits, and societal acquiescence will cross a threshold, allowing human genetic engineering to proceed” – Lee M. Silver[1]

I wish to remind the reader before beginning that the spirit of this competition is to predict, and that this essay is thus neither prescriptive nor proscriptive in its intent. I feel this reminder is necessary primarily because many of the possibilities that will be raised often inspire strong reactions regarding the wisdom or ethics of particular courses of action. It would be a foolhardy endeavor to provide thorough reasoning in support of the forecasts made herein without cataloguing at least a few of the potential and likely arguments that will arise along the way, especially those that would support my prediction, but that is not to be confused with the author's endorsement of all such arguments. Whether the prospect of germ line intervention fills you with unbridled optimism for the future of our species, or whether your reaction is far more guarded and cautious, there can be no doubt that our species has gained, and is gaining, profound new abilities that offer us the power to guide our own evolution. Questions abound regarding how closely the aims we intend would align with actual outcomes, and such skepticism is a healthy attribute when wielding prodigious and largely untested powers. There has been enough change and variety in the hominid lineage of sufficient evolutionary significance to assign a variety of taxonomic identities, and there is enough power contained in our developing technologies to fundamentally change our very genetic constitution. The sheer power and trajectory of our capabilities to manipulate molecular genetics has led Juan Enriquez to suggest that Homo sapiens will undergo radical enough change in its capabilities and actions to be worthy of a new title: "Homo evolutis: Hominids that take direct and deliberate control over the evolution of their species… and others." [2] The scope of this essay is to focus on the self-directed aspects of this technology.

The use of gene therapy to manipulate somatic cells for medical purposes is already well underway. We have a variety of tools for this purpose, including the engineering of lentiviruses to be used as vectors for gene delivery. Harnessing the existing power of certain viruses to sneak their way into cells and integrate genes into the chromosomes of the host cell, doctors at the Necker-Enfants Malades clinic in Paris were able in 2000 to successfully deliver a functional allele to compensate for a mutant allele in patients with X-SCID. This genetic disease, which disables the immune system and manifests a host of symptoms similar to AIDS, was remedied by delivering the gene necessary to fix the problem, making this a notable early success in therapeutic genetic medicine. However, the viral delivery vector did not demonstrate enough site-specificity in where the genes were integrated into the chromosome. As a result, normal genes were disrupted, in a process called insertional mutagenesis, and in a few cases this disruption of other genes actually caused leukemia, which tainted the success with tragedy [3]. The work of Mario Capecchi and other scientists has looked to circumvent this problem by using the natural process of homologous recombination to safely and precisely integrate a desired gene into a chromosome at a particular location. Aaron Klug and his colleagues at Cambridge University then began engineering proteins, called zinc finger nucleases, which "search out the desired DNA sequence and home in on it like a guided missile, increasing the efficiency of homologous recombination a thousand fold." [4] This approach has since been used successfully for genetic manipulation in many different organisms, including mammals. The recent news of successful in vitro spermatogenesis [5] might provide even more efficient and safe methods of genetic engineering through the germ line.

To further establish the trajectory of human germ line engineering, and to tentatively project its course between the present and the beginning of the 22nd century, let us look at what is currently available and what has already been accomplished. Pre-implantation Genetic Diagnosis (PGD) has been brought sharply into the public consciousness by science fiction such as the movie Gattaca, and by the entrance of the term "designer babies" into common parlance. Gattaca serves as a compelling warning of dystopian possibilities, and underscores the fact that our genes are not the sole determinants of our fate. Our environment and our choices will always play an important role, including in many of those complex areas where we might seek to manipulate our biology.

Still, the use of existing in vitro fertilization (IVF) techniques typically produces more viable embryos than will be implanted, and the use of PGD to perform genetic screening using a combination of biopsy procedures can offer informed choice and the possibility of selecting which embryo(s) to implant. Particularly for those with a family history of a single-gene Mendelian disease with very high penetrance, PGD can be a tantalizing option, whether as an adjunct to infertility treatments in progress or as an elective procedure unrelated to infertility.

Image
Figure 1: Cell removal from embryo for biopsy [6]

If a woman intending to have a child is a known carrier for Lesch-Nyhan Syndrome, will the assorted governments of the world all tell her that she must roll the dice with traditional conception, and possibly give birth to a child who will undergo a short life of unimaginable suffering, or will some allow her the choice to avoid this? Certainly a ban would not exist everywhere; arguments from human compassion, and even a new moral imperative to protect children using assisted reproductive technologies, will win the day in some arenas. PGD has already been approved by the Human Fertilisation and Embryology Authority in Britain to allow couples to avoid passing a variety of single-gene disorders to their children. The list now includes Beta Thalassaemia, Cystic Fibrosis, Duchenne Muscular Dystrophy, Huntington's disease, Haemophilia, and a variety of genes that can predispose individuals to developing cancer. [7]

But the possibilities of germline intervention go far beyond the prevention of transmitting harmful alleles. PGD could be used to screen for "savior siblings", in which HLA matching is the goal of the procedure. This could, for example, allow parents to have a child that is a tissue-match with an older sibling suffering from a disease treatable by hematopoietic stem cell transplantation. It could be used for gender selection, and for many other phenotypic characteristics as well.

The outcomes possible with the PGD technologies discussed thus far would still involve direct inheritance of all gene variants from both parents, just with conscious control of precisely which variants. However, a Science Magazine editorial in 2001 alerted a wider audience to a subtle form of germ line engineering that has already taken place dozens of times in assisted reproductive technology, and which technically provides the offspring with genes from three parental sources. By using ooplasm from a donor egg, usually to increase the viability of an older woman's eggs, mtDNA from a third party is actually transferred to the offspring. At a glance this may seem relatively innocuous, but it does symbolize a significant barrier that has already been crossed, even if "inadvertently", as the editorial title suggested. It ought to make one pause and reconsider just how many biological parents a child could have, and even whether biological parents must meet the traditionally assumed gender requirements for reproduction.

But here is where things begin to get really interesting. In 1997, researchers reported the successful creation and maintenance of an entirely synthetic chromosome that was added to a human fibrosarcoma line and remained stable and active for six months. [8]

Image
Figure 2: The arrow in the above figure points to a synthetic human microchromosome.

If a stable synthetic chromosome were added to an embryo, it could theoretically contain any genetic information the engineers of the chromosome wished to put inside, and could be engineered to control gene expression, even in a tissue-specific manner. It would have a large capacity for information (larger than other available vectors) and would not require insertion of genetic material into any existing chromosome. In fact, having all these genes in one place could make it quite easy to replace and upgrade synthetic chromosomes during reproduction each generation. If prostate cancer develops, a few genes on your artificial chromosome might alert you to the medical problem by turning your urine blue. A medical professional could then trace the cause and administer a chemical, otherwise inert, to activate a synthetic cell receptor and initiate programmed cell death of the unhealthy tissue.
Sometimes those discussing genetic engineering as applied to humans set up a table with two dichotomies: somatic versus germline engineering, and therapy versus enhancement. The line between therapy and enhancement is a blurry one at best. Is it prevention or enhancement when parents intervene to avoid disease-related genes with only moderate penetrance in late adulthood? If parents wish to transmit a variant of the CCR5 gene, which has been shown to confer resistance to HIV, in which category does that fall? We could discuss these issues as mere possibilities, but a projection regarding what will actually occur would be wise to take into account the relative degree of public support for using PGD for various purposes. Figure 3 shows the results of polls conducted in 2006 through The Johns Hopkins University to uncover public attitudes among Americans towards the application of PGD for a variety of purposes.

Image

Figure 3: Hudson. PGD: public policy and public attitudes. Fertil Steril 2006. [9]

Perhaps, as Ronald Green has defended at length, many of the earliest applications and arguments over acceptability will be related to athletics, and will epitomize the blurring of the distinction between therapy and enhancement. [4] The use of erythropoietin (EPO) as a performance-enhancing drug has been banned by the World Anti-Doping Agency. [10] We all produce this hormone, which stimulates erythrocyte production and affects the amount of oxygen our blood can carry, and it is generally acceptable practice to perform high-altitude training to stimulate it; and when an athlete like Eero Mäntyranta inherited a rare allele with the same effect, that of course did not bar him from racking up Olympic medals in cross-country skiing. Direct injection of exogenous EPO, or the use of an engineered virus as a delivery vector to artificially deliver the EPOR gene that helped Mäntyranta, are both generally held to be unacceptable in athletics. However, the genetic lottery that contributes to athletic success does not create a level playing field either. Many other complex traits, such as cardiovascular health, height, musculoskeletal frame, and lung capacity, all affect athletic prowess, and all vary between individuals. Some compounds normally considered performance-enhancing drugs, like HGH, are administered to those identified as having a particular disorder. It is important to note here that the lines drawn between what we consider to be disease states, deficiencies, "normal" phenotypes, and enhancements are extremely tenuous. If you look at a complex trait, such as learning disabilities, a population-level analysis will give you a bell curve for standard measures of memory and intelligence. Diagnosing the lower end of the bell curve as having a disability is in essence a comparison, a diagnosis that is relative to the rest of the population's distribution. Beginning to implement these new therapeutic strategies would itself shift the bell curve for such polygenic traits and thus create a new "norm" for comparison. Where, then, is the line drawn for calling a phenotypic state a "disorder"?

Many of these methods have been demonstrated to be attainable, and the cost of genetic screening is on course to become affordable to an increasingly large segment of the population. While many of us are familiar with Moore's Law as it applies to the steady exponential increase in computing power, the precipitous decrease in the cost of genomic sequencing, in large part dependent upon those gains in computing power, led Richard Dawkins to describe the trend shown below as the "Son of Moore's Law" in an essay of that title. [11] Figure 4 shows this decrease over time, and the progression towards the "$1000 Genome", a somewhat arbitrary figure which many nonetheless envision as a "Holy Grail" of genome-based medicine because it represents a significant level of affordability.
Image
Figure 4: Cost per genome sequenced over time.

While the aims of germline interventions are not far from what the pure etymology of the word "eugenics" would suggest, well-known proponents of GCT, including Lee Silver, James Watson, Gregory Stock, and many others, are careful to divorce these new capabilities from anything which has historically fallen under that moniker, and such a distinction is justified. The goals and methods of these and others in the transhumanist camp are unlike those of past movements, which were based on inadequate science, employed crude methods such as sterilization, imposed value judgments externally, and resulted in ethical tragedies of the highest degree. The goals and methods employed here justify an entirely separate assessment, as the emphasis is on individual empowerment: the choice to electively participate in reproductive decisions that offer greater control, while having the opportunity to be advised by the best science we have, including informed consent regarding areas of uncertainty. In fact, cogent arguments regarding procreative beneficence, the moral obligation of parents to use the options of reprogenetics to improve the health of their children, such as those advanced by Julian Savulescu, have put forth a serious challenge to those who merely ponder whether GCT are morally permissible. The question can become whether we must recognize a new moral imperative.

With this individual empowerment of choice, it is not unreasonable to assume that in certain areas choice will lead toward uniformity, both in terms of what is preferred and what is intentionally avoided. We may well see the basic phenomenon of the Tragedy of the Commons move from its more traditional home, the shared access to aquatic biodiversity, to the collective gene pool of our species and the allelic variety present in it. In fact, some might advocate for what I could only term an organized allelicide, which could theoretically remove a strictly Mendelian disease from the human gene pool in a single reproductive generation. Yet while diversity drops in some areas, it can be greatly enhanced in others.

Even an optimist like Gregory Stock must acknowledge that such a transition will not be glorious in all aspects, saying, “Humanity is moving out of its childhood and into a gawky, stumbling adolescence in which it must learn not only to acknowledge its immense new powers, but to figure out how to use them wisely.”[12] Careless use of these technologies has of course led many to predict a torrent of problems arising from indiscretion: not least among these are a gender imbalance in society, discrimination and arrogance borne of genetic elitism, fundamental changes in parent-child relationships, and various unintended biological tragedies from meddling with the unknown. This essay does not attempt to assert that such problems will not arise, only that such actions will be taken. Emerging technologies often bring unexpected tragedies, as when the initial success of gene therapy for X-SCID was tainted by cases of treatment-induced leukemia. But the occasional tragedy will not stop it; we will learn from our mistakes and continue to “upgrade” our genomes.

Our biodiversity has never been static, and there is no evolutionary reason to imagine that the human gene pool represents a state of culmination upon which no improvement can be made. How much of the opposition to germline engineering is simply status-quo bias, grounded only in intuitive perceptions of what is “normal” and “natural”? It is not unreasonable to expect that a portion of that bias will dissipate with the proliferation of those procedures which are already allowed. Advances in biomedical technology will most likely expand the list of what is deemed acceptable. Increased availability and usage will most likely change perceptions of what is “normal”, and somewhere down the line, new decisions about germline engineering will be made by generations composed of individuals who were, more often than not, conceived in a laboratory. In “Daedalus; or, Science and the Future”, J.B.S. Haldane wrote of advances in biology being initially regarded as indecent and unnatural, observing that, “The biological invention then tends to begin as a perversion and end as a ritual supported by unquestioned beliefs and prejudices.”[13] This may well prove prophetic even for human reproduction, which may routinely take place in the laboratory for greater efficiency and safety in the future. But not to worry, sexual intercourse will not disappear; it will remain behind to serve other… vestigial purposes. Although the graph in Figure 3 shows more support for PGD among men than women, it might well be career-oriented women, wishing to start a family at a later stage in life, who push reproduction into the laboratory with greater frequency.
Even now, opponents of this technology, or those calling for stricter controls, recognize the extreme possibilities. Fluorescent skin, which has been engineered into a host of transgenic animals, including the germ line of primates [14], has become a talking point for highlighting absurd and seemingly heretical notions of how we might modify ourselves.

Figure 5: A transgenic marmoset (Callithrix jacchus) expressing Green Fluorescent Protein

It would not surprise me in the least if a small group of fervent advocates for unrestricted procreative liberties stands up to say, “Why can’t I make my skin glow if I so choose?” But the envelope will not be pushed by tinkering for frivolous or ostentatious displays of genetic liberty; instead it will be pushed the furthest by manipulations where the desired outcome is of obvious benefit. Where it is allowed and available, the choices will be made by individuals, but the net outcome can be viewed at the population level.
And now to enter the realm of the truly speculative, pushing the boundaries on the assumption that the holistic approaches of systems biology, genomics, and proteomics will become informative enough to offer guidance in dealing with pleiotropy and polygenic traits, and clearer, more comfortable answers in the “nature-nurture” debate. It will be a difficult road, but given the pace of biomedical technology, is it inconceivable that we may one day confidently make informed decisions about traits as complex as intelligence? We could perhaps even add a part to our brains, already named an “exocortex” by some futurists, to process and store vast amounts of information. Might we adapt our biology to be more easily integrated with technology? Might we give ourselves the ability to manufacture bacteriophages to augment our immune response, or produce the same type of endogenous antibiotics found in crocodilian blood? Might we extend our lifespan, stave off the effects of aging, keep our minds sharper and our bodies more capable? Is the limb regeneration seen in the axolotl an ability we could give to ourselves? We could conceivably work to optimize our biochemistry, give ourselves sensory capacities that we currently lack, improve the strength of our muscular and skeletal systems, and make our skin more durable. Will we rid ourselves of the genetic components of asthma, diabetes, heart disease, and cancer? Perhaps we could even go so far as to encrypt our genome so that our cellular machinery cannot be hijacked by viruses. Some day we might use computers to move beyond the “copy-paste” version of genetic engineering, design a functional protein de novo, one with no natural homologue, and then string together the nucleotides that will create it. Perhaps our cells could be engineered to contain microscopic biochemical “doctors” to diagnose, signal, and even treat pathologies as they develop.
If our species is to colonize the galaxy, can we alter ourselves to allow efficient cryopreservation to withstand the journey? With increased years of healthy longevity and the elimination of many diseases, we could be productive for a much greater portion of our lives. With the economic costs of so many treatments eliminated, how much more of our resources, both human and material, might be devoted to other ventures? With the limits of human capacity no longer set purely by natural inheritance, but supplemented by our own choice, the very notion of what it means to be human may have to be radically and continually revised to account for the fundamental changes that occur as our species crosses the threshold into participatory evolution.

WORKS CITED:

[1] Silver, Lee M. Challenging Nature: The Clash Between Biotechnology and Spirituality. Ecco/Harper Collins, 2006. Print

[2] Juan Enriquez TED Talk http://www.ted.com/talks/lang/eng/juan_ ... ience.html

[3] Thrasher et al. “Gene therapy: X-SCID transgene leukaemogenicity.” Nature, 27 Apr 2006.

[4] Green, Ronald M. Babies by Design: The Ethics of Genetic Choice. New Haven and London. Yale University Press, 2007. Print.

[5] Sato et al. “In vitro production of functional sperm in cultured neonatal mouse testes.” Nature, 24 Mar 2011.

[6] Image from Institute for Reproductive Health, Cincinnati OH.

[7] Human Fertilisation and Embryology Authority Website: http://www.hfea.gov.uk/756.html

[8] Harrington et al. “Formation of de novo centromeres and construction of first-generation human artificial microchromosomes.” Nature Genetics 15, 345-355 (1997).

[9] Hudson, Kathy L. Preimplantation genetic diagnosis: public policy and public attitudes. The Genetics and Public Policy Center, Berman Bioethics Institute, The Johns Hopkins University, Washington, DC

[10] World Anti-Doping Agency Website:
http://www.wada-ama.org/en/About-WADA/H ... ti-Doping/

[11] Dawkins, Richard. “Son of Moore’s Law.” In The Next Fifty Years, ed. John Brockman. New York: Vintage Books, 2002. Print.

[12] Stock, Gregory. Redesigning Humans: Choosing our Genes, Changing our Future. Boston and New York: Houghton Mifflin Company, 2003. Print.

[13] Haldane, J.B.S. Daedalus; or, Science and the Future. A paper read to the Heretics, Cambridge, on February 4th, 1923. New York: E. P. Dutton & Company.
Available at http://www.3nw.com/energy/h2/daedalus.pdf

[14] Stein, Rob. “Monkeys first to inherit genetic modifications.” Washington Post 28 May 2009.
" [This space is for rent to "which ever version of POOF creates the largest cloud of obnoxious smoke following the POOF."[1] "- God
Works Cited:
[1] - theropod. Parsimony of the Miraculous. RatSkep Peanut Gallery Press, 2011.