A new RatSkep Science Writing Competition!



#1  Postby Mazille » Feb 15, 2012 7:28 pm

It's that time again, people! Gather what neurons you can muster and type your fingers bloody!

This month's topic is: Discuss a scientific hypothesis that failed

How were those hypotheses shown to be incorrect? Who worked on them, and why? How did people, the scientific community, and society as a whole react? Explain it to us, and have a crack at winning this competition!

The competition starts as of now. Participants can post their entries in this "Submissions" thread, while everyone else is cordially invited to comment on the relative merit of the entries in the "Discussion" thread here.

Submissions can be entered until Sunday, the 1st of April, which is when the voting will start. Voting will end on Sunday, the 15th of April.

Articles have to meet all the criteria laid down in the rules here:

The Monthly RatSkep Science Writing Award

We have a lot of professional scientists and very well-versed laymen on the forum and so we decided to make use of those formidable intellectual resources. We challenge you to write an article about a specific topic - which will be revealed later on - and enter it into a competition for "The RatSkep Science Writing Award"!

Now, to give you an idea of how this competition is going to work:
Every month we will give you the opportunity to take part in this competition. The goal is to write the best article covering a scientific topic of your choice - although with certain constraints. For each round of the competition we will set a general topic (e.g. "Our Solar System", or "The Subatomic World"), from which you can choose any field of interest to write about. After we have announced the general topic of a new round, competitors will have three weeks' time to write their articles and enter them in the competition (see below for formal criteria), and after those three weeks users will have another week to vote for the best scientific article.

How is entering the competition and voting going to work?

1. We will have one thread where people can post their articles and enter them in the competition. This thread will be moved from public view after each round of the competition and a new one will be opened for the next round. Only competitors may post there, and the articles will have to be approved by the staff, just like in the Formal Discussion forum.
2. We will have another thread, where users can argue about the merits of each article that entered the competition and where they will be able to vote for their favourite article in the last week of the round via a poll. Every member will have the ability to cast three votes.
3. There will be a third thread, where we collect all the articles that ever entered the competition. This way you will have access to a whole thread full of scientific goodness.
4. After the general topic of a round has been announced, participants will be given three weeks to write and submit their works. Within this time the commentary-thread will be open for relevant discussions, but voting will still be disabled. After these three weeks users will be given one week to submit their votes. At the end of this period the winners will be announced and the submitted articles will enter the "Hall of Fame"-thread. Shortly after that a new round of the competition with an entirely different general topic will start.
5. Each participant may only enter one article per round into the competition.

What are the formal criteria for the articles?

1. Every article you enter into the competition has to be your own original work. Here on RatSkep we do not look kindly on plagiarism. It is, however, allowed to enter the competition with an article you have already posted earlier here on RatSkep, provided it meets the rest of the following criteria.
2. Articles have to be at least 500 words long and mustn't exceed a limit of 3000 words. The maximum number of pictures and graphics is one picture per 500 words of text.
3. Articles have to include an index of their sources, if you used any, and direct quotes have to be credited to the original author. We don't want to impose a specific quotation system on the competitors, but keep it clear, easily readable and stick to one system per article.
4. Articles must cover either the general topic, or an appropriate sub-topic to ensure comparability of your efforts.
5. Of course, the articles will have to stay within the limits of the FUA, as usual.

Any article that does not meet all of the above criteria will be disqualified and cannot be entered in the competition.

Why should I enter the competition?

First of all, this is the perfect opportunity to show off your superior knowledge. Furthermore, the winners will get some shiny stuff:

1. All submitted articles will be featured in our Hall of Fame thread.
2. The best entries will also be featured in a prominent spot on our shiny new front-page as soon as LIFE manages to get it up and running.
3. And last, but not least, we might have a few surprises in store for you...

Good luck and have fun! May the best articles win. :cheers: We are looking forward to your contributions.


#2  Postby Calilasseia » Feb 16, 2012 6:32 am

Gravity As A Resultant Of Rotation

Gravity is a phenomenon that has exercised many scientific minds over the past 300 years, and even today, a truly unified view of gravity is proving, for a range of reasons, to be somewhat elusive. It is not surprising, therefore, that one or two off-the-wall ideas have arisen, and it is one of those ideas, along with its concomitant failure, that I intend to present here.

This idea was the brainchild of one Patrick Maynard Stuart Blackett, a physicist of considerable renown, whose work encompassed such diverse fields as the physics of cloud chambers, operational research, palaeomagnetism, and the behaviour of cosmic rays. A protégé of none other than Ernest Rutherford, Blackett was one of the first people to provide evidence of transmutation of atoms under conditions of intense neutron flux, and thanks to his work with Giuseppe Occhialini, he became one of the first physicists to understand the nature of antimatter.

However, whilst he enjoyed a number of successes, one of his ideas proved to be a failure. That failure led him to develop high-quality, high-sensitivity magnetometers for measuring extremely small magnetic fields, which in turn allowed him, later in his career, to produce notable results in the field of geophysics. In that sense, the idea in question was not a complete failure, but it failed with respect to its ambitious goal, and this failed hypothesis is the topic of my article.

I refer everyone to the following paper:

The Magnetic Field Of Massive Rotating Bodies by P. M. S. Blackett, Nature, 159: 658-666 (17th May 1947)

In this paper, Blackett expounded upon the existence of a relationship between magnetic moment (represented in the following equations by P) and angular momentum (represented in the following equations by U). Measurements taken for a small number of planetary and stellar bodies led to the possibility that a general law existed, coupling P and U for all rotating bodies. The values were found to be a close fit to the following equation:

P = (βU/2c)G^½ (in cgs units)

where β is a constant whose value is close to 1, c is the speed of light, and G is Newton's constant of universal gravitation. This equation was of interest because it admitted of an interesting rearrangement, of the form:

G = (2Pc/βU)²

and thus presented the interesting possibility that gravity and magnetism were nothing more than consequences of angular momentum. If true, this would revolutionise physics, and consequently Blackett set out to discover whether the original relationship was merely a coincidence, or a pointer to a general and universal physical phenomenon.
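As a rough sanity check, the relation can be evaluated numerically for the Earth. The figures below (moment of inertia, rotation rate, observed dipole moment) are approximate modern values I have supplied for illustration, not numbers taken from Blackett's paper:

```python
# A rough numerical check of Blackett's proposed relation for the Earth.
# All input values are approximate modern cgs figures supplied for
# illustration, not data from Blackett's 1947 paper.
import math

G = 6.674e-8          # Newton's constant, cm^3 g^-1 s^-2
c = 2.998e10          # speed of light, cm/s
beta = 1.0            # Blackett's constant, assumed close to 1

I_earth = 8.0e44      # Earth's moment of inertia, g cm^2 (approx.)
omega = 7.29e-5       # Earth's angular velocity, rad/s
U = I_earth * omega   # angular momentum, g cm^2 / s

# Blackett's relation: P = (beta * U / 2c) * G^(1/2)
P_pred = beta * U * math.sqrt(G) / (2 * c)

P_obs = 8.0e25        # Earth's observed dipole moment, G cm^3 (approx.)

print(f"predicted P ~ {P_pred:.2e} G cm^3")
print(f"observed  P ~ {P_obs:.2e} G cm^3")
print(f"ratio: {P_pred / P_obs:.1f}")
```

With these rough inputs the predicted and observed moments agree to within an order of magnitude, which is exactly the kind of suggestive near-coincidence that made the hypothesis look worth testing.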

The problem, of course, is that gravitational fields are extremely weak. Likewise, any magnetic field emanating from a rotating body, according to this idea, would also be extremely weak, unless that rotating mass was of planetary size or beyond. Since determining the emergence of an increased gravitational field around a rotating mass in the laboratory was simply beyond the scope of any empirical test, Blackett opted to search for a magnetic field instead, one emanating from a body previously possessing no intrinsic magnetic field, but which, if the above equation was indeed a universal law, would emerge once the rotating body possessed sufficient angular momentum. Blackett determined that sufficiently sensitive instruments would detect such a magnetic field, provided that the body being used in the experiment was [1] of high material density, and [2] given a large angular velocity, resulting in large angular momentum, and a magnetic field that, whilst weak, would be detectable beyond the bounds of experimental error.

Thus the stage was set for Blackett's experiment, which is covered in this paper:

A Negative Experiment Relating To Magnetism And The Earth's Rotation by P. M. S. Blackett, Philosophical Transactions of the Royal Society of London Part A, 245: 309-370 (16th December 1952) [Abstract available here]

Blackett, 1952 wrote:The discovery by Babcock of the magnetism of certain rapidly rotating stars led me to study the hypothesis, first clearly discussed by Schuster and by Wilson, that the magnetism of rotating astronomical bodies might be due to some new and general property of matter. The well-known theoretical difficulties attending such a view were matched by the difficulty of finding a quantitative explanation of even the earth's magnetic field in terms of the known laws of physics. A detailed study of the possibility of making a direct test of the Schuster-Wilson hypothesis, by measuring the very small magnetic field of the order of 10⁻⁹ G which would be produced by a rotating body of reasonable size in the laboratory, led me to conclude that the experiment would perhaps be possible but would certainly be exceedingly difficult. However, a much easier but still worth-while subsidiary experiment presented itself. This was to test whether a massive body, in fact a 10 × 10 cm gold cylinder, at rest in the laboratory and so rotating with the earth, would appear to an observer, also rotating with the earth, to produce a weak magnetic field with a magnitude of the order of 10⁻⁸ G. That such a field might exist is a plausible deduction from a particular form of the Schuster-Wilson hypothesis considered in some detail by Runcorn and by Chapman. This paper describes the design, construction and use of a magnetometer with which this 'static-body experiment' was carried out. Since few detailed studies of the design of sensitive magnetometers to measure steady fields appear to have been made since the days of the classical experiments of Rowland and of Eichenwald, I found it necessary to investigate the theory and use of such an instrument in considerable detail. The bulk of this paper, that is, Section 2 to 5, is concerned with this instrumental study. The actual static-body experiment is described in Section 6.
and it is there shown that no such field as is predicted by the modified Schuster-Wilson hypothesis is found. This result is in satisfactory agreement with the independent refutation of the hypothesis by the measurements by Runcorn and colleagues of the magnetic field of the earth underground. When the magnetometer was completed it was found to be very suitable for the measurement of the remanent magnetism of weakly magnetized specimens, in particular certain sedimentary rocks.

So, the experiment was considered a failure, with respect to the matter of establishing a causal link between gravitation, magnetism and angular momentum. Blackett, having determined the result to be a failure, then moved on to the matter of using his new, extremely sensitive magnetometers to perform geophysical research, as the last sentence of the abstract above alludes to. Blackett decided that further pursuit of this idea was pointless, and considered the matter settled by the above 1952 paper.

Why is this failed experiment of such interest? First, it became a focal point for the later writings of the science fiction author James Blish, whose monumental Cities in Flight series features an anti-gravity drive based, purportedly, upon the rearrangement of the Blackett equation, and the realisation thereof in machinery. In Blish's fictional universe, Blackett's work is subject to re-evaluation, courtesy of a truly bizarre experiment, involving the erection of a solid platform, situated, of all places, on what is very loosely termed the 'surface' of Jupiter, an engineering undertaking involving the construction of an artefact approximately the size of metropolitan Los Angeles, amidst the turmoil of Jupiter's extremely hostile atmosphere!

Second, the hypothesis in question has become, in more recent years, another of those 'fringe' theories, resurrected from its grave and wheeled about on castors, by various individuals, no doubt salivating at the thought of cooking up an anti-gravity drive in the living room. That they stand even less chance of achieving this aim, than I do of being the next occupant of Scarlett Johansson's bed, apparently deters them not. Doubtless both Blackett and Blish would, if they were still alive, view with a certain puzzled amusement the spectacle of various individuals beavering away to try and make the spindizzy a reality. It would be wonderful if it were achievable, but Blackett thought better of the matter in 1952, and chose to pay attention when reality told him that an idea was not worth pursuing. An object lesson that, sadly, will almost certainly not be learned by the very people who need to learn it.


[1] The Magnetic Field Of Massive Rotating Bodies by P. M. S. Blackett, Nature, 159: 658-666 (17th May 1947)

[2] A Negative Experiment Relating To Magnetism And The Earth's Rotation by P. M. S. Blackett, Philosophical Transactions of the Royal Society of London Part A, 245: 309-370 (16th December 1952)


#3  Postby willhud9 » Feb 16, 2012 3:18 pm

Inheritance of Acquired Characteristics


The inheritance of acquired characteristics, or Lamarckism, was a leading hypothesis explaining how physical traits are passed from individuals to their offspring. The idea had been popular since Ancient Greece, was held to be true by Aristotle and Hippocrates, and was believed to be biological fact up until Charles Darwin did his research and developed his hypothesis of natural selection. In 1809, the biologist Jean-Baptiste Pierre Antoine de Monet, Chevalier de la Marck, known today as Lamarck, developed the hypothesis which clarified and defined Lamarckism. [1]

Lamarck, having studied invertebrates and botany for most of his life, was intrigued by the vast diversity of life around him. In fact, Lamarck was a forerunner of modern evolutionary theory and, although wrong in his hypothesis, would be a major influence on Darwin and his research into evolutionary mechanisms. [2]

What the Hypothesis Stated

Lamarck’s hypothesis, set out in his book Zoological Philosophy, held several key points or “laws” explaining how physical traits were passed from parent to offspring.

“[First Law] In every animal which has not passed the limit of its development, a more frequent and continuous use of any organ gradually strengthens, develops and enlarges that organ, and gives it a power proportional to the length of time it has been so used; while the permanent disuse of any organ imperceptibly weakens and deteriorates it, and progressively diminishes its functional capacity, until it finally disappears.
[Second Law] All the acquisitions or losses wrought by nature on individuals, through the influence of the environment in which their race has long been placed, and hence through the influence of the predominant use or permanent disuse of any organ; all these are preserved by reproduction to the new individuals which arise, provided that the acquired modifications are common to both sexes, or at least to the individuals which produce the young.”

These laws would be simplified in textbooks as 1) use and disuse and 2) the inheritance of acquired traits.

Lamarck concluded that nature was the designer of animals, plants and their diversity. He also concluded, in accordance with his laws, that it was the environment which caused species to inherit traits. Lamarck writes in his Zoological Philosophy: “Nature (or her Author) in creating animals, foresaw all the possible kinds of environment in which they would have to live, and endowed each species with a fixed organization and with a definite and invariable shape, which compel each species to live in the places and climates where we actually find them, and there to maintain the habits which we know in them.” In simpler words, Lamarck believed that all creatures’ traits were best suited to a specific environment from “creation”, and that when changes to the environment occurred, the individual animals in the species would adapt physical traits to allow them to survive.

Lamarck also drew a further conclusion of his own: that there was an increasing complexity in nature with regard to life. “Nature has produced all the species of animals in succession, beginning with the most imperfect or simplest, and ending her work with the most perfect, so as to create a gradually increasing complexity in their organization; these animals have spread at large throughout all the habitable regions of the globe, and every species has derived from its environment the habits that we find in it and the structural modifications which observation shows us” [3]

An example of Lamarckism would be a giraffe stretching its neck, thereby strengthening the muscles around it: using the organ increases its function. This giraffe would pass its acquired, stronger muscles on to its offspring, and that giraffe would stretch its neck in turn, until you had a modern giraffe. This example follows both of Lamarck’s laws, use and disuse and the inheritance of acquired traits. However, in modern biological science we know this example is wrong.

Falsification of the Hypothesis

Unfortunately for Lamarck, in the decades after his death two rising scientists shattered the hypothesis of Lamarckism. These scientists were the Austrian friar Gregor Mendel and the British naturalist Charles Darwin.

• Gregor Mendel

Gregor Mendel is well known for his experiments with pea plants, among other species, and for recording his observations of inheritable traits. Mendel cross-bred a white-flowered pea plant with a purple-flowered pea plant, expecting the outcome to conform to the then-accepted hypothesis of blended inheritance; Mendel expected a light purple flower. Instead, Mendel got purple-flowered offspring. Mendel discovered that, on a consistent basis, a white-flowered plant bred with a white-flowered plant would yield a white-flowered plant, a purple-flowered plant bred with a purple-flowered plant would yield a purple-flowered plant, and a purple-flowered plant bred with a white-flowered plant would yield a purple-flowered plant. To continue his experimentation, Mendel bred the offspring purple-flowered plants and was again astonished. Instead of getting consistently purple flowers, the cross yielded a 3:1 ratio of purple-flowered to white-flowered plants; out of every four plants, on average, one would be white. These experiments, not only on the flower but on the pea shape, pea colour and other traits of the plant, would lead Mendel to create his inheritance hypothesis.
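That 3:1 ratio falls straight out of random segregation of alleles, which can be sketched in a few lines of code. The allele names and sample size here are illustrative choices of mine, not Mendel's notation:

```python
# A minimal simulation of Mendel's monohybrid cross of two heterozygous
# (Pp x Pp) plants, illustrating where the 3:1 purple-to-white ratio
# comes from.  Allele names and sample size are illustrative.
import random

random.seed(0)

def cross(parent1, parent2):
    """Each parent passes one randomly chosen allele to the offspring."""
    return random.choice(parent1) + random.choice(parent2)

n = 10_000
offspring = [cross("Pp", "Pp") for _ in range(n)]

# 'P' (purple) is dominant: any genotype containing it shows purple.
purple = sum("P" in genotype for genotype in offspring)
white = n - purple

print(f"purple: {purple}, white: {white}, ratio ~ {purple / white:.2f}:1")
```

Over many offspring the genotypes PP, Pp, pP and pp appear equally often, and only pp shows white, hence roughly three purple plants for every white one.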

This hypothesis had three main points: first, inheritance of traits is governed by “units” (we call them genes, but Mendel did not know they existed); second, one unit for each trait is inherited from each parent; and third, a trait may not show in an individual but can show up in a later generation. Today Mendel’s experiments can be summarized by two principles (or laws): the principle of segregation and the principle of independent assortment. [4]

Wikipedia says that the law of segregation states that, “Every individual possesses a pair of alleles (assuming diploidy) for any particular trait and that each parent passes a randomly selected copy (allele) of only one of these to its offspring. The offspring then receives its own pair of alleles for that trait. Whichever of the two alleles in the offspring is dominant determines how the offspring expresses that trait (e.g. the color of a plant, the color of an animal's fur, the color of a person's eyes).”

Relating this back to Lamarckism: Lamarck argued that it was the environment which caused individuals of a species to adapt. Mendel discovered this was not the case; rather, the units or genes which contain a trait are responsible for that trait showing. Using the giraffe example, Mendel would have argued there was a unit containing the trait for the long neck, passed from parent to offspring until the long-neck trait predominated. Remember, Mendel did not have knowledge of DNA, and unfortunately had no communication with his contemporary, Charles Darwin.

Wikipedia also explains that the law of independent assortment states that, “Separate genes for separate traits are passed independently of one another from parents to offspring. That is, the biological selection of a particular gene in the gene pair for one trait to be passed to the offspring has nothing to do with the selection of the gene for any other trait.” This is not always the case, because modern genetics shows that some genes are linked, but for the purposes of inheritance this is generally the rule.

So how does this relate to Lamarckism? Lamarck’s second law held that the acquired traits of the parent would pass down to the offspring. Again, Mendel’s experiments show this is not the case. If Mendel had cut off the petals of the parent flower, the offspring would not have been born without petals. This flaw in Lamarckism is shown in the classic example of the Doberman Pinscher. A Doberman traditionally would have its ears cropped and its tail docked, but its offspring would have normal, uncropped ears and a normal, undocked tail, and would grow into adults with those traits.

Not only does this show that Lamarck’s inheritability law was wrong, it shows that there has to be something beyond outward physical traits that governs inheritance. Mendel’s experiments and hypothesis show that there is a “unit” which contains the information for a specific, individual trait, and they also show how these traits get passed on and the ratios in which they occur across generations.

• Charles Darwin

The famous Charles Darwin is largely responsible for coining two modern evolutionary terms: the first is natural selection and the second is common descent. Both of these refute Lamarck’s hypothesis of the inheritance of acquired characteristics.

Darwin, while studying the Galapagos Islands, noticed small variations of traits within the finches on each island. Each finch seemed suited to the habitat and local environment it dwelled in. Lamarckism would hold that each finch adapted to its environment and passed its traits on to its offspring. For example, a finch with a sharper beak for drilling holes into bark would have obtained that sharper beak by some means, whether repeated attempts to get food or deliberate sharpening, and that acquired beak would have been passed on to its offspring, which would thereby obtain a better beak for getting food.

Darwin would disagree. After consulting the renowned ornithologist John Gould, Darwin realized that the individual species of finch were so closely related that it was as if small changes had occurred within an original species to create the diversity. He would go on to form his hypothesis of natural selection. From On the Origin of Species:

“If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being's own welfare, in the same way as so many variations have occurred useful to man. But, if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.” [6]

In short, Darwin believed that individuals born with traits which better suited them to their environment had an increased likelihood of surviving and breeding. In layman’s terms, this is survival of the fittest. Now, Darwin knew nothing about genes, although he guessed there had to be something that triggered these changes within species. Lamarck’s idea of the inheritance of physical traits was unsatisfactory to Darwin.

Furthermore, Darwin did not believe that each species was created by nature already fitted to its environment, to which it would then adapt; rather, he believed that each species shares a common descent with other species. Darwin was so bold as to suggest that all life on the planet is connected through a tree of ancestry. With modern genetics, we now know this to be true.

To summarize, Darwin proposed two alternatives to Lamarckism. The first was natural selection as the means of inheritance and diversification of species: instead of use and disuse, it was the possession of an advantageous trait which increased the chances of survival and subsequent healthy breeding. The second was that all life can be traced through a common ancestry.


Today we have advanced, clarified and combined Mendel’s and Darwin’s hypotheses, and filled the gaps with knowledge of DNA and genetics. Through stochastic mutations, which occur during DNA replication, specific proteins are changed, resulting either in a change to a trait or in nothing life-changing. If the result is a change, then the change is either beneficial or harmful to the individual; either way, selection will act on the organism. With our knowledge of various genomes and the gems of the fossil record, Darwin’s hypothesis of common descent has been shown to be fact: all animals, plants, fungi, protozoa, and bacteria share a biological relationship with each other. In short, the two premises of Lamarckism are refuted by today’s understanding of how species acquire traits.
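The contrast with Lamarckism can be made concrete with a toy simulation of selection, in which variation is inborn and heritable and nothing acquired during an individual's lifetime is passed on. The model below (trait values, survival rule, mutation size) is entirely my own illustrative sketch, not anything from Darwin or the sources above:

```python
# A toy model of Darwinian selection: variation is inborn and heritable,
# survival depends on the trait, and the population mean shifts over
# generations without any individual "acquiring" a new trait in life.
# All parameters here are illustrative choices.
import random

random.seed(1)

def next_generation(population):
    # Selection: individuals with larger trait values are more likely
    # to survive and reproduce (here, the top half of the population).
    survivors = sorted(population)[len(population) // 2:]
    # Inheritance: offspring get the parent's inborn value plus a small
    # random mutation; nothing acquired during life is passed on.
    return [parent + random.gauss(0, 0.1) for parent in survivors
            for _ in range(2)]

population = [random.gauss(10.0, 1.0) for _ in range(100)]
start_mean = sum(population) / len(population)

for _ in range(20):
    population = next_generation(population)

end_mean = sum(population) / len(population)
print(f"mean trait: {start_mean:.2f} -> {end_mean:.2f}")
```

Run this and the mean trait value climbs generation by generation, purely from inborn variation plus differential survival; no Lamarckian use-and-disuse step appears anywhere in the model.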

Why is this important?

There are several main reasons why the refutation of Lamarck’s claims is important:

1) Modern Biology would not be as conclusive.

2) Modern germ theory, and subsequently medicine, would not be as advanced.

3) Many intellectually dishonest apologists still cling to a Lamarckism-esque philosophy when debating modern evolution.

Modern biology includes a vast number of individual subjects, but because of our knowledge of evolutionary theory, scientists know that everything in biology is connected at the molecular level, making it a very consistent science.

Without knowledge of how traits are inherited, germs, bacteria, and viruses would arguably be more potent, since pharmaceutical science would have a weaker understanding of inheritance with which to counter pathogens.

Creationists often claim that evolution occurred in small stages of adaptation (microevolution) but that God (or nature) created the species as they were, and that they merely adapted. Although this is not exactly what Lamarck believed, Creationists tend to think this is what the science supports.

Although Lamarckism is an insufficient explanation for the inheritance of traits in eukaryotic life, scientists have since discovered that some organisms can acquire heritable changes in a fashion reminiscent of what Lamarck proposed; the field of epigenetics studies such effects. [7]


1. http://en.wikipedia.org/wiki/Inheritanc ... cteristics

2. http://www.ucmp.berkeley.edu/history/lamarck.html

3. Zoological Philosophy: An Exposition With Regard to the Natural History of Animals by Jean-Baptiste Lamarck. p. 113, 126 (1984).

4. http://www.bioinformatics.nl/webportal/ ... linfo.html


6. On the Origin of Species by Charles Darwin. p. 80-81. (2011).

7. http://www.sciencedaily.com/releases/20 ... 102713.htm


#4  Postby campermon » Feb 26, 2012 4:11 pm

Scientific hypotheses that failed, and keep failing…

Humans, considered by themselves to be the smartest of apes, are natural scientists. To be more precise, all of us (whether we like it or not) from a very early age observe the physical phenomena around us and form hypotheses about how the world works.

As a species we have evolved at the bottom of a gravitational well, immersed in fluid on a rotating planet. While this arrangement can be considered fortuitous for the evolution of life (including us), it is an unfortunate environment for the young mind that is trying to do science.

I’d like to explore the idea that many overturned hypotheses appear to result in very little reaction from society as a whole and that most people will muddle on holding to their own falsified hypotheses of how the world works.

Ask a kid, or an adult….

Why can we see the moon at night?
Where do trees get the material they are made of?
How heavy is air?

If you ask these questions of enough people, you’re bound to get answers such as: “The moon makes its own light”, “Trees are made from material in the soil” and “Air doesn’t weigh anything.”

All of these answers are the product of the (falsified) scientific hypotheses we form naturally.

In our everyday experience, the only objects that appear to light up against a field of darkness, where there is no apparent external source of light, are themselves luminous. Hence, the moon ‘makes’ its own light.

Trees are massive things made out of solid stuff. They grow out of soil that is solid stuff therefore they get their stuff from the soil.

The final answer, that ‘air has no weight’, is no surprise considering that we have evolved in a dense atmosphere, our bodies being big bags of pressurized fluids which counter the external pressure. Thus we don’t experience the weight of the air on us.

It is the discovery that air does indeed have weight that I will discuss.


Imagine drinking your favourite beverage with a straw. As you ‘suck’, air is removed from your mouth and the liquid is drawn up the straw. So, how does this happen? Well, if you said something like ‘a partial vacuum is being created in the mouth and the liquid is being pulled (‘sucked’) in’, then you would be in agreement with Aristotle.

This would be an example of ‘horror vacui’ in action. In brief, this was a theory put forward by Aristotle which proposed that:

“…nature abhors a vacuum, and therefore empty space would always be trying to suck in gas or liquids to avoid being empty.” [1]

This was the prevailing view amongst the learned up until the 17th century, when experimenters found that this hypothesis failed to explain their observations and showed ‘horror vacui’ to be wrong.

The folly of spending time ‘in the weighing of ayre’….

As late as 1664, the King of England berated his scientists for wasting their time in the ‘weighing of ayre’:

“…the King would not lay, but cried him down with words only. Gresham College he mightily laughed at, for spending time only in weighing of ayre, and doing nothing else since they sat.” [2]

By the time that the King said this, and Pepys wrote it down in his diary, it had been established through experiment that air does have weight.

The discovery that air does indeed have weight can’t be put down to the work of just one individual, so let’s begin with a brief look at the work of Isaac Beeckman (1588–1637) [3].

Beeckman, considered by his contemporaries to be one of the finest minds of his time, worked out in 1618:

“…the law of uniformly accelerated movement of bodies falling in vacuo by combining his law of inertia with the hypothesis that the earth discontinuously attracts falling bodies by tiny impulses. On this basis he found the correct relation between time and space traversed: the distances are as the squares of the times.” [4]

Beeckman was also a practical investigator. It was through his investigations of fluids, air, volume and temperature that he began to doubt the Aristotelian model:

“In 1626 he determined the relation between pressure and volume in a measured quantity of air, and discovered that pressure increases in a degree slightly greater than the diminution of the volume. Beeckman attributed the ascent of water in a pump tube not to the “horror of a vacuum,” nor yet to the “weight” of the atmosphere, but rather to the pressure of the air.” [4]

This is a key departure from the accepted view. Beeckman had discovered that, in a suction pump, it is the pressure of the air which pushes the water up the tube, rather than the vacuum pulling it. Beeckman didn’t go so far as to posit that it was the weight of the air which caused this pressure; that was to come later.

Suction pumps suck….

Around Beeckman’s time, it was a well-known fact that a suction pump could only raise water to a height of “18 Florentine yards” [5] (about 9–10 metres). This is a rather inconvenient fact if you wish to pump water out of a mine, or to drink from a very long straw. This limitation spurred scientists and engineers to develop other kinds of pumps (but I shall not go into that here). Scientists struggled to fit this experimental finding with Aristotle’s model; even Galileo got it wrong. [5]

Enter Evangelista Torricelli (1608–1647) [6], who is credited with inventing the mercury barometer whilst investigating vacuums. He described the device as follows:

“We have made many glass vessels ... with tubes two cubits long. These were filled with mercury, the open end was closed with the finger, and the tubes were then inverted in a vessel where there was mercury.” [7]


Now, because mercury is a much denser liquid than water, it could easily be observed that the mercury reached a maximum height in the column of about 760 mm [8].
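The 760 mm figure, and the 9–10 metre suction-pump limit mentioned earlier, both drop out of the same relation: the column rises until its weight balances the atmospheric pressure, i.e. h = P/(ρg). A quick sketch in Python (the pressure and density values are standard textbook figures, not taken from Torricelli):

```python
# Height of a liquid column balanced by atmospheric pressure: h = P / (rho * g)
P = 101_325           # standard atmospheric pressure, Pa
g = 9.81              # gravitational acceleration, m/s^2
rho_mercury = 13_595  # density of mercury, kg/m^3
rho_water = 1_000     # density of water, kg/m^3

h_mercury = P / (rho_mercury * g)
h_water = P / (rho_water * g)

print(f"mercury column: {h_mercury:.2f} m")  # ~0.76 m, Torricelli's 760 mm
print(f"water column:   {h_water:.1f} m")    # ~10.3 m, the suction-pump limit
```

The same formula explains why a suction pump fails at about ten metres: at that height the full weight of the atmosphere is already balanced, and no vacuum can pull harder.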

From this observation, Torricelli made the claim that:

“…the force which keeps the mercury from falling is external and that the force comes from outside the tube. On the surface of the mercury which is in the bowl rests the weight of a column of fifty miles of air.” [7]

He then arrived at the correct conclusion that air has weight:

“Is it a surprise that into the vessel, in which the mercury has no inclination and no repugnance, not even the slightest, to being there, it should enter and should rise in a column high enough to make equilibrium with the weight of the external air which forces it up?” [7]

Taking this result, and also recognizing that the density and temperature of air are inversely related, Torricelli went on to give the first scientific explanation of wind:

“... winds are produced by differences of air temperature, and hence density, between two regions of the earth” [7]

What we know now…

We live at the bottom of an ‘ocean of elementary air’ [8].

The weight of that air exerts a pressure of approximately 100,000 Pa (pascals) [9]. To put that into context, 1 Pa is 1 newton of force per square metre. A regular 1 kg bag of sugar has a weight of about 10 N, so the weight of the air on a 1 m² surface is equivalent to the weight of 10,000 bags of sugar. That’s a lot.
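The sugar-bag arithmetic, made explicit (using the rounded values from the paragraph above):

```python
pressure = 100_000  # atmospheric pressure, Pa (1 Pa = 1 N/m^2)
area = 1.0          # surface area, m^2
bag_weight = 10     # weight of a 1 kg bag of sugar, N (taking g ~ 10 m/s^2)

force = pressure * area    # total force of the air on the surface, N
bags = force / bag_weight  # equivalent number of sugar bags
print(f"{bags:.0f} bags of sugar")  # 10000 bags
```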

And yet, despite this victory for science, the misconception that ‘air has no weight’ will always persist. This is simply a consequence of our evolving at the bottom of a fluid-filled gravitational well.

If you encounter any kids, or adults, who are under the misapprehension that ‘air has no weight’, then I strongly urge you to demonstrate the converse. It is easily done: weigh a ball when deflated, pump it full of air, and weigh it again – the inflated ball is measurably heavier.

[1] http://en.wikipedia.org/wiki/Horror_vacui_%28physics%29
[2] http://www.pepysdiary.com/archive/1664/02/01/
[3] http://en.wikipedia.org/wiki/Isaac_Beeckman
[4] http://www.encyclopedia.com/topic/Isaac_Beeckman.aspx
[5] http://en.wikipedia.org/wiki/Suction_pu ... acuum_pump
[6] http://en.wikipedia.org/wiki/Evangelist ... to_physics
[7] http://www.gap-system.org/~history/Biog ... celli.html
[8] http://en.wikipedia.org/wiki/Barometer# ... barometers
[9] http://en.wikipedia.org/wiki/Atmospheric_pressure
Scarlett and Ironclad wrote:Campermon,...a middle aged, middle class, Guardian reading, dad of four, knackered hippy, woolly jumper wearing wino and science teacher.

#5  Postby Mr.Samsa » Apr 11, 2012 12:04 am


Over the course of scientific history, countless ideas, hypotheses, and theories have been both born and discredited. In popular culture, the perception of this phenomenon is largely an historical one, where the narrative of the demise of some great theory is told decades or centuries after the shift occurred. The advantage of this approach is that all the facts are known; the beginning, the end, the heroes, the villains, and so on, are all conveniently laid out in one simple, coherent story. The disadvantage, however, is that it can give the mistaken impression that scientific progress is neat and tidy, and that consensus is easily achieved simply by presenting the evidence. In order to show what happens ‘behind the scenes’ when an hypothesis is deemed to have failed, this essay will focus on an idea within choice theory that has not yet been discarded, but against which the evidence seems to be stacking up. As such, the facts here will be messy, and the conclusion is not necessarily inevitable; a story without a clear ending, but hopefully with a distinguishable underlying moral.

Numerous papers have investigated choice behaviour over the years, with many of them focusing on the generalised matching law (Baum, 1974). The generalised matching law (GML) was an extension of Herrnstein’s (1970) strict matching, and it suggests that organisms allocate the ratio of their responses as a function of the reinforcer ratio – with the addition of the variables ‘sensitivity to reinforcement’ and ‘bias’ to account for undermatching1. The GML has been successful at predicting choice behaviour in a number of settings, but recently some inconsistencies have been found: its inability to account for concurrent VI EXT2 data; the observation that sensitivity is not independent of the overall reinforcer rate (Alsop & Elliffe, 1988); and the finding that the sensitivities to reinforcer magnitude and delay are not independent of the absolute values of these variables (Logue & Chavarro, 1987). Other studies have found that the sensitivity to reinforcer magnitude is affected by the overall reinforcer rate (Davison, 1988), that the relation between logs of behaviour ratios and logs of reinforcer magnitude ratios is not linear (Davison & Hogsden, 1984), and that the sensitivity to relative reinforcer frequency is a function of the disparity between the alternative stimuli (Miller, Saunders, & Bourland, 1980).

Given these issues, and in particular the finding by Miller, Saunders, and Bourland (1980), an alternative equation was suggested by Davison and Jenkins (1985), who argued that choice behaviour is best described by the contingency-discriminability model (CDM), which assumes that there is an increasing degree of confusion over which alternative a reinforcer came from as the stimuli become increasingly similar. The advantage of this approach is that, even though it is difficult to determine its quantitative superiority over the GML (within the usual reinforcer ratio range), it has clear conceptual advantages in that the discriminability parameter carries specific predictions which the sensitivity parameter lacks. After assessing the shortcomings of the GML, as well as examining the conceptual and quantitative (in the extreme ranges of choice) advantages of the CDM, this essay concludes that the GML should be replaced by the CDM, as it is a superior predictor of choice.


The concept of matching was first outlined by Herrnstein (1970), who observed a consistent finding suggesting that the ratio of responding was proportional to the ratio of reinforcement – this was termed the strict matching law (as shown in Equation 1, in its ratio form, where B refers to responses and R refers to reinforcers). It was given this name because it was assumed that there was a direct relationship between response ratios and reinforcement ratios; that is, it had a slope of 1 on a logarithmic scale. However, later studies discovered that organisms appeared to consistently undermatch, which results in a slope of less than 1. Baum (1974) extended the strict matching law by adding the scaling parameters ‘sensitivity to reinforcement’, which measures the change in the response ratio relative to the change in the obtained reinforcer ratio, and ‘bias’, which accounts for a constant preference for one alternative over the other that is independent of reinforcer rates. This model was named the generalised matching law (as shown in Equation 2, and in its logarithmic form in Equation 2b, where ‘a’ refers to sensitivity to reinforcement and ‘c’ represents bias).

B1/B2 = R1/R2 (Equation 1)

B1/B2 = c(R1/R2)^a (Equation 2)

log(B1/B2) = a log(R1/R2) + log c (Equation 2b)
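As a toy illustration of what Equations 1 to 2b predict, here is a minimal Python sketch (my own, not from any of the cited papers) showing strict matching and undermatching:

```python
import math

def gml_log_response_ratio(r1, r2, a=1.0, c=1.0):
    """Generalised matching law (Equation 2b):
    log(B1/B2) = a * log(R1/R2) + log(c)."""
    return a * math.log10(r1 / r2) + math.log10(c)

# Strict matching (a = 1, c = 1): a 4:1 reinforcer ratio predicts a
# 4:1 response ratio (Equation 1).
strict = 10 ** gml_log_response_ratio(4, 1)
print(round(strict, 6))  # 4.0

# Undermatching (a < 1): the predicted response ratio is less extreme
# than the reinforcer ratio.
under = 10 ** gml_log_response_ratio(4, 1, a=0.8)
print(round(under, 2))  # 3.03
```

Note that on log-log coordinates this is a straight line by construction, which is exactly the assumption the later sections call into question.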

Although a large body of research has demonstrated the GML to be an accurate predictor of choice (see Davison and McCarthy, 1988, for a review of some of the data on the matching law), some researchers have questioned the logical implications of equation 2b. The immediate implication is that choice between alternatives follows a linear function, as equation 2b cannot account for any significant deviations from linearity. Another substantial problem for the GML is that it necessarily predicts exclusive choice when organisms are responding on a concurrent VI EXT schedule; however, this generally does not happen, and the common finding is that organisms will tend to ‘sample’ the extinction alternative occasionally (for example, Davison and Hunter, 1976).

Perhaps a more serious issue for the GML is the interpretation of the sensitivity to reinforcement parameter. In the literature it is often referred to as a constant, but it seems to vary in a number of circumstances, and sometimes in ways that seem contradictory to what we expect. It assumes that only the ratios are important for predicting choice, but it has been observed that overall reinforcer magnitudes affect the sensitivity to reinforcement (Alsop & Elliffe, 1988). The problem, then, seems to be with the nature of a: as it is a hypothetical construct, it can seemingly be interpreted in so many ways that it almost becomes unfalsifiable. Davison and Jones (1995) argued that, given all these factors, it is not reasonable to continue to treat the GML as a law, and that for it to have any value the sensitivity to reinforcement parameter must be treated as a variable rather than a constant. They also suggested that it may be more appropriate to replace the GML with a model described by Davison and Jenkins (1985): the contingency-discriminability model.


The conception of the model began with Miller, Saunders, and Bourland’s (1980) examination of choice behaviour in pigeons using a switching-key procedure (Findley, 1958), where the stimuli had line orientations with 45, 15, or 0 degrees of disparity. For each stimulus disparity condition, the concurrent variable-interval schedules were varied across numerous reinforcer ratios. The results showed that responding changed as a function of stimulus disparity – the pigeons matched in the highly disparate component, with a = 1.00 for 45-deg, but as the disparity decreased their responding approached extreme undermatching, with a = .28 and .37 for 15-deg, and a = .17 for 0-deg.

To account for this observation, Davison and Jenkins (1985) proposed the CDM which included a discrimination parameter like the one used in signal detection theory (Davison & Tustin, 1978). The model thus assumes that, in a concurrent schedule, the choice between two alternatives is not simply a choice between reinforcers but there is also a task in which the organism has to decide which response produced the last reinforcer. In other words, behaviour is not controlled by the reinforcer rate, but rather it is controlled by the apparent or perceived reinforcer rate. This can either be modelled in terms of discriminability, dr, (Equation 3) or as the inverse of discriminability – proportional confusion, p, (Equation 3b).

B1/B2 = c(dr·R1 + R2)/(dr·R2 + R1) (Equation 3)

B1/B2 = c(R1 − p·R1 + p·R2)/(R2 − p·R2 + p·R1) (Equation 3b)

In equation 3, discriminability (dr) can range from 0, indicating no discriminability, to infinity, indicating perfect discriminability. On the other hand, confusion (p) ranges from 0, no confusion, to .5, which is complete confusion. The two equations are functionally equivalent, although equation 3b may be preferable because its calculations never result in more reinforcers than were actually scheduled. Using Equation 3, with the parameter a replaced by the discriminability parameter, Davison and Jenkins refitted Miller, Saunders, & Bourland’s data – the results are reproduced in Table 1 below.
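The claim that Equations 3 and 3b are functionally equivalent is easy to check numerically. In the sketch below (my own, not from the papers), I relate the two parameters as dr = (1 − p)/p, which is my algebraic inference from the two forms rather than a formula given by Davison and Jenkins:

```python
def cdm_eq3(r1, r2, dr, c=1.0):
    """CDM in discriminability form (Equation 3)."""
    return c * (dr * r1 + r2) / (dr * r2 + r1)

def cdm_eq3b(r1, r2, p, c=1.0):
    """CDM in confusion form (Equation 3b)."""
    return c * (r1 - p * r1 + p * r2) / (r2 - p * r2 + p * r1)

p = 0.2                # 20% of reinforcers misattributed
dr = (1 - p) / p       # 4.0 — inferred mapping between the two forms
for r1, r2 in [(40, 10), (10, 40), (90, 5)]:
    assert abs(cdm_eq3(r1, r2, dr) - cdm_eq3b(r1, r2, p)) < 1e-9
print("Equations 3 and 3b agree for all tested reinforcer ratios")
```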

Table 1: Least Squares Estimates of the parameters of the GML (Equation 2) and CDM (Equation 3), using the data reported by Miller, Saunders, & Bourland (1980). (Taken from Davison and Jenkins, 1985).

These estimates suggest that the CDM could account for marginally more variance than the GML, and it provides sensible dr values, which is good evidence that the CDM is a better model than the GML. Davison and Jenkins did note that the dr values for the 0-deg condition were not zero, as we might have expected; however, the a values were not zero either, so they hypothesised that this may have been the result of slight component discrimination of reinforcer rates according to some form of win-stay lose-shift strategy. Equation 3 should be preferred to Equation 2 as it predicts that any changes in the discriminability of the components will affect dr, which naturally predicts changes in a – an observation that cannot be accounted for by Equation 2.

A further advantage of the CDM is that it does not struggle to explain the common results of concurrent VI EXT experiments. The GML necessarily predicts that responding should be exclusive to the VI alternative; however, a complete absence of responding on the extinction alternative rarely occurs (Davison & Hunter, 1976). The CDM, on the other hand, easily accommodates these data: if R2 were 0 in Equation 3, then it would reduce to:

B1/B2 = c·dr (Equation 4)

So the ratio is noninfinite according to Equation 4, unless the discriminability (dr) or bias (c) is infinite. This is consistent with the findings from concurrent VI EXT experiments: the response ratio in stable schedules will be constant when the discriminability and bias are constant, and independent of the reinforcer ratio. So when discriminability is not perfect, some responses will be misallocated to the extinction alternative. However, Todorov, Castro, Hanna, Bittencourt de Sa, and Barreto (1983) found that sensitivity to relative reinforcer rates decreased as subjects became progressively more exposed to concurrent VI VI schedules, which suggests that exclusive choice may be possible, given enough time for preferences to stabilise.
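The reduction to Equation 4 is easy to verify: with R2 = 0, Equation 3 collapses to c·dr whatever the value of R1. A minimal check (my own sketch):

```python
def cdm_response_ratio(r1, r2, dr, c=1.0):
    """CDM (Equation 3): B1/B2 = c(dr*R1 + R2) / (dr*R2 + R1)."""
    return c * (dr * r1 + r2) / (dr * r2 + r1)

dr, c = 4.0, 1.0
for r1 in (10, 40, 160):  # any VI reinforcer rate on the rich alternative
    ratio = cdm_response_ratio(r1, 0, dr, c)
    assert abs(ratio - c * dr) < 1e-12  # always c*dr: finite, rate-independent
print("With R2 = 0 the predicted response ratio is always c*dr")
```

So unless discriminability is perfect, the model predicts a fixed, finite share of responses on the extinction alternative, which is exactly what is observed.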

Davison and Jones (1995) examined this possibility by investigating the effects of extending the reinforcer ratios up to 160:1, including a VI EXT condition. This procedure not only tested the assumption of Todorov et al. (1983), but also directly compared the predictions made by the GML and the CDM in extreme choice. They arranged concurrent VI VI schedules in which the log reinforcer ratios ranged between +3 and -3 and the alternatives were signalled by light intensity; the brighter alternative had a probability of being reinforced that varied from 0 to .9. The GML (Equation 2b) and the CDM (Equation 3b) were then fitted to the data for individual birds.

The results showed that the GML accounted for between 87% and 99% of the variance, and the sensitivity values were lower than usually observed, ranging from .36 to .6 (see Figure 1). When the GML was fitted to less extreme values (-1 to +1), both the amount of variance accounted for (93-100%) and the sensitivity values (.48 to .71) increased (see Figure 2).

Figure 1: Log response ratios as a function of obtained log reinforcer ratios for Bird 21-26. The straight line was the best fit using Equation 2b. (Taken from Davison and Jones, 1995).

Figure 2: Log response ratios for each subject as a function of log obtained reinforcer ratios. The line of best fit to Equation 2b by least squares linear regression to the five central data points with log reinforcer ratios between -1 and +1. (Taken from Davison and Jones, 1995).

Eighteen of the twenty-four data points from outside the usual range (-1 to +1) fell significantly below the level of choice predicted by the GML. Residual values from the fits were consistently positive at the more extreme negative log reinforcer ratios, but consistently negative at the more extreme positive log reinforcer ratios (shown in Figure 3). These deviations cannot be predicted by the GML, and they indicate a consistent departure from matching at the extreme ranges of choice – suggesting a non-linear function.

Figure 3: The deviation of the obtained log response ratios from those predicted by the fitted lines in Figure 2 using the GML equation, as a function of the obtained log reinforcer ratios. (Taken from Davison and Jones, 1995).

Figure 4: The deviation of the obtained log response ratios from those predicted by the fitted lines in Figure 5 using the CDM equation, as a function of the obtained log reinforcer ratios. (Taken from Davison and Jones, 1995).

The CDM, on the other hand, can account for these systematic deviations from linearity (as shown in Figures 4 and 5). It accounted for between 97% and 99% of the variance for data across all reinforcer ratios. For the group data, the confusion parameter, p, was .12, which means that 12% of reinforcers were misallocated. Davison and Jones (1995) concluded that the CDM was a better fit for the extreme choice data because it was able to account for systematic deviations that occurred outside the normal range of reinforcer ratios. In addition, they found that in the VI EXT condition there was a statistically nonsignificant trend (z = .09, p > .05) for the group, and this trend was also nonsignificant for each individual. This suggests that the observation made by Todorov et al. (1983) – that sensitivity falls over prolonged exposure to extinction schedules – was not supported by the evidence.

Figure 5: Log response ratios as a function of obtained log reinforcer ratios, also showing the predictions of Equation 3b when fitted to the data. (Taken from Davison and Jones, 1995).


Even though the generalised matching law can account for the results obtained by various studies, it contains a number of inherent flaws which demonstrate that, at the very least, it is incomplete, and at worst it is wrong: its inability to account for nonexclusive choice on concurrent VI EXT schedules, its prediction of linearity despite clear S-curves as reinforcer ratios become more extreme, and the lack of interpretability of its sensitivity to reinforcement parameter, a. If there were no alternative to the GML, a reasonable argument could be made in favour of continuing its use, but ever since Davison and Jenkins (1985) outlined the contingency-discriminability model, the growing evidence has continually suggested a move away from the GML.

As well as avoiding the shortcomings of the GML with regard to the linearity assumption, the CDM also has the advantage that the main component of the equation has a clear and interpretable mechanism; that is, behaviour is allocated according to the apparent rate of reinforcement – organisms can become confused over where a reinforcer came from. The GML has no such mechanism, as sensitivity to reinforcement is simply a mathematical description with an ad hoc explanation, rather than carrying specific explanatory and predictive value.

A number of studies have attempted to disprove the CDM, but most, if not all, have suffered from similar misunderstandings of the theory. Fatal methodological flaws, such as using highly disparate stimuli or assessing results across the usual reinforcer rates, have resulted in the mistaken conclusion that the CDM’s nonlinearity assumption is incorrect (for example, Baum, Schwendiman, and Bell, 1999) – but, as demonstrated by Davison and Jenkins (1985), under those conditions the two models make the same predictions, so we should not expect to see any differences. However, there are some serious difficulties that the CDM faces: those of concatenation and practicality (Davison and Nevin, 1999). Whilst concatenation is not a serious issue, in that the same problems also apply to the GML, it is still important to find a way in which it can be implemented to create a comprehensive model of choice behaviour.

In summary, the contingency-discriminability model has been demonstrated to be a superior model for explaining choice behaviour, compared to the generalised matching law, both conceptually and quantitatively. Further research is necessary to work out the finer details of the contingency-discriminability model and to work it into a broader framework of behavioural theory, but it appears to be a step closer to a fuller understanding of behaviour. As the data continue to accumulate, the picture becomes clearer, and we can see how the current dominant choice theory (the GML) is slowly making way for the new model (the CDM). The GML has thus “failed” in the way scientific theories generally “fail”: not with a bang, but a whimper.


1: The observation that response ratios are less extreme than the corresponding reinforcer ratios; that is, imperfect matching with a slope of less than 1.

2: A concurrent ‘variable interval – extinction’ procedure is one in which two schedules of reinforcement run simultaneously, the former providing reinforcement after a random amount of time has passed since the last reinforcer was delivered, and the latter providing no reinforcement at all.


Baum, W. M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22, 231-242.

Baum, W. M., Schwendiman, J. W., & Bell, K. E. (1999). Choice, contingency discrimination, and foraging theory. Journal of the Experimental Analysis of Behavior, 71, 355-373.

Davison, M. (1988). Concurrent schedules: Interaction of reinforcer frequency and reinforcer duration. Journal of the Experimental Analysis of Behavior, 49, 339-349.

Davison, M., & Hogsden, I. (1984). Concurrent variable-interval schedule performance: Fixed versus mixed reinforcer durations. Journal of the Experimental Analysis of Behavior, 41, 169-182.

Davison, M., & Hunter, I. W. (1976). Performance on variable-interval schedules arranged singly and concurrently. Journal of the Experimental Analysis of Behavior, 25, 335-345.

Davison, M., & Jenkins, P. E. (1985). Stimulus discriminability, contingency discriminability, and schedule performance. Animal Learning & Behavior, 13, 77-84.

Davison, M., & Jones, B. M. (1995). A quantitative analysis of extreme choice. Journal of the Experimental Analysis of Behavior, 64, 147-162.

Davison, M., & McCarthy, D. (1988). The matching law: A research review. Hillsdale, NJ: Erlbaum.

Davison, M., & Nevin, J. (1999). Stimuli, reinforcers, and behavior: An integration. Journal of the Experimental Analysis of Behavior, 71, 439-482.

Davison, M., & Tustin, R. D. (1978). The relation between the generalized matching law and signal-detection theory. Journal of the Experimental Analysis of Behavior, 29, 331-336.

Findley, J. D. (1958). Preference and switching under concurrent scheduling. Journal of the Experimental Analysis of Behavior, 1, 123-144.

Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243-266.

Logue, A. W., & Chavarro, A. (1987). Effects on choice of absolute and relative values of reinforcer delay. Journal of Experimental Psychology: Animal Behavior Processes, 13, 280-291.

Miller, J. T., Saunders, S. S., & Bourland, G. (1980). The role of stimulus disparity in concurrently available reinforcement schedules. Animal Learning & Behavior, 8, 635-641.

Todorov, J. C., Castro, J. M., Hanna, E. S., Bittencourt de Sa, M. C., & Barreto, M. d. (1983). Choice, experience, and the generalized matching law. Journal of the Experimental Analysis of Behavior, 40, 99-111.