Posted: Apr 28, 2010 10:07 am
by Luis Dias
FACT-MAN-2 wrote:

So the scientists who stated the 95% confidence are dumber than you are, is that it? I rather think that's an unlikely situation.


Me too. Much more probably, they never stated it the way you did; they just let you state it that way because it's "simpler" to the masses, and convincing.

They've been at this for 20 years, how long have you been at it? They know exactly what the characteristics and attributes are of their analytical basis and they understand their veracity.


Argument from authority? Already? Man, that was easy.

They openly state the fact that their predictions are based on estimates of emissions that will occur between now and 2100, and it is obvious that if those estimates prove wrong, given what emissions actually do between now and then, their predictions will not come to pass and something else will happen instead. In other words, if actual emissions exceed their estimates, the mean annual temp in 2100 will be higher than what they've predicted; if they are lower, the mean annual temp in 2100 will be lower.


You make it sound as if the only assumption at play is the emissions. Ridiculous. There are hundreds of assumptions in the models, specifically about how sub-systems work, which are "sub"-models themselves, and there are hundreds of these, and about how they interact with each other. These models must assume a lot (100% confidence implied mathematically), and then they output a wide array of results, which are then statistically analyzed so that a narrow window of one or two sigma becomes the margin of error. This window is required, because otherwise the models would have no lower or upper limits of warming. This is the kind of 95% confidence these people are talking about.
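
Here is a caricature, in Python, of how that kind of window gets built. The numbers are invented and this is not how any particular group does it, but the shape of the exercise is the same: pool whatever the runs produce and quote mean plus or minus two sigma of that spread.

[code]
# Invented numbers, purely illustrative: the "95% range" here is just
# the statistical spread of the runs, not a validated probability
# about the real world.
import statistics

run_outputs = [2.1, 2.8, 3.4, 4.0, 2.5, 3.1, 3.7, 2.9]  # warming per run, in degrees C

mean = statistics.mean(run_outputs)
sigma = statistics.stdev(run_outputs)

print(f"'95% range': {mean - 2 * sigma:.1f} to {mean + 2 * sigma:.1f} degrees C")
[/code]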

There is simply no other way to handle this aspect of making a prediction. We don't know exactly what trajectory emissions will follow or exhibit, hence we have to estimate what we think they will follow, and that's done using historical emissions data, consensus economic forecasts, and other criteria that bear on what we might expect in terms of emissions. This is then expressly stated as an assumption.

If you know of a better way to do this, I'm all ears.


It's not a question of "better". It's a question of overselling your goods. This is akin to saying that you have "95%" confidence that a never-before-tried treatment (hinged on assumption upon assumption) will just cure cancer. If I express distrust of that number and call it "ridiculous", asking whether I have a better way to cure cancer is no comeback at all, just a failure to understand the issue at play.

As for any "axioms" that may be in play, I think either 1) there aren't any


Oh fuck. :lol:

...or 2) if there are, they too are treated with copious amounts of good judgement or analyses that determine their force of relevance and meaning in the equation. Do you actually think these fellows are going to publish a report that makes a prediction that can easily be shot down because any or all or some of its underlying "axioms" are less than credible? WTF kind of scientist would do that? None that I know of.


What do you mean "less than credible"? Do you understand the concept of graceful degradation? Do you understand what happens when you start multiplying several high (but not 100%) degrees of confidence in sub-systems to get a "meta" confidence?

Let's spend some time here. Imagine a scenario, much, much simpler than GCMs. Imagine a Model (M) that integrates 10 sub-models. These sub-models are the result of "serious" physics work. They are not derived directly from thermodynamics, but observed and treated statistically, with robust results. Every single one of these sub-models has a high degree of confidence, say 95%. Question: what degree of confidence should we put in the bigger M if we assume that the interaction between these sub-models is perfect, flawless? 95%? Not even close. The "real" result will depend upon the interactions involved, but if the 10 sub-models are equally important, then it's a simple matter of probabilities.

You just have to multiply 95% by itself ten times, which gives you roughly 60%. And this is considering that the bigger M consists only of a "perfect" alignment of sub-models. No. The bigger M is itself a model of the interactions between sub-models. The resulting confidence is even lower.
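
The arithmetic, for whoever wants to check it, assuming (generously) that the ten sub-models fail independently and that M needs all ten to hold:

[code]
# Toy arithmetic, not a GCM: if the composite model M needs all ten
# sub-models to be right, and each is right with probability 0.95,
# the chance that the whole chain holds is 0.95 ** 10.
p_sub = 0.95
n_sub = 10

p_composite = p_sub ** n_sub
print(f"{p_composite:.1%}")  # ~59.9%
[/code]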

This is why "multi-models" are used: nobody trusts any single model. But is this enough? Is creating a lot of low-confidence models "equal" to creating a high-confidence model? Of course not. But yes, it's "the best we got". We agree on that.
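
A toy Monte Carlo of why averaging doesn't buy you that (made-up numbers, nothing to do with any real ensemble): if the models share a wrong assumption, the ensemble mean averages away their individual noise but keeps the shared error intact.

[code]
# Made-up illustration: each "model" = truth + shared bias + its own noise.
# Averaging the ensemble shrinks the noise, not the shared bias.
import random

random.seed(0)
truth = 2.0         # pretend "true" warming, arbitrary units
shared_bias = 1.0   # error common to every model (a bad shared assumption)
n_models, n_trials = 10, 1000

errors = []
for _ in range(n_trials):
    projections = [truth + shared_bias + random.gauss(0, 0.5)
                   for _ in range(n_models)]
    ensemble_mean = sum(projections) / n_models
    errors.append(ensemble_mean - truth)

print(sum(errors) / n_trials)  # ~1.0: the ensemble mean still carries the bias
[/code]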


Just don't oversell it.

You make these assertions because you know we can't get into all those details here, and that makes them easy targets for claiming we have to take them (or the scientists are taking them) as "absolute truths" or "100% true representations of the world."


It's a basic observation, FM. The fact that you are unnerved by it is of no consequence to me at all.

But the proof is in the pudding: a 787 airliner flies, and flies well and predictably, which proves that any and all "axioms" and assumptions that underpin the science and engineering by which a 787 is built are indeed true representations of the world.


Geee. You don't know much about engineering or the philosophy of science now, do you? What is a "true representation of the world"? Something that works. Airplanes have been around for 100 years, and for that entire time they have been tested and tested and worked upon. Yes, we know fairly well how air flows over the fuselage and the wings. To compare this with the complexity of GCMs is silly.

In climate science, the modelers have managed to make very good predictions to date, which proves the validity of whatever "axioms" may underlie their science.


Doubly wrong. First, they did not make "very good predictions". They got some right, they got some wrong. That does not count as "very good" to me. With airplanes, if you get "some wrong" predictions, the plane crashes. Fail. Second, the right results do not "prove the validity" of the axioms. Utter intellectual rubbish. Even if it were true that the models performed "admirably" (which they didn't, btw), it would only show that they hadn't yet been disproved, falsified.

Luis Dias wrote:
But there's a problem here. These assumptions are called "models".

False. Models are mathematical representations of real phenomena. When they are run backward and the output they produce matches actual data, we can have a high degree of confidence that the output they produce when run forward does indeed represent what's going to happen.


"Real phenomena". There's an oxymoron for you... Either things are "real" or they are "phenomena". Take your pick. Oh, but perhaps I'm being too technical here... nevermind. Your "false" sentence is awkwardly placed, since you only confirm what I say. The assumptions are models, even if we have a "high degree of confidence", which is a placement for "cross your fingers". Correlation is not causation, and to create a model that superimposes itself neatly on past data is not impressive, at all. All modelers know this.

You need to take modeling 101.


No I don't. You otoh...

I repeat, you need to take modeling 101.


Yes, you repeat a lot of shit.

The models have been under development for 25 years or more. You talk like they were invented yesterday.


25 years is not even a period the IPCC would define as "climate". This means that any model invented during this period can never really have been tested against genuinely unknown and unpredictable data, i.e. future data. Worse, when we can finally make that assessment (in 5 years' time), we can only make it for the first models, which are already off by some margin. Such an assessment will be ignored, rightly, since the modelling has been "perfected" since then. But that means verification and falsification will still not occur for at least 15 years.

Someone here needs to read Popper. Pronto.

It's pretty easy for an armchair commentator to make such pronouncements.


And you are what here, precisely?

The problem is they run counter to the conclusions reached by some 20,000 professional climate researchers and scientists and just about every climatologist on the planet. I trust you do know this.


No, I do not know this. Nor do you, or you would make a good case for it. Mind you, I'm being very specific: ask the climatologists who are not directly connected with the modelling teams what degree of confidence they place in the models. You'll find a fascinating answer. This does not mean they think GW is not a problem.

To hear you tell it, we should just give up the whole endeavor and let it go at that and to hell with trying to learn what the future might have in store for us. Fortunately, others don't share this idea; they keep trying, they keep working.


What do you know about their ideas? You talk as if you know their minds. But you do not. You only know what some climate blogs feed you. And they work? Wow. That is impressive. OMG.

Luis Dias wrote:
Finally there's the issue of averaging multi-model runs, as if they were "all equally good", after dismissing others because they gave too much warming or too little!

It isn't done this way and you should know better than to claim it is.


Yes, yes it was. It was done exactly that way in AR4.

Luis Dias wrote:
After all of this, to still utter the silly proclamation of a "95%" chance that things will play out like X and Y, without the multi-tonne weight of caveats annexed to it, is disingenuous and misleading. Fortunately, science is catching up to this common-sense basic notion. The next IPCC report will address this issue better than the one we have now, although I still think it won't address it fully or honestly. If it did, we would be brought back to the early-90s conclusions, and that would be politically catastrophic if we want to have any nation doing something "about it".

You're stating the obvious, which is, climate science gets better as time passes and more work is done. This can be expressed as gaining better resolution as time passes.


Thing is, as climate science gets better, the uncertainty will increase. That's kind of unexpected, innit? Unless one has oversold the product in the first place. Just ask Trenberth; he has some good articles on this and on what to expect in the next assessment.

The idea is we work with what we have.


Just. Don't. Oversell. It.

You're handing me straw men, FM. I never said otherwise.

But when you oversell a product, we have problems. Care to think about what they are?

It is true for example that if we had a temperature gauge located on every 2,500 square meters of the planet, including on the oceans and in polar regions, and these gauges were all identical and set up identically, we'd have little trouble measuring the temperature of the planet as a mean average, no problem at all. But we don't enjoy that kind of instrumentation; we only have partial coverage of the planet and are therefore required to develop and establish some very sophisticated ways of using the instrumentation we do have to obtain a reasonably accurate determination of the earth's mean annual temperature, aided nowadays by satellite measurements.

Exhaustive efforts have been made to determine a good paleo record of earth's temperature so that we have a reasonably accurate history from which to extrapolate the future. Is it perfect to the nth degree? No, of course it isn't. But it is good enough for what we're trying to achieve and it gets better with each passing year.


What are you "trying to achieve" exactly?

But here's the thing: we know from how much CO2 is in the atmosphere right now that earth's mean annual temperature is going to rise over time; we know this as well as we know the sun's gonna come up in the morning. So the question becomes, how much is it going to rise? We can infer that from how much CO2 is in the atmosphere too and the rate at which it is accumulating there. That's chemistry and physics and the known behavior of GHGs. These kinds of studies and analyses also point to a rise in earth's mean annual temperature in the same range the models predict, somewhere between 2 and 7 degrees C in the year 2100. Wow, whatta ya know, two completely different approaches yield essentially the same results.


You talk as if they were independent of each other... I mean, wow. The models assume those very physical behaviors, d'oh!
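
To make the non-independence concrete, here is the usual back-of-envelope version of that "chemistry and physics" estimate. Notice what it runs on: an assumed forcing law and an assumed sensitivity, the very same ingredients the models bake in. The constants below are the standard textbook values, not something I measured.

[code]
# Back-of-envelope, not a model of mine: equilibrium warming from CO2
# using the standard logarithmic forcing law and an assumed sensitivity.
import math

def equilibrium_warming(co2_ppm, co2_preindustrial=280.0, sensitivity=0.8):
    """Warming in K for a given CO2 concentration.

    sensitivity is in K per W/m^2; 0.8 corresponds to roughly 3 K
    per doubling of CO2, which is itself an assumption.
    """
    forcing = 5.35 * math.log(co2_ppm / co2_preindustrial)  # W/m^2
    return sensitivity * forcing

print(equilibrium_warming(560))  # ~3.0 K for a doubling of CO2
print(equilibrium_warming(800))  # more CO2, more warming -- same assumptions
[/code]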

You appear to think you know better how all this should be done than the professionals who are actually doing the work. In that light, what would you propose as a better means of projecting future trends or events or expressing them or describing them? Or are you of the mind that we ought to just give it all up and stop trying?


Stop overselling them, that is what I propose. What they are "doing" should be checked in a better way. Right now, as it is, it's basically an unfalsifiable enterprise, where 30 years haven't even passed since the creation of the first simplistic and probably very wrong models. Testing the models against the very data they could have been tuned ("fudged") to in the first place is a practice known to have its troubles; see the 2008 financial crisis and the econ models, see Enron, see the 2001 bubble all over again.

It's scientifically fragile, and what these people are asking of us is to "trust" them: they aren't "bad" guys, they are good guys, therefore they won't "fudge" the models to create the impression that they are very good at "predicting" what they already know about the past. Of course, one can decide to "trust" them, to trust their deontology and their "disinterest", etc. But then we are saying that the models are right because the people who made them are saints. This is not science. Good science never depends upon "trust", but upon criticism. And as I said, you cannot "criticize" the models, for enough time has not passed to give them a proper falsification test.

I'm all ears.


That would be anatomically comical.