Posted: Jul 01, 2015 1:57 am
by Leucius Charinus
Spearthrower wrote:
Leucius Charinus wrote: *snip*

Anyway that's something I put together some time ago but looks like it fits the discussion of the OP.

Criticisms are welcomed.




Great post! :cheers:

It's 2am, and I expect my brain's not really going to explain this well, but the one key element that I feel is lacking in your model is something indicating that the process continues looping around and around over the years and generations as new evidence comes to light.


Thanks for making that point. It's probably not clear from the diagram, but the process being described is meant to be completely dynamic, that is, iterative. Change one hypothesis and the hypothetical conclusion may change. Add one new item of evidence, allocate hypotheses to that item, and then rerun the process. And so on. The schematic was designed to represent a computer simulation, and as such it is dynamic. That was the intention.
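To make the "rerun" idea concrete, here's a minimal sketch of that loop. The evidence names and the deliberately trivial derivation rule are my own illustrative assumptions, not the actual simulation behind the schematic:

```python
def derive_conclusion(hypotheses):
    """Derive a hypothetical conclusion from the currently selected
    hypotheses (here just a sorted, joined summary for illustration)."""
    return " + ".join(sorted(hypotheses.values()))

# Each item of evidence carries one currently selected hypothesis.
hypotheses = {
    "inscription_A": "authentic",
    "manuscript_B": "4th-century copy",
}

c1 = derive_conclusion(hypotheses)

# Change one hypothesis and rerun: the conclusion can change with it.
hypotheses["inscription_A"] = "later forgery"
c2 = derive_conclusion(hypotheses)

assert c1 != c2  # the conclusion is sensitive to the hypothesis set
```

The point of the sketch is only that the conclusion is a function of the hypothesis set, so every tweak to that set means a fresh run of the whole process.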


The 'theoretical conclusions' don't exist in a bubble, isolated from the past, and nor, really, do the items of evidence; they exist within a tradition or a paradigm where 'theoretical conclusions' will tend to amass and be consilient with each other, but a new discovery can throw them all out or transform them beyond recognition.


I totally agree that the theoretical conclusions can and do influence the selection of hypotheses one chooses to associate with the evidence items. This aspect of the process is not really made explicit in the schematic, and perhaps it should be. Theoretical conclusions, via consilience, will ultimately begin to influence researchers to select those hypotheses (associated with the evidence) that generate known conclusions. The question then needs to be asked whether there is also a process of consilience in the selection of hypotheses themselves, and there are obviously many such examples of consensus.

And yes, the schematic does allow the entry of new evidence and the mandatory creation of hypotheses to represent this new evidence in the system before the entire set (along with the new hypotheses about new evidence) is submitted to the process of deriving a new hypothetical conclusion.
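That entry step can be sketched the same way. Again, the names and the one-line derivation rule are illustrative assumptions, not the schematic's actual mechanics:

```python
def derive_conclusion(hypotheses):
    """Trivial stand-in for the derivation step: summarise the
    currently selected hypotheses."""
    return " + ".join(sorted(hypotheses.values()))

hypotheses = {"inscription_A": "authentic"}
before = derive_conclusion(hypotheses)

# New evidence must enter the system with at least one hypothesis
# attached before the entire set is resubmitted to the process.
hypotheses["papyrus_C"] = "genuine 2nd-century fragment"
after = derive_conclusion(hypotheses)
```

The "mandatory creation of hypotheses" shows up here as the requirement that new evidence cannot enter the derivation without a hypothesis attached to it.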

While the model obviously hopes to simplify this chaotic function of thousands of human minds over generations, perhaps a simple arrow from theoretical conclusions back to the evidence would be enough to suggest that the value of these conclusions is dependent on future corroborating or falsifying evidence and that, as such, they are all provisional.


I will take that observation on board, because the process described is completely iterative: change one hypothesis and rerun; tweak another hypothesis and rerun; introduce a new hypothesis about old evidence, or a new hypothesis about new evidence, and rerun.


I guess the basic question is this: should the hypothetical conclusions feed back into the series of hypotheses being used as input? In theory, probably not. But in practice, they obviously can and do feed back. That is how I see it at the moment. Some people, for example, could begin the process with a hypothetical conclusion and then examine how the hypotheses might be arranged and defined in order to reach that conclusion. Is this still doing history?



Thanks for your comments.