Posted: May 30, 2019 8:34 pm
by Cito di Pense
BWE wrote:but so far there has not been a case where a directly inconsistent behavior has been discovered.


Maybe that is because the observations we make are not observations of propositions. Or maybe you're remarking on some other point; it's just not clear to me what your remark is about. As far as is obvious, the LNC is about propositions.

Yes, we make propositions about our observations and treat them as hypotheses or theories, but I think there are very good reasons why we don't end up with propositions of this type that directly contradict one another. I don't know. In your line of work, it may come up, but those details are for you to provide. You're waxing philosophical about "directly inconsistent behavior".

There are opinions that directly contradict each other, sometimes even coming from the same person, but we know a little about what causes that, and we don't take it all that seriously. Or maybe we both do and don't take it seriously. You know how that goes.

BWE wrote:that suggests something kind of profound about models. Why do they work so well?


Watch out for deepities, BWE. It's hard to show that a deepity is not just something happening between your ears. Bend a spoon. It's already been touched on in this thread that there's a parallel to the anthropic principle in your question: when stuff doesn't work well, or stops working well, we throw it away or file it under "less adequate approximation". Why do models work? Yeah, we don't want tunas with good taste, we want tunas that taste good.