Posted: Mar 14, 2012 12:35 am
by VazScep
mizvekov wrote:
I see :)
But I am trying to understand why it is the way it is. For instance, why is it not something like the below, which I think is a bit simpler:
Code: Select all
module Propositional (Theorem, Term(..), axiom1, axiom2, axiom3, mp) where

infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show,Eq,Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

axiom1 :: Term a -> Term a -> Theorem a
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term a -> Term a -> Term a -> Theorem a
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

axiom3 :: Term a -> Term a -> Theorem a
axiom3 p q = Theorem ((Not p :=>: Not q) :=>: q :=>: p)

mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
mp _                    _            = []
Yes, this is an excellent question. In fact, this is precisely the textbook definition of propositional logic, and was my first attempt at an implementation. It makes for a much simpler and elegant logical kernel. Specifically, you have made the axioms truly schematic. In my implementation, they are only "somewhat" schematic.

But consider proving a theorem such as, say, "P & Q -> Q & P" (see the Utils module for how conjunction can be defined). Now, what would P and Q be? In most presentations of propositional logic, you start schematic and you stay schematic, so you'd have a function like:

Code: Select all
conjComm :: Term a -> Term a -> Theorem a


This function would take two terms p and q and prove the theorem p & q -> q & p. To implement it, you'll eventually end up using the functions axiom1 and axiom2 and axiom3 and passing in various terms derived from p and q.
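The full conjComm takes a page of Hilbert-style steps, but the flavour of a schematic proof can be seen in miniature with a derivation of p :=>: p (the name `identity` is my choice here; the kernel definitions are repeated so the snippet stands alone):

```haskell
-- A tiny schematic proof in the same style as conjComm: given any term p,
-- derive the theorem p :=>: p by running the derivation from scratch.
-- (Kernel definitions repeated from the module above so this is self-contained.)
infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show, Eq, Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

axiom1 :: Term a -> Term a -> Theorem a
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term a -> Term a -> Term a -> Theorem a
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
mp _                    _            = []

-- The schematic proof: a fresh derivation is run for every p it is applied to.
identity :: Eq a => Term a -> [Theorem a]
identity p = do
  step1 <- mp (axiom2 p (p :=>: p) p) (axiom1 p (p :=>: p))
  mp step1 (axiom1 p p)
```

Applying it, `identity (Var "p")` runs the three-step derivation and yields the theorem p :=>: p; apply it to a different term and the whole derivation runs again.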

Just as the function axiom1 is a schematic axiom, the function conjComm is a schematic proof. It produces a different proof for every combination of p and q. Now suppose the proof is thousands of lines long (in this case it isn't, but proofs in industrial-strength provers frequently run to tens of millions of lines). Then proving "Term 0 & Term 1 -> Term 1 & Term 0" means running a few thousand lines of proof, and proving "Term 1 & Term 0 -> Term 0 & Term 1" means running another few thousand lines. That's bad news for efficiency.

So the alternative is to fake the schema with a simple instantiation function. Now you can prove "Term 0 & Term 1 -> Term 1 & Term 0" and then use "inst ((-) 1)" to switch between the two without having to rerun the whole proof.
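A minimal sketch of what such an instantiation rule might look like (the names `inst` and `instTerm` are my assumptions, chosen to match the "inst ((-) 1)" usage; Term and Theorem are repeated from the kernel above so the snippet stands alone):

```haskell
-- Sketch of an instantiation rule: renaming variables uniformly preserves
-- theoremhood, so the kernel can safely expose this. Applying it is a
-- single pass over the term, not a rerun of the proof.
infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show, Eq, Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

-- Rename every variable in a term.
instTerm :: (a -> b) -> Term a -> Term b
instTerm f (Var x)    = Var (f x)
instTerm f (p :=>: q) = instTerm f p :=>: instTerm f q
instTerm f (Not p)    = Not (instTerm f p)

-- Lift the renaming to theorems.
inst :: (a -> b) -> Theorem a -> Theorem b
inst f (Theorem t) = Theorem (instTerm f t)
```

With variables numbered 0 and 1, `inst ((-) 1)` maps 0 to 1 and 1 to 0, swapping the two sides in one traversal. A full kernel would generalise the rule to substitute arbitrary terms for variables (a function of type `a -> Term b`), not merely rename them.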

I regard this as one of the differences between templates and type polymorphism. C++ templates are schematic in the first sense. ML-style polymorphism is "somewhat" schematic in the second sense. Neither is more expressive than the other (unless you start talking about loading new stuff at runtime without access to the compiler --- see my earlier example with OCaml). But which one you want depends on how you're using them. The talk of explosions during template expansion is analogous to the explosion of proofs you'd get with your simpler logical kernel.

It is worth mentioning, I think, that Church's original paper on the Simple Theory of Types treated type variables schematically. It was Hindley and Milner who introduced ML-style polymorphism, which you add to simple type theory by providing a type instantiation rule. It doesn't buy you anything in terms of the expressive power of the logic, but it prevents explosions, and it's the route I followed in propositional logic.

Both ideas are brought together if you look at the inference rules of the theorem prover I use. There are two instantiation rules: one for variables, and one for types.

And then your example from Mathematical Logic becomes:
Code: Select all
*Propositional> :{
*Propositional| do
*Propositional|   step1 <- mp (axiom2 p (p:=>:p) p) (axiom1 p (p:=>:p))
*Propositional|   return $ mp step1 (axiom1 p p)
*Propositional| :}
[[Theorem (Var "p" :=>: Var "p")]]
But yeah. This is the way you'd do it with the simpler kernel.

mizvekov wrote:Yeah, I remember all those tricks, including using indentation and closing the bracket as soon as you open it.
But the problem is that most nesting is done with parentheses alone, which is a major pitfall compared to most other languages, where the little nesting you have is usually a mix of different structures, many with distinct opening and closing brackets.
So, say, {([[({})]])} is a lot easier to make sense of than (((((()))))).
Yeah, I can see the basic logic; I just can't identify with it. Both look like line noise to me. :grin:

You are saying that the important thing isn't the number of delimiters, but the variety of them. But in Haskell and ML, round brackets are basically the only delimiter. I see ML and Haskell structure mostly in the indentation, syntax highlighting, and blocking of code. You get all of that in Lisp too. At its worst, Lisp just has too many parentheses, which could be removed by imposing precedence rules, but I think that would make it harder for me to write macros.

mizvekov wrote:
:grin:
I was more into ratpoison/gnu screen/vim for a while.
Still use the latter two on the linux box, but sadly getting ratpoison to work with modern applications is an uphill battle.
Haha! I think we have a lot in common here. I better look into this ratpoison.

mizvekov wrote:Yes, I remember reading some of his essays.
I really liked his one on time, which can be found here: http://naggum.no/lugm-time.html
Sadly, I only came to know about him when he died.
This one passed me by somehow. I'll check it out now. :cheers: