Posted: Mar 13, 2012 11:06 pm
VazScep wrote:
mizvekov wrote:I have one question, though: is it of any utility here to do something like this:
Code: Select all
`inst (\v -> if v == "p" then (theoremTerm (inst (\v -> Var v) axiom3)) else Var "b") axiom1`

I.e., getting the term of a theorem (using theoremTerm) and then using that to instantiate another axiom?
Note that
Code: Select all
`inst (\v -> Var v)`
is just the identity.

Yeah, I know, but that wasn't what I meant. I used it just so the code would be shorter and not distract from the point.
I am asking whether it is at all useful to use a Theorem as a Term to instantiate an axiom.
More specifically, the part in bold below:
Code: Select all
`inst (\v -> if v == "p" then [b](theoremTerm (inst (\v -> Var v) axiom3))[/b] else Var "b") axiom1`

And how it is used as a Term to instantiate axiom1.
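
For what it's worth, the idea can be made concrete. The definitions below are my own minimal reconstruction of the kernel's `Term`, `Theorem`, `theoremTerm` and `inst` (the actual kernel code isn't quoted in this thread), so treat it as an illustration of the question rather than the real API:

Code: Select all
```haskell
-- Hypothetical, simplified reconstruction of the kernel pieces under
-- discussion (monomorphic String variables for brevity).
infixr 8 :=>:

data Term = Var String
          | Term :=>: Term
          | Not Term
          deriving (Show, Eq)

newtype Theorem = Theorem Term deriving (Show, Eq)

-- Extract the statement of a theorem.
theoremTerm :: Theorem -> Term
theoremTerm (Theorem t) = t

-- Instantiate every variable of a theorem via a substitution function.
inst :: (String -> Term) -> Theorem -> Theorem
inst f (Theorem t) = Theorem (go t)
  where go (Var v)    = f v
        go (p :=>: q) = go p :=>: go q
        go (Not p)    = Not (go p)

axiom1, axiom3 :: Theorem
axiom1 = Theorem (Var "p" :=>: Var "q" :=>: Var "p")
axiom3 = Theorem ((Not (Var "p") :=>: Not (Var "q")) :=>: Var "q" :=>: Var "p")

-- The question from above: substitute axiom3's statement for "p" in axiom1.
example :: Theorem
example = inst (\v -> if v == "p" then theoremTerm axiom3 else Var "b") axiom1
```

Since `inst` only ever maps theorems to theorems, using another theorem's statement as the substituted term is sound here; the result is an instance of axiom1 whose antecedent happens to be axiom3's statement.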

VazScep wrote:
It wouldn't surprise me if "inst" turned out to be useful. In this style of theorem prover, there is a lot of bootstrapping. You build a load of machinery on top of the kernel and then completely forget about the basic inference rules. This doesn't mean that the inference rules are useless. It just means that I haven't played with them much!

I see.
But I am trying to understand why it is designed the way it is. For instance, why is it not something like the code below, which I think is a bit simpler:
Code: Select all
module Propositional (Theorem, Term(..), axiom1, axiom2, axiom3, mp) where

infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show, Eq, Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

axiom1 :: Term a -> Term a -> Theorem a
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term a -> Term a -> Term a -> Theorem a
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

axiom3 :: Term a -> Term a -> Theorem a
axiom3 p q = Theorem ((Not p :=>: Not q) :=>: q :=>: p)

-- Modus ponens in the list monad: the empty list signals failure.
mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
mp _ _ = []

And then your example from Mathematical Logic becomes:
Code: Select all
*Propositional> :{
*Propositional| do
*Propositional|   step1 <- mp (axiom2 p (p:=>:p) p) (axiom1 p (p:=>:p))
*Propositional|   return $ mp step1 (axiom1 p p)
*Propositional| :}
[[Theorem (Var "p" :=>: Var "p")]]
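
The same derivation can also be checked as a standalone file. Here is a self-contained version with the proposed kernel inlined (axiom3 is omitted, as this derivation doesn't use it; a catch-all `mp _ _ = []` case is included so a non-implication premise fails with `[]` rather than a pattern-match error):

Code: Select all
```haskell
infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show, Eq, Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

axiom1 :: Term a -> Term a -> Theorem a
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term a -> Term a -> Term a -> Theorem a
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

-- Modus ponens in the list monad: the empty list signals failure.
mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
mp _ _ = []

-- |- p :=>: p, via axiom2 and two instances of axiom1.
proof :: [[Theorem String]]
proof = do
  let p = Var "p"
  step1 <- mp (axiom2 p (p :=>: p) p) (axiom1 p (p :=>: p))
  return (mp step1 (axiom1 p p))
-- proof evaluates to [[Theorem (Var "p" :=>: Var "p")]]
```

The nested list mirrors the two uses of the list monad: the outer level for the first modus ponens, the inner for the second.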

VazScep wrote:I just never really bought that. For starters, no serious Lisp user would dream of Lisp hacking without a facility to highlight matching parentheses. (go to Options->Paren Match Highlighting in Emacs). And with decent syntax highlighting and appropriate indentation, I never found readability an issue in Lisp. That's not cheating: every language cares about appropriate indentation and syntax highlighting, and I always have paren-match highlighting turned on when hacking in ML or Haskell.

Yeah, I remember all those tricks, including using indentation and closing the bracket as soon as you open it.
But the problem is that most nesting is done with parentheses alone, and this is a major pitfall compared to, say, most other languages, where the little nesting you have is usually a mix of different kinds of structures, many with distinct opening and closing brackets.
So, say, {([[({})]])} is a lot easier to make sense of than (((((())))))

VazScep wrote:
In fact, when I'm tallying my shopping lists or budgeting, I tend to use Lisp so that I can write in this style:

Code: Select all
(+ (* 2 37) ; beans
   400      ; detergent
   127      ; meat
   1000     ; extremely expensive cuisine that helps make this look less of a student's shopping list
   )

and because of Lisp's syntax, I can put my cursor at the end of subexpressions such as (* 2 37) and evaluate just that part by hitting C-x C-e.

That is pretty cool, yes.

VazScep wrote:
This is probably not much of an argument. I am, after all, stupid enough to think that Emacs and xmonad make me substantially more productive than someone using the latest gimmicks from Apple.

I was more into ratpoison/GNU Screen/vim for a while.
I still use the latter two on the Linux box, but sadly, getting ratpoison to work with modern applications is an uphill battle.

VazScep wrote:
Well, that's the sort of argument I was up against. The problem is that, even as someone who has ditched Lisp and never looked back, this doesn't match my experience at all. I just never had the sort of problems people complain about, and so my suspicion is that the complaints can be explained in other ways (such as a simple case of not getting used to it). On the other hand, I don't see any empirical evidence (or any clue how you would reliably gather it) that Lisp really is difficult for humans to read.

Well, getting used to something can also explain why someone thinks something very hard is very easy.
But anyway, I also have no idea how to evaluate that.

VazScep wrote:
It is also worth mentioning that John McCarthy never intended Lisp programmers to write s-expressions directly. The s-expressions were supposed to just represent the parse trees, and an alternative m-expression syntax would be built over the top. In questioning Lisp's design choice here, it is always worth remembering that when given the choice, the Lisp users favoured the s-expressions over the m-expressions, and so the m-expression syntax was forever scrapped.

Now that is funny, almost ironic.

VazScep wrote:So long as I'm sober, I can probably keep my emotions under control

VazScep wrote:On religious flame wars, I thought I should mention Erik Naggum, a Lisp hacker and major usenet contributor whose polemical style rivals Christopher Hitchens. I was reading Erik's posts back when I first got interested in Lisp. They were always a treat and generally contained awesome ideas phrased in his characteristically aggressive style. The online Lisp community felt quite depressing when Naggum stopped posting due to ill health. He died in 2009 at the age of 44.

Can Lisp do what Perl does easily?
On XML

Yes, I remember reading some of his essays.
I particularly liked the one on time, which can be found here: http://naggum.no/lugm-time.html