Programming


Re: Programming

#81  Postby stalidon » Mar 13, 2012 3:41 pm

VazScep wrote:Generally, you're better off on Linux. In fact, the only reason I started using Linux and Emacs in the first place was because I was getting into Lisp.


Ok, maybe Cygwin as a compromise? I'll have Emacs with that. Or do you know of any acceptable Lisp 'IDE' for Win32?

PS: Once I've got it installed, how would I go about creating pink fractal flowers?
-
stalidon
 
Posts: 145
Age: 48
Male

Country: Argentina

Re: Programming

#82  Postby VazScep » Mar 13, 2012 3:57 pm

stalidon wrote:
VazScep wrote:Generally, you're better off on Linux. In fact, the only reason I started using Linux and Emacs in the first place was because I was getting into Lisp.


Ok, maybe Cygwin as a compromise?
If you want to start downloading Lisp libraries with the standard packaging tool (asdf-install), you won't have a choice. You need Cygwin to get it to work.

I'll have Emacs with that. Or do you know of any acceptable Lisp 'IDE' for Win32?
Corman Lisp has a full IDE. The two commercial Lisps (Allegro and LispWorks) both have IDEs complete with GUI builders. You should be able to get a month's trial of each.

I've only used Allegro, and in some senses, the Allegro IDE is a state-of-the-art GUI in a state-of-the-art Lisp. It's based on the same model that was used at Xerox PARC when designing the original graphical IDE (in particular, the entire IDE is just a bunch of classes and functions that are part of a saved runtime image --- I'd love to expand on that, because it's such a cool idea).

PS: Once I've got it installed, how would I go about creating pink fractal flowers?
(create-fractal 'pink 'flower)
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#83  Postby VazScep » Mar 13, 2012 9:55 pm

On religious flame wars, I thought I should mention Erik Naggum, a Lisp hacker and major usenet contributor whose polemical style rivals Christopher Hitchens's. I was reading Erik's posts back when I first got interested in Lisp. They were always a treat and generally contained awesome ideas phrased in his characteristically aggressive style. The online Lisp community felt quite depressing when Naggum stopped posting due to ill health. He died in 2009 at the age of 44.

Can Lisp do what Perl does easily?
On XML
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#84  Postby mizvekov » Mar 13, 2012 11:06 pm

VazScep wrote:
mizvekov wrote:I have one question though, is it of any utility here to do something like this:
Code: Select all
inst (\v -> if v == "p" then (theoremTerm (inst (\v -> Var v) axiom3)) else Var "b") axiom1

I.e. getting the term of a theorem (using theoremTerm) and then using that to instantiate another axiom?
Note that
Code: Select all
inst (\v -> Var v)
is just the identity.

Yeah I know, but that wasn't what I meant. I used it just so the code would be shorter and not distract from the point.
I am asking whether it is useful to use a Theorem as a Term to instantiate an axiom.
More specifically, the part in bold below:
Code: Select all
inst (\v -> if v == "p" then [b](theoremTerm (inst (\v -> Var v) axiom3))[/b] else Var "b") axiom1

And how it is used as a Term to instantiate axiom1.

VazScep wrote:
It wouldn't surprise me if "inst" turned out to be useful. In this style of theorem prover, there is a lot of bootstrapping. You build a load of machinery on top of the kernel and then completely forget about the basic inference rules. This doesn't mean that the inference rules are useless. It just means that I haven't played with them much!

I see :)
But I am trying to understand why it is the way it is. For instance, why is it not something like the below, which I think is a bit simpler:
Code: Select all
module Propositional (Theorem, Term(..), axiom1, axiom2, axiom3, mp) where

infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show,Eq,Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

axiom1 :: Term a -> Term a -> Theorem a
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term a -> Term a -> Term a -> Theorem a
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

axiom3 :: Term a -> Term a -> Theorem a
axiom3 p q = Theorem ((Not p :=>: Not q) :=>: q :=>: p)

mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']


And then your example from Mathematical Logic becomes:
Code: Select all
*Propositional> let p = Var "p"
*Propositional> :{
*Propositional| do
*Propositional|   step1 <- mp (axiom2 p (p:=>:p) p) (axiom1 p (p:=>:p))
*Propositional|   return $ mp step1 (axiom1 p p)
*Propositional| :}
[[Theorem (Var "p" :=>: Var "p")]]


VazScep wrote:I just never really bought that. For starters, no serious Lisp user would dream of Lisp hacking without a facility to highlight matching parentheses. (go to Options->Paren Match Highlighting in Emacs). And with decent syntax highlighting and appropriate indentation, I never found readability an issue in Lisp. That's not cheating: every language cares about appropriate indentation and syntax highlighting, and I always have paren-match highlighting turned on when hacking in ML or Haskell.

Yeah, I remember all those tricks, including using indentation and closing the bracket as soon as you open it.
But the problem is that in Lisp most nesting is done with parentheses alone, which is a major pitfall compared with most other languages, where what little nesting you have is usually a mix of different structures, many with distinct opening and closing brackets.
So, say, {([[({})]])} is a lot easier to make sense of than (((((()))))) 

VazScep wrote:
In fact, when I'm tallying my shopping lists or budgeting, I tend to use Lisp so that I can write in this style:

Code: Select all
(+ (* 2 37) ; beans
   400      ; detergent
   127      ; meat
   1000     ; extremely expensive cuisine that helps make this look less of a student's shopping list
)


and because of Lisp's syntax, I can put my cursor at the end of subexpressions such as (* 2 37) and evaluate just that part by hitting C-x C-e.

That is pretty cool, yes.

VazScep wrote:
This is probably not much of an argument. I am, after all, stupid enough to think that Emacs and xmonad make me substantially more productive than someone using the latest gimmicks from Apple.

:grin:
I was more into ratpoison/gnu screen/vim for a while.
Still use the latter two on the linux box, but sadly getting ratpoison to work with modern applications is an uphill battle.

VazScep wrote:
Well, that's the sort of argument I was up against. The problem is that, even as someone who has ditched Lisp and never looked back, this doesn't match my experience at all. I just never had the sort of problems people complain about, and so my suspicion is that the complaints can be explained in other ways (such as a simple case of not getting used to it). On the other hand, I don't see any empirical evidence (or any clue how you would reliably gather it) that Lisp really is difficult for humans to read.

Well, "getting used to it" can also explain why someone finds something very hard very easy.
But anyway, I also have no idea how to evaluate that.

VazScep wrote:
It is also worth mentioning that John McCarthy never intended Lisp programmers to write s-expressions directly. The s-expressions were supposed to just represent the parse trees, and an alternative m-expression syntax would be built over the top. In questioning Lisp's design choice here, it is always worth remembering that when given the choice, the Lisp users favoured the s-expressions over the m-expressions, and so the m-expression syntax was forever scrapped.

Now that is funny, almost ironic :)

VazScep wrote:So long as I'm sober, I can probably keep my emotions under control :drunk:

:thumbup:

VazScep wrote:On religious flame wars, I thought I should mention Erik Naggum, a Lisp hacker and major usenet contributor whose polemical style rivals Christopher Hitchens's. I was reading Erik's posts back when I first got interested in Lisp. They were always a treat and generally contained awesome ideas phrased in his characteristically aggressive style. The online Lisp community felt quite depressing when Naggum stopped posting due to ill health. He died in 2009 at the age of 44.

Can Lisp do what Perl does easily?
On XML

Yes, I remember reading some of his essays.
I liked his one on time a lot; it can be found here: http://naggum.no/lugm-time.html
Sadly, I only came to know about him when he died.
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)

Re: Programming

#85  Postby VazScep » Mar 14, 2012 12:35 am

mizvekov wrote:
I see :)
But I am trying to understand why it is the way it is. For instance, why is it not something like the below, which I think is a bit simpler:
Code: Select all
module Propositional (Theorem, Term(..), axiom1, axiom2, axiom3, mp) where

infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show,Eq,Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

axiom1 :: Term a -> Term a -> Theorem a
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term a -> Term a -> Term a -> Theorem a
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

axiom3 :: Term a -> Term a -> Theorem a
axiom3 p q = Theorem ((Not p :=>: Not q) :=>: q :=>: p)

mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
Yes, this is an excellent question. In fact, this is precisely the textbook definition of propositional logic, and was my first attempt at an implementation. It makes for a much simpler and elegant logical kernel. Specifically, you have made the axioms truly schematic. In my implementation, they are only "somewhat" schematic.

But consider proving a theorem such as, say, "P & Q -> Q & P" (see the Utils module to see how conjunction can be defined). Now, what would P and Q be? In most presentations of propositional logic, you start schematic and you stay schematic, so you'd have a function like:

Code: Select all
conjComm :: Term a -> Term a -> Theorem a


This function would take two terms p and q and prove the theorem p & q -> q & p. To implement it, you'll eventually end up using the functions axiom1 and axiom2 and axiom3 and passing in various terms derived from p and q.

Just as the function axiom1 is a schematic axiom, the function conjComm is a schematic proof. It produces a different proof for every combination of p and q. Now suppose the proof is thousands of lines long (in this case, it isn't, but for theorems in industrial strength provers, you're frequently running to tens of millions). In that case, "Term 0 & Term 1 -> Term 1 & Term 0" requires running a few thousand lines of proof, while "Term 1 & Term 0 -> Term 0 & Term 1" requires running another few thousand lines of proof. That's bad news in terms of efficiency.

So the alternative is to fake the schema with a simple instantiation function. Now you can prove "Term 0 & Term 1 -> Term 1 & Term 0" and then use "inst (Var . (1 -))" to switch between the two without having to rerun the whole proof.
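
For instance (a hedged sketch, assuming inst has the shape inst :: (a -> Term b) -> Theorem a -> Theorem b, which is what the identity example inst (\v -> Var v) above suggests):

Code: Select all
-- If swapThm proves "Term 0 & Term 1 -> Term 1 & Term 0" (swapThm is a
-- hypothetical name), its mirror image costs one instantiation rather
-- than a rerun of the whole proof: the map sends variable 0 to Var 1
-- and variable 1 to Var 0.
mirror :: Theorem Int -> Theorem Int
mirror = inst (Var . (1 -))

-- so the flipped theorem is just: mirror swapThm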

I regard this as one of the differences between templates and type polymorphism. C++ templates are schematic in the first sense. ML-style polymorphism is "somewhat" schematic in the second sense. Neither is more expressive than the other (unless you start talking about loading new stuff at runtime without access to the compiler --- see my earlier example with Ocaml). But which one you want depends on how you're using them. The talk of explosions during template expansion is analogous to the explosion of proofs you'd get with your simpler logical kernel.

It is worth mentioning, I think, that Church's original paper on the Simple Theory of Types treated type variables schematically. It was Hindley and Milner who introduced ML-style polymorphism, which you add to simple type theory by providing a type instantiation rule. It doesn't buy you anything in terms of the expressive power of the logic, but it prevents explosions, and it's the route I followed in propositional logic.

Both ideas are brought together if you look at the inference rules of the theorem prover I use. There are two instantiation rules: one for variables, and one for types.

mizvekov wrote:And then your example from Mathematical Logic becomes:
Code: Select all
*Propositional> let p = Var "p"
*Propositional> :{
*Propositional| do
*Propositional|   step1 <- mp (axiom2 p (p:=>:p) p) (axiom1 p (p:=>:p))
*Propositional|   return $ mp step1 (axiom1 p p)
*Propositional| :}
[[Theorem (Var "p" :=>: Var "p")]]
But yeah. This is the way you'd do it with the simpler kernel.

mizvekov wrote:Yeah, I remember all those tricks, including using indentation and closing the bracket as soon as you open it.
But the problem is that in Lisp most nesting is done with parentheses alone, which is a major pitfall compared with most other languages, where what little nesting you have is usually a mix of different structures, many with distinct opening and closing brackets.
So, say, {([[({})]])} is a lot easier to make sense of than (((((()))))) 
Yeah, I can see the basic logic; I just can't identify with it. Both look like line noise to me. :grin:

You are saying that the important thing isn't the number of delimiters, but the variety of them. But in Haskell and ML, round brackets are basically the only delimiter. I see ML and Haskell structure mostly in the indentation, syntax highlighting, and blocking of code. You get all of that in Lisp too. At its worst, Lisp just has too many parentheses, which could be removed by imposing precedence rules, but I think that would make it harder for me to write macros.

mizvekov wrote:
:grin:
I was more into ratpoison/gnu screen/vim for a while.
Still use the latter two on the linux box, but sadly getting ratpoison to work with modern applications is an uphill battle.
Haha! I think we have a lot in common here. I'd better look into this ratpoison.

mizvekov wrote:Yes, I remember reading some of his essays.
I liked his one on time a lot; it can be found here: http://naggum.no/lugm-time.html
Sadly, I only came to know about him when he died.
This one passed me by somehow. I'll check it out now. :cheers:
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#86  Postby mizvekov » Mar 14, 2012 2:57 am

VazScep wrote:Yes, this is an excellent question. In fact, this is precisely the textbook definition of propositional logic, and was my first attempt at an implementation. It makes for a much simpler and elegant logical kernel. Specifically, you have made the axioms truly schematic. In my implementation, they are only "somewhat" schematic.

But consider proving a theorem such as, say, "P & Q -> Q & P" (see the Utils module to see how conjunction can be defined). Now, what would P and Q be? In most presentations of propositional logic, you start schematic and you stay schematic, so you'd have a function like:

Code: Select all
conjComm :: Term a -> Term a -> Theorem a


This function would take two terms p and q and prove the theorem p & q -> q & p. To implement it, you'll eventually end up using the functions axiom1 and axiom2 and axiom3 and passing in various terms derived from p and q.

Just as the function axiom1 is a schematic axiom, the function conjComm is a schematic proof. It produces a different proof for every combination of p and q. Now suppose the proof is thousands of lines long (in this case, it isn't, but for theorems in industrial strength provers, you're frequently running to tens of millions). In that case, "Term 0 & Term 1 -> Term 1 & Term 0" requires running a few thousand lines of proof, while "Term 1 & Term 0 -> Term 0 & Term 1" requires running another few thousand lines of proof. That's bad news in terms of efficiency.

So the alternative is to fake the schema with a simple instantiation function. Now you can prove "Term 0 & Term 1 -> Term 1 & Term 0" and then use "inst (Var . (1 -))" to switch between the two without having to rerun the whole proof.

Yeah, I see it now. I had been assuming that the only purpose of inst and friends was to instantiate the axioms, but now I see that you can use it to reinstantiate whole proofs. In this sense, there are really no non-schematic proofs: any proof can be used as a schema to instantiate something else at least as complex.

VazScep wrote:
I regard this as one of the differences between templates and type polymorphism. C++ templates are schematic in the first sense. ML-style polymorphism is "somewhat" schematic in the second sense. Neither is more expressive than the other (unless you start talking about loading new stuff at runtime without access to the compiler --- see my earlier example with Ocaml). But which one you want depends on how you're using them. The talk of explosions during template expansion is analogous to the explosion of proofs you'd get with your simpler logical kernel.

Yeah I see that now :)

VazScep wrote:
It is worth mentioning, I think, that Church's original paper on the Simple Theory of Types treated type variables schematically. It was Hindley and Milner who introduced ML-style polymorphism, which you add to simple type theory by providing a type instantiation rule. It doesn't buy you anything in terms of the expressive power of the logic, but it prevents explosions, and it's the route I followed in propositional logic.

Both ideas are brought together if you look at the inference rules of the theorem prover I use. There are two instantiation rules: one for variables, and one for types.

I see, I will take a look at that.
Any more interesting tidbits of related Haskell code? :grin:

VazScep wrote:Yeah, I can see the basic logic; I just can't identify with it. Both look like line noise to me. :grin:

You are saying that the important thing isn't the number of delimiters, but the variety of them. But in Haskell and ML, round brackets are basically the only delimiter. I see ML and Haskell structure mostly in the indentation, syntax highlighting, and blocking of code. You get all of that in Lisp too. At its worst, Lisp just has too many parentheses, which could be removed by imposing precedence rules, but I think that would make it harder for me to write macros.

Well, I meant that the huge number of parentheses is only one facet of the problem.
If you say there is still essentially one delimiter in Haskell, I agree, but Haskell has a lot of other features that keep you from running into a gazillion of them: operator precedence like you said, the '$' and '.' operators go a long way here, and 'if then else', 'do' and others also help.
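For example (a made-up snippet, nothing from the thread):

Code: Select all
input :: String
input = "2\n400\n127\n1000"

-- nesting with parentheses alone:
total1 :: Int
total1 = sum (map read (lines input))

-- the same computation written with (.) and ($):
total2 :: Int
total2 = sum . map read . lines $ input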


VazScep wrote:Haha! I think we have a lot in common here. I'd better look into this ratpoison.

It is a lot simpler than any other window manager I ever saw. It basically is to X what gnu screen is to a terminal.
That it is simple goes a long way to explain why it is a pain in the ass to get it to work with newer applications: it implements almost no workarounds.
I also used wmii later, which works much better, and xmonad has always been in my 'to look at' list, but right now, with all the work and stuff, I need something which just works without much fiddling around, and that's why I have been using basically whatever the distro throws at me.

VazScep wrote:This one passed me by somehow. I'll check it out now. :cheers:

Sure, but this one is quite longish :grin:
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)

Re: Programming

#87  Postby VazScep » Mar 14, 2012 10:51 pm

mizvekov wrote:I see, I will take a look at that.
Any more interesting tidbits of related Haskell code? :grin:
Well, I've got quite a lot of code built on top of this. But I'll try to focus on just a couple of things.

Like I said a few posts back, the problem with the proof style here is that you can't make assumptions during a proof, and then later discharge them. To fix that, I wrote a tiny proof language:

Code: Select all
module Bootstrap where

import Propositional
import Utils

import Data.List

-- Proof terms with assumptions. In comments, we write Γ ⊢ P to denote a proof with
-- conclusion P and assumptions Γ. Proofs are not guaranteed to be valid, and are
-- pushed through the kernel via verify.
data Proof = Assume (Term String)
           | UseTheorem (Theorem String)
           | MP Proof Proof
            deriving Show


This is a common idea in Haskell and ML. You use a data-type to define the syntax of an embedded language. We now have two languages: the language of propositional terms, and a language of proofs. The equivalent idea in Java or C# (and C++?) would be to define an inheritance hierarchy rooted at a "Proof" interface, and with implementing classes for each data constructor. You'd then use the Visitor pattern to write things such as evaluation functionality. That's a lot of code. In ML and Haskell, things are much simpler. You use algebraic data-types and evaluators are just functions which pattern-match on that data-type. Incidentally, this was the raison d'etre for algebraic data-types: to define new languages. Indeed, ML is a language for defining languages: a MetaLanguage (no pun, that's what it stood for).

I use this pattern a lot because it's so lightweight.
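
To make that concrete, here is the kind of evaluator I mean (a throwaway sketch, not part of the development below): a truth-valuation of the propositional Term type, written as a single pattern-matching function.

Code: Select all
-- Evaluate a Term under a truth assignment for its variables.
eval :: (a -> Bool) -> Term a -> Bool
eval env (Var v)    = env v
eval env (p :=>: q) = not (eval env p) || eval env q
eval env (Not p)    = not (eval env p)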

Now we add some semantics. Here's some code to extract all the assumptions and the conclusion of a proof:

Code: Select all
sequent :: Proof -> ([Term String], Term String)
sequent (Assume a)   = ([a], a)
sequent (UseTheorem t) = ([], theoremTerm t)
sequent (MP pr pr')    = let (asms,c) = sequent pr' in
  case sequent pr of
    (asms', p :=>: q) | p == c    -> (nub $ sort $ asms ++ asms', q)
                      | otherwise -> error "MP"
    _ -> error "MP"

concl :: Proof -> Term String
concl = snd . sequent

assms :: Proof -> [Term String]
assms = fst . sequent


That's the easy bit. The hard step is to write a function which mirrors the Deduction Theorem. This function transforms a proof into a new proof in which you have made one less assumption. In other words, it is a function which discharges an assumption from a proof. I cribbed the implementation from the standard inductive proof of the theorem (the inductive base case becomes the recursive base case, while the inductive step becomes the recursive call):

Code: Select all
-- ⊢ P → P
truth :: Theorem String
truth = let step1 = instL [p, p :=>: p] axiom1
            step2 = instL [p, p :=>: p, p] axiom2
            step3 = head $ mp step2 step1
            step4 = instL [p, p] axiom1 in
        head $ mp step3 step4

discharge :: Term String -> Proof -> Proof
discharge asm pr = let inAsm = elem asm (assms pr) in d pr
    where d pr@(Assume t) | t == asm  = UseTheorem (instL [t] truth)
                          | otherwise =
                              MP (UseTheorem (instL [concl pr,asm] axiom1)) pr
          d pr@(UseTheorem t)
              = MP (UseTheorem (instL [concl pr,asm] axiom1)) pr
          d (MP imp p)
              = let p'           = concl p
                in case concl imp of
                  _ :=>: r' ->
                      MP (MP (UseTheorem (instL [asm,p',r'] axiom2)) (d imp)) (d p)
                  _         -> error "MP"


With this function, we can now write the most important function: one which takes an object of our proof language and, if the proof is valid, spits out the proven theorem.

Code: Select all
verify :: Proof -> Theorem String
verify proof =
  let v (UseTheorem t) = t
      v (MP pr pr')    = case mp (v pr) (v pr') of
        []     -> error "MP"
        (t':_) -> t'
  in case assms proof of
    []  -> v proof
    [a] -> v (discharge a proof)
    as  -> error errorMsg
      where errorMsg = "Undischarged assumptions:\n" ++
                       unlines [ "  " ++ printTerm a | a <- as ]


And now proofs are much easier to write. For example, the theorem "⊢ P → P", which took four lines when we used the kernel directly, can be produced simply with

Code: Select all
verify (Assume p)


Here's another simple example:

Code: Select all
foo = let step1 = Assume p
          step2 = Assume (p :=>: q)
          step3 = MP step2 step1
          step4 = discharge (p :=>: q) step3
      in verify step4


So we now have a new proof language which allows us to prove theorems in propositional logic which are still verified by our secure logical kernel. Our kernel was so simple as to be almost useless for proof, but we can build more convenient tools over the top without having to break the encapsulation of that secure kernel.

For a real challenge, you might try one of the exercises from Mendelson, where you have to prove "⊢ (¬P → Q) → (¬P → ¬Q) → P".

I should probably stop there, but I think I should say something about the next stage. One of the things I wanted to do was to implement a language for equational reasoning. That way, if you can prove that one proposition is logically equivalent to another (say that ¬¬P ↔ P), then you can substitute occurrences of the left hand side with the right hand side.

The language we implement for this is called a "conversions" language. It is not based on an algebraic data-type, but is instead based on combinators. Combinators are just higher-order functions composed in a "point-free" style, so that you don't see the underlying inputs and outputs. There are interesting theoretical distinctions between these two styles of language. One language is defined by a syntax and is then given an explicit semantics. The other is defined directly by its semantics.

Code: Select all
-- Basic conversions and conversionals

module Conversions where

import Bootstrap hiding (matchMP)
import Propositional
import Utils
import Control.Monad.Error

-- Conversions wrap functions which take terms and derive equations. Each
-- equation has an lhs which is the original term.
newtype Conv a = Conv (Term a -> [Theorem a])

applyC :: Conv a -> Term a -> [Theorem a]
applyC (Conv c) term = c term

-- Use a conversion to rewrite a theorem
convMP :: Conv String -> Theorem String -> [Theorem String]
convMP c thm = do eq  <- applyC c (theoremTerm thm)
                  imp <- matchMP eqMP eq
                  mp imp thm

-- Apply a conversion to the antecedent of a conditional
antC :: Conv String -> Conv String
antC (Conv c) = Conv f
  where f (p :=>: q) = c p >>= matchMP substLeft'
          where substLeft' = instM (Var . avoid (vars q)) [("p",q)] substLeft
        f _          = []
 
-- Apply a conversion to the consequent of a conditional       
conclC :: Conv String -> Conv String
conclC (Conv c) = Conv f
  where f (p :=>: q) = c q >>= matchMP substRight'
          where substRight' = instM (Var . avoid (vars p)) [("p",p)] substRight
        f _          = []

-- Apply a conversion to the body of a negation
notC :: Conv String -> Conv String
notC (Conv c) = Conv f
  where f (Not p) = c p >>= matchMP substNot
        f _       = []
       
-- Attempt the left conversion, and if it fails, attempt the right
orC :: Conv a -> Conv a -> Conv a
orC (Conv c) (Conv c') = Conv f
  where f t = c t `mplus` c' t
       
-- Try all conversions which succeed
tryC :: [Conv a] -> Conv a
tryC = foldr orC failC

-- Apply a conversion based on the term to be converted
ifC :: (Term a -> Bool) -> Conv a -> Conv a
ifC p c = Conv (\t -> if p t then applyC c t else [])

-- Apply one conversion after another
thenC :: Conv String -> Conv String -> Conv String
thenC c c' = Conv f
  where f t = do thm   <- applyC c t
                 r     <- rhs (theoremTerm thm)
                 thm'  <- applyC c' r
                 thm'' <- matchMP trans thm
                 matchMP thm'' thm'
        rhs (Not ((p :=>: q) :=>: Not (r :=>: s))) | p == s && q == r = return q
        rhs _  = []
       
-- The zero for orC and identity for thenC: always succeeds
allC :: Conv String
allC = Conv (\t -> return $ instL [t] reflEq)

-- The identity for orC and zero for thenC: always fails
failC :: Conv a
failC = Conv (const [])

-- Sequence conversions
everyC :: [Conv String] -> Conv String
everyC = foldl thenC allC
       
isImp :: Term a -> Bool
isImp (_ :=>: _) = True
isImp _          = False

isNeg :: Term a -> Bool
isNeg (Not _) = True
isNeg _       = False

-- Apply a conversion to the first applicable term on a depth first search
depthConv1 :: Conv String -> Conv String
depthConv1 c = tryC [ c
                    , ifC isNeg (notC c')
                    , ifC isImp (antC c' `orC` conclC c') ]
  where c' = depthConv1 c
This is already a pretty rich combinator language with some nice algebraic structure, but the goal here is to write lots of new, more sophisticated conversions by composing the primitive ones. It's kind of overkill for propositional logic, but when you're working in higher-order theories, this sort of language is how you build powerful rewrite-based automation, including domain-specific rewriting.
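
As a small example of such composition (a sketch built only from the primitives above): a conversion that tries c at the root of a term and then one level down, without the full recursion of depthConv1.

Code: Select all
-- Try a conversion at the root; failing that, on the immediate subterms.
shallowC :: Conv String -> Conv String
shallowC c = tryC [ c
                  , ifC isNeg (notC c)
                  , ifC isImp (antC c `orC` conclC c) ]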
Last edited by VazScep on Mar 14, 2012 11:10 pm, edited 1 time in total.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#88  Postby VazScep » Mar 14, 2012 11:06 pm

mizvekov wrote:Well, I meant that the huge number of parentheses is only one facet of the problem.
If you say there is still essentially one delimiter in Haskell, I agree, but Haskell has a lot of other features that keep you from running into a gazillion of them: operator precedence like you said, the '$' and '.' operators go a long way here, and 'if then else', 'do' and others also help.
Funnily enough, I have heard Lisp programmers say that Haskell looks like line-noise, and they've got a point inasmuch as Haskell programmers love things like "$", ".", ">>=", ">=>", ">>", "+++", "<++", and so on and so on. Personally, I disagree that this is line-noise. It's mathematics, and I think it's fucking beautiful.

I can't remember who it was, but I recall one Lisper saying that he doesn't see brackets. I think I was the same.

mizvekov wrote:It is a lot simpler than any other window manager I ever saw. It basically is to X what gnu screen is to a terminal.
That it is simple goes a long way to explain why it is a pain in the ass to get it to work with newer applications: it implements almost no workarounds.
I also used wmii later, which works much better, and xmonad has always been in my 'to look at' list, but right now, with all the work and stuff, I need something which just works without much fiddling around, and that's why I have been using basically whatever the distro throws at me.
Minimalism isn't especially my thing. It's just that ever since I learned how to touch-type, I learned to hate my mouse. I want to be able to do almost everything from the keyboard, and xmonad has that down.

I want an Emacs for the 21st century. I'm not holding my breath.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#89  Postby mizvekov » Mar 15, 2012 2:48 am

Okay, thanks. It's going to take a lot of time to go through all of the above, so please be patient :grin:

VazScep wrote:Funnily enough, I have heard Lisp programmers say that Haskell looks like line-noise, and they've got a point inasmuch as Haskell programmers love things like "$", ".", ">>=", ">=>", ">>", "+++", "<++", and so on and so on. Personally, I disagree that this is line-noise. It's mathematics, and I think it's fucking beautiful.

Agreed :)

VazScep wrote:Minimalism isn't especially my thing. It's just that ever since I learned how to touch-type, I learned to hate my mouse. I want to be able to do almost everything from the keyboard, and xmonad has that down.

Well, the biggest advantage was that I could turn a very underpowered laptop into a pretty snappy one, which was very usable for the kind of stuff I was doing at the time, but not anymore.
wmii is still very simple compared to the major desktops, but it seems to be on par feature-wise with xmonad, at least from a quick glance on YouTube.

VazScep wrote:I want an Emacs for the 21st century. I'm not holding my breath.

Wat? Emacs is not already developing itself into a new paradigm for the 21st century? :grin:
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)

Re: Programming

#90  Postby VazScep » Mar 15, 2012 4:58 pm

mizvekov wrote:Wat? Emacs is not already developing itself into a new paradigm for the 21st century? :grin:
On reflection, my comment was pretty stupid. If anything, I want an Emacs for the 1980s. Here's a famous rant about how Emacs compares to similar tech on Lisp machines:

"It's kind of hard to appreciate the differences from reading a description. It's even hard to appreciate it from using Zmacs. Where the light dawns is when you've been using Zmacs for a while and go back to using plain old Emacs.

What, you mean there's no keystroke to bring up a list of every change I've made in every file on the box? What, you mean there's code on the box whose source I can't pop up with a keystroke? What, you mean I have to run some sort of tags program on source files before I can find definitions? What, you mean there's code on the box that isn't cross-referenced? What, you mean there's running code on the box whose source I can't step into? What, you mean I can't insert references to objects on the screen into my code just by clicking the screen objects?

Zmacs is tightly integrated with Genera, and it's Lisp all the way down to the microcode. Emacs is great, don't get me wrong, but it's at a different remove from the system."


This is the sort of thing that makes me sad. It's not just that things used to be so much better. It's that we've collectively forgotten our history, which contains technology that is several grades more advanced than anything developed in the last two decades. Instead, we find ourselves getting excited about the latest technology from Microsoft, even though it doesn't compare in functionality to stuff three decades old. Joel Spolsky illustrated the point when he talked about how Microsoft were revolutionising their filesystem search facility so that you'd be able to access files in just a few keystrokes (something we've had in UNIX since the early 80s). And there was a particularly poignant post on usenet some time ago, I think by Kent Pitman, worrying that we'd lost something important with the simplified filesystem functionality brought on by the dominance of UNIX and Windows.

Technology is great, as any iPhone user will gush. But as a programmer, I feel as if most things have been going backwards for the last few decades.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#91  Postby stalidon » Mar 15, 2012 5:05 pm

VazScep wrote:Technology is great, as any iPhone user will gush. But as a programmer, I feel as if most things have been going backwards for the last few decades.


You've been reading Fabian Pascal diatribes, haven't you? I told you!
-
stalidon
 
Posts: 145
Age: 48
Male

Country: Argentina

Re: Programming

#92  Postby VazScep » Mar 15, 2012 5:06 pm

stalidon wrote:
VazScep wrote:Technology is great, as any iPhone user will gush. But as a programmer, I feel as if most things have been going backwards for the last few decades.


You've been reading Fabian Pascal diatribes, haven't you? I told you!
I haven't no. But I'm gonna start!

I should temper my pessimism. There are plenty of advances. Haskell is cutting-edge as a language, and way more advanced than Common Lisp, but its implementations do extremely poorly in terms of developer tools.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#93  Postby mizvekov » Mar 15, 2012 5:41 pm

VazScep wrote:This is the sort of thing that makes me sad. It's not just that things used to be so much better. It's that we've collectively forgotten our history, which contains technology that is several grades more advanced than anything developed in the last two decades. Instead, we find ourselves getting excited about the latest technology from Microsoft, even though it doesn't compare in functionality to stuff three decades old. Joel Spolsky lamented the point when he talked about how Microsoft were revolutionising their filesystem search facility, so that you'd be able to access files in just a few keystrokes (something we've had in UNIX since the early 80s). And there was a particularly poignant post on usenet some time ago, I think by Kent Pitman, worrying that we'd lost something important by the simplicity of filesystem functionality brought on by the dominance of UNIX and Windows.

Technology is great, as any iphone user will gush. But as a programmer, I feel as if most things have been going backwards for the last few decades.

But then, setting aside the failure of LISP machines to dominate the market, why do you think the LISP programming language has been slipping in popularity? Do you think it was mostly chance, or did it have some fatal flaw that was never fixed?

VazScep wrote:I should temper my pessimism. There are plenty of advances. Haskell is cutting-edge as a language, and way more advanced than Common Lisp, but its implementations do extremely poorly in terms of developer tools.

What do you miss the most in terms of developer tools for Haskell?
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)

Re: Programming

#94  Postby VazScep » Mar 15, 2012 6:17 pm

mizvekov wrote:But then, setting aside the failure of LISP machines to dominate the market, why do you think the LISP programming language has been slipping in popularity?
Well, that's a huge can of worms right there. I've heard that the popular theory for why it almost died out completely in the early 90s is that it was so heavily tied to AI and the Lisp machine, both of which went under because of the "AI winter". I think the more interesting question is why Lisp came back to life in the 2000s.

mizvekov wrote:Do you think it was mostly chance, or did it have some fatal flaw that was never fixed?
I'm not sure what it would mean to have a "fatal flaw" other than a flaw which leads it to die out. And I'm not sure that's the right way to look at things. I mean, I think Java and Python are full of flaws, but it doesn't seem to affect their popularity.

mizvekov wrote:What do you miss the most in terms of developer tools for Haskell?
1) Refactoring tools. I want to be able to intelligently replace identifiers, add arguments to functions, fix name-clashes, import new functions, automatically add deriving clauses and so on and so on, with a few keystrokes or mouse presses.
2) The ability to develop entire systems at the toplevel and save a runtime image (as it is, even my interpreter bindings are erased when I reload a file).
3) A rich reflection API, with all types, kinds, modules, declarations and docstrings available as runtime objects, including cross-references to source files. This is the real enabler. It makes things such as Common Lisp's/Clojure's SLIME mode possible, it allows for intelligent autocompletion without having to continually parse code and generate TAGS files. It also means you can autocomplete definitions entered at the toplevel, as with SLIME. And it means that programmers can easily write their own developer tools.
4) The ability to query the type inferencer to ask what the type of an expression is in a partially written function definition. Visual Studio does this brilliantly for single identifiers with F# and it's a massive help to me. Poly/ML in JEdit does it for arbitrary expressions.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#95  Postby mizvekov » Mar 15, 2012 7:54 pm

VazScep wrote:Well, that's a huge can of worms right there. I've heard that the popular theory for why it almost died out completely in the early 90s is that it was so heavily tied to AI and the Lisp machine, both of which went under because of the "AI winter". I think the more interesting question is why Lisp came back to life in the 2000s.

Right.

VazScep wrote:I'm not sure what it would mean to have a "fatal flaw" other than a flaw which leads it to die out. And I'm not sure that's the right way to look at things. I mean, I think Java and Python are full of flaws, but it doesn't seem to affect their popularity.

Yeah, that came out badly. What I meant was, in broader terms, what kind of mistake was made that caused it to start losing popularity. My own experience in the matter seems to be that they are not making it look interesting to CS students, as was the case with me, but that could be very biased. But the fact is, none of my peers became even slightly interested in LISP.
But why is that? Bad PR? Is LISP especially hard to teach to the inexperienced?

VazScep wrote:1) Refactoring tools. I want to be able to intelligently replace identifiers, add arguments to functions, fix name-clashes, import new functions, automatically add deriving clauses and so on and so on, with a few keystrokes or mouse presses.
2) The ability to develop entire systems at the toplevel and save a runtime image (as it is, even my interpreter bindings are erased when I reload a file).
3) A rich reflection API, with all types, kinds, modules, declarations and docstrings available as runtime objects, including cross-references to source files. This is the real enabler. It makes things such as Common Lisp's/Clojure's SLIME mode possible, it allows for intelligent autocompletion without having to continually parse code and generate TAGS files. It also means you can autocomplete definitions entered at the toplevel, as with SLIME. And it means that programmers can easily write their own developer tools.
4) The ability to query the type inferencer to ask what the type of an expression is in a partially written function definition. Visual Studio does this brilliantly for single identifiers with F# and it's a massive help to me. Poly/ML in JEdit does it for arbitrary expressions.

Yeah, those would be nice, but how would, for example, 3) fit with compiler implementations of Haskell? Besides there being no standardization for many of the needed things, it seems it would tie the hands of the optimizer even more.
I gather that all of these except number 3 could be done without a new Haskell standard.
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)

Re: Programming

#96  Postby VazScep » Mar 15, 2012 8:46 pm

mizvekov wrote:Yeah, that came out badly. What I meant was, in broader terms, what kind of mistake was made that caused it to start losing popularity. My own experience in the matter seems to be that they are not making it look interesting to CS students, as was the case with me, but that could be very biased. But the fact is, none of my peers became even slightly interested in LISP.
But why is that? Bad PR? Is LISP especially hard to teach to the inexperienced?
I have no idea. There are many reasons that I could see factoring into the rise and fall of programming languages, but it all looks like a chaotic system to me, a bit like trying to explain the rise and fall of viral videos. I've got a couple of anecdotes of my own though.

I was completely inspired by my university's brilliant presentation of the even more obscure language Smalltalk, which used the most impressive learning environment I've seen yet, called LearningWorks. Since I graduated, they ditched Smalltalk and migrated the course over to Java, thereby killing development of the LearningWorks environment. They cited industry pressure as the reason for the migration. Employers want Java developers. Students want to be employed. People want to learn the language everyone else is using. Popularity breeds popularity.

My brother, as head of IT at his school, is in charge of preparing the A-level (for ages 16-18) in computer science. He's not a programmer, so he asked me which language he should learn and teach. I told him to use Python. Now I think that Python is a shit programming language, but I also know that Google are heavily invested in it, that universities are becoming more invested in it, and that there is a huge wealth of educational resources and libraries available for it. I'm making an active contribution to the popularity of a language that I myself despise.

But generally, I think I'd feel more confident explaining the subprime mortgage crisis than I would explaining the popularity or lack of popularity of a given programming language. All I will suggest is that if I did try to explain this stuff, I'd hardly ever talk about merit.

mizvekov wrote:Yeah, those would be nice, but how would, for example, 3) fit with compiler implementations of Haskell? Besides there being no standardization for many of the needed things, it seems it would tie the hands of the optimizer even more.
I gather that all of these except number 3 could be done without a new Haskell standard.
3 wouldn't need to be standardised. It would just be about providing a reflection API. Ultimately, I'm just talking about having a lot more of this. I'd imagine having a library where all loaded modules come with metadata that I can query, so I can query what the module exports, what the types of its toplevel values are, which file it was compiled from, line numbers, and metadata about modules it references.

I'm not sure how this plays with optimisation, but Java and the .NET languages have a lot of this stuff, as does Poly/ML. It might be that GHC is much more aggressive and there are subtle issues here, but I can't see them. The metadata should be separable from the data itself.
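
For contrast, here is the little reflection stock Haskell already gives you (a minimal sketch using Data.Typeable; the per-module metadata I'm describing goes well beyond this):

Code: Select all
import Data.Typeable (typeOf)

-- About the only runtime type metadata you get for free: a TypeRep.
-- Nothing here about exports, docstrings or source locations.
main :: IO ()
main = print (typeOf (reverse :: String -> String))
-- prints: [Char] -> [Char]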
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#97  Postby mizvekov » Mar 16, 2012 2:55 am

VazScep wrote:I have no idea. There are many reasons that I could see factoring into the rise and fall of programming languages, but it all looks like a chaotic system to me, a bit like trying to explain the rise and fall of viral videos. I've got a couple of anecdotes of my own though.

Well, I didn't mean for you to go that far and try to have a complete understanding of all that happened. I was simply asking what kinds of problems you experienced with LISP that would have a negative impact on its adoption.

VazScep wrote:I was completely inspired by my university's brilliant presentation of the even more obscure language Smalltalk, which used the most impressive learning environment I've seen yet, called LearningWorks. Since I graduated, they ditched Smalltalk and migrated the course over to Java, thereby killing development of the LearningWorks environment. They cited industry pressure as the reason for the migration. Employers want Java developers. Students want to be employed. People want to learn the language everyone else is using. Popularity breeds popularity.

Yeah, that's the kind of answer I was looking for, but still, I don't think that's the complete picture. Smalltalk got the axe here probably also because your university was not using it academically, I mean beyond as a teaching tool for undergraduates. The guys doing their masters and PhDs were probably not finding it as interesting anymore.
Do you think that was also because of industry pressure?

VazScep wrote:My brother, as head of IT at his school, is in charge of preparing the A-level (for ages 16-18) in computer science. He's not a programmer, so he asked me which language he should learn and teach. I told him to use Python. Now I think that Python is a shit programming language, but I also know that Google are heavily invested in it, that universities are becoming more invested in it, and that there is a huge wealth of educational resources and libraries available for it. I'm making an active contribution to the popularity of a language that I myself despise.

True enough, also guilty of that charge with regards to php.

VazScep wrote:
But generally, I think I'd feel more confident explaining the subprime mortgage crisis than I would explaining the popularity or lack of popularity of a given programming language. All I will suggest is that if I did try to explain this stuff, I'd hardly ever talk about merit.

I am not such a pessimist here: I think academia has some influence on the future of programming languages, and it seems, at least to me, that LISP simply started losing its appeal there.

VazScep wrote:3 wouldn't need to be standardised. It would just be about providing a reflection API. Ultimately, I'm just talking about having a lot more of this. I'd imagine having a library where all loaded modules come with metadata that I can query, so I can query what the module exports, what the types of its toplevel values are, which file it was compiled from, line numbers, and metadata about modules it references.

I thought you were talking about runtime introspection.

VazScep wrote:
I'm not sure how this plays with optimisation, but Java and the .NET languages have a lot of this stuff, as does Poly/ML. It might be that GHC is much more aggressive and there are subtle issues here, but I can't see them. The metadata should be separable from the data itself.

Well, if you are just interested in the metadata with regard to external interfaces, then I don't see any issues either.
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)

Re: Programming

#98  Postby VazScep » Mar 16, 2012 10:23 am

mizvekov wrote:Well, I didn't mean for you to go that far and try to have a complete understanding of all that happened. I was simply asking what kinds of problems you experienced with LISP that would have a negative impact on its adoption.
I don't have any such experience. My experience over the last 10 years is that Lisp has gone from being a dead language to a fully rejuvenated one, albeit in the form of the excellent Lisp dialect, Clojure. Despite the language being only a few years old, I see 3 Clojure books on Amazon, with another soon to be published by O'Reilly (despite O'Reilly having an official policy of not publishing Lisp books). The newsgroup comp.lang.lisp, formerly a Common Lisp only group (Schemers were redirected to comp.lang.scheme), now appears to be a Clojure group.

The more appropriate question to ask, I think, is what things have suddenly got people interested in Lisp again.

mizvekov wrote:Yeah, that's the kind of answer I was looking for, but still, I don't think that's the complete picture.
I didn't expect it to be. I have no idea what the complete picture is or how to go about finding it. I was just relating my own university experience.

mizvekov wrote:Smalltalk got the axe here probably also because your university was not using it academically, I mean beyond as a teaching tool for undergraduates. The guys doing their masters and PhDs were probably not finding it as interesting anymore.
Do you think that was also because of industry pressure?
I doubt it. I did my degree with the Open University. They're much more of a vocational teaching institution than a research one.

There was a long discussion at my current university that I was invited to participate in, discussing a replacement for the first-year Java course. No-one mentioned any research constraints. All the issues were teaching related. The main constraint was the software engineering course, a mandatory taught course in the fourth year, which is really only suitable for a Java-like language: it is based heavily around things such as UML, code-refactoring tools and GUI development. Moreover, the project involves the students choosing and contributing to a large open source project, which means that there has to be a lot of big projects out there (>100K lines) to choose from. Python, the main contender as a replacement, couldn't meet those requirements. Besides, it wasn't considered "mainstream" enough to teach.

All of these issues seem to me to be entirely circular.

mizvekov wrote:I am not such a pessimist here: I think academia has some influence on the future of programming languages, and it seems, at least to me, that LISP simply started losing its appeal there.
I couldn't say. I think it's too university dependent. Edinburgh is obsessed with type-theory, but I'm not sure how many other universities are. A friend of mine is doing his PhD at Imperial College in London, and says that everyone's into Prolog down there. Now I thought Prolog was completely dead.

But even in interactive theorem proving, Common Lisp is still well-used in the form of ACL2. Several papers were presented on it at ITP last year, and several of the researchers here use it. Since it's developed at Austin, Texas, my guess is that their "Lisp and Symbolic Computation" course is likely to be pretty good.

mizvekov wrote:I thought you were talking about runtime introspection.
I am, I think, but only in interpreted code (you already have full RTTI at the interpreter, but you can't get those types as Haskell values without using Typeable).

I don't like the idea of having full RTTI in a final build, and not just because of performance issues. It makes perfect sense in a language like Common Lisp or Smalltalk, which are highly dynamic, and therefore at a high-risk of being buggy, but whose compiled runtime images are expected to have extremely long lifetimes. But I don't think it makes much sense in compiled Haskell or Ocaml. We can write much more secure software, so we don't need the reflection so much, and I think having it available just opens the door to seriously misguided hackery.

But at the interpreter, I want as much reflection as I can get. In interactive theorem-proving in HOL Light and Isabelle, we work entirely in an interpreted environment. The interface to the system in HOL Light (and optionally in Isabelle) is the interpreter. This is very much in the Common Lisp/Smalltalk philosophy (Isabelle runs on Poly/ML, which was written in Common Lisp before it became self-bootstrapping). Interpreter sessions in these languages last a long time (days, possibly weeks). Thus, in Poly/ML, you need a facility to save the runtime image and reload it later. In Ocaml, we are much more impoverished, and have to resort to dmtcp.

I want reflection so that I can robustly improve my interface by adding in new functionality to my interpreter, functionality that can query any aspect of the running system. Once I can do this, I don't see any limit to how much I can make my interpreter a more sophisticated theorem proving environment. But I don't think this would just be a benefit to theorem proving. If I can make my interpreter into a better theorem prover, I can make it into a better tool for general software development.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#99  Postby mizvekov » Mar 18, 2012 12:57 am

VazScep wrote:This is a common idea in Haskell and ML. You use a data-type to define the syntax of an embedded language. We now have two languages: the language of propositional terms, and a language of proofs. The equivalent idea in Java or C# (and C++?) would be to define an inheritance hierarchy rooted at a "Proof" interface, and with implementing classes for each data constructor. You'd then use the Visitor pattern to write things such as evaluation functionality. That's a lot of code. In ML and Haskell, things are much simpler. You use algebraic data-types and evaluators are just functions which pattern-match on that data-type. Incidentally, this was the raison d'etre for algebraic data-types: to define new languages. Indeed, ML is a language for defining languages: a MetaLanguage (no pun, that's what it stood for).

Yes, I understand that, it's a cool idea.

VazScep wrote:For a real challenge, you might try one of the exercises from Mendelson, where you have to prove "⊢ (¬P → Q) → (¬P → ¬Q) → P".

Okay, I am a bit stumped on this one. I think that I need axiom3 to prove it, and this axiom cannot be used implicitly from this proof language, so I need to instantiate it manually and then bring it into the proof by using the UseTheorem constructor.
The reason I need it is because I need the following auxiliary theorems:
1) |- (~p -> ~q) -> (q -> p)
2) |- (~p -> q) -> (~q -> p)

Number 1 is easy to do with "instL [p, q] axiom3".
Now number 2 is hard. Maybe my mind is just burned out from making sense of all that code and then trying to solve this, but anyway: if I do "instL [p, Not q] axiom3" then I get "|- (~p ==> ~~q) ==> ~q ==> p", so I still have that double negation to get rid of. I would need a theorem like "|- (p ==> q) ==> (p ==> ~~q)", and that last one I am not seeing how to prove...
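
In code, the two attempts look like this (assuming the p, q, instL and axiom3 definitions from earlier in the thread; aux1/aux2 are just my own names):
Code: Select all
-- Attempt 1: a direct instantiation of axiom3 gives theorem (1):
aux1 = instL [p, q] axiom3      -- |- (~p ==> ~q) ==> q ==> p

-- Attempt 2: substituting ~q for q leaves a double negation behind:
aux2 = instL [p, Not q] axiom3  -- |- (~p ==> ~~q) ==> ~q ==> p

-- The missing piece would be:   |- (p ==> q) ==> (p ==> ~~q)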
Will give it another go tomorrow.

VazScep wrote:
I should probably stop there, but I think I should say something about the next stage. One of the things I wanted to do was to implement a language for equational reasoning. That way, if you can prove that one proposition is logically equivalent to another (say that ¬¬P ↔ P), then you can substitute occurrences of the left hand side with the right hand side.

Okay, but still I need to be able to prove that one with just these 3 axioms!

VazScep wrote:The language we implement for this is called a "conversions" language. It is not based on an algebraic data-type, but is instead based on combinators. Combinators are just higher-order functions composed in a "point-free" style, so that you don't see the underlying inputs and outputs. There are interesting theoretical distinctions between these two styles of language. One language is defined by a syntax and is then given an explicit semantics. The other is defined directly by its semantics.

Yes, and I also remember that monads are another example of a combinator language, so I know I have used them, but I have never tried implementing a new one.
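
Just to check my understanding of the shape of such a language, a toy sketch (my own guess at the style, not the actual conversions API):
Code: Select all
-- Same toy Term type as in my sketch above.
data Term = Var String | Not Term | Imp Term Term

-- A conversion maps a term to an equivalent term, or fails; real
-- conversions languages return theorems rather than bare terms.
type Conv = Term -> Maybe Term

-- Sequence two conversions: apply c1, then c2 to its result.
thenC :: Conv -> Conv -> Conv
thenC c1 c2 t = c1 t >>= c2

-- Try c1; if it fails, fall back to c2.
orElseC :: Conv -> Conv -> Conv
orElseC c1 c2 t = maybe (c2 t) Just (c1 t)

-- A primitive conversion: rewrite ~~p to p.
doubleNeg :: Conv
doubleNeg (Not (Not p)) = Just p
doubleNeg _             = Nothing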

VazScep wrote:
This is a pretty rich combinator language already, with some nice algebraic structure, but the goal here is to be able to write lots of new, more sophisticated conversions by composing the primitive conversions. It's kind of overkill for propositional logic, but when you're working in higher-order theories, you want this sort of language to build powerful rewrite-based automation and domain-specific rewriting automation.

Okay, I will give that one a go as soon as I nail down this Bootstrap module.

VazScep wrote:I don't have any such experience. My experience over the last 10 years is that Lisp has gone from being a dead language to a fully rejuvenated one, albeit taking the form of the excellent Lisp dialect, Clojure. Despite Clojure being only a few years old, I see 3 Clojure books on Amazon, with another soon to be published by O'Reilly (despite O'Reilly having an official policy of avoiding Lisp books). The newsgroup comp.lang.lisp, formerly a Common Lisp-only group (Schemers were redirected to comp.lang.scheme), now appears to be a Clojure group.

The more appropriate question to ask, I think, is what things have suddenly got people interested in Lisp again.


Ah, I see. I had been assuming it was mostly dead, which might be an overstatement.
I knew about Clojure, but I remember that it is not an offspring of academia, and I wasn't sure how popular it is.

VazScep wrote:I doubt it. I did my degree with the Open University. They're much more of a vocational teaching institution than a research one.

Ah ok, my mistake again :)

VazScep wrote:There was a long discussion at my current university that I was invited to participate in, discussing a replacement for the first-year Java course. No-one mentioned any research constraints. All the issues were teaching related. The main constraint was the software engineering course, a mandatory taught course in the fourth year, which is really only suitable for a Java-like language: it is based heavily around things such as UML, code-refactoring tools and GUI development. Moreover, the project involves the students choosing and contributing to a large open source project, which means that there have to be a lot of big projects out there (>100K lines) to choose from. Python, the main contender as a replacement, couldn't meet those requirements. Besides, it wasn't considered "mainstream" enough to teach.

That irks me a bit. Perhaps I am being too naive here, but I think the university's place is to teach students how to think and learn for themselves, not to focus so much on directly teaching trade skills.

VazScep wrote:I am, I think, but only in interpreted code (you already have full RTTI at the interpreter, but you can't get those types as Haskell values without using Typeable).

I thought you meant RTTI in compiled code, which is why I was concerned about the optimizer.

VazScep wrote:I don't like the idea of having full RTTI in a final build, and not just because of performance issues. It makes perfect sense in a language like Common Lisp or Smalltalk, which are highly dynamic, and therefore at high risk of being buggy, but whose compiled runtime images are expected to have extremely long lifetimes. But I don't think it makes much sense in compiled Haskell or Ocaml. We can write much more secure software, so we don't need the reflection so much, and I think having it available just opens the door to seriously misguided hackery.

I am not so sure here. I understand that Haskell makes it a lot easier for someone who is trying to write a program and prove it bug-free, but I am not so sure it would significantly reduce the number of bugs in the hands of the average industry programmer. There are just a lot of bugs that the type system will not catch.
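
A trivial example of what I mean (my own illustration): this type-checks, but the logic is still wrong:
Code: Select all
-- Well-typed and accepted by the compiler, but off by one: it sums
-- 1..n-1 instead of 1..n. No type system will flag this.
sumTo :: Int -> Int
sumTo n = sum [1 .. n - 1]   -- should be: sum [1 .. n]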

VazScep wrote:But at the interpreter, I want as much reflection as I can get. In interactive theorem proving in HOL Light and Isabelle, we work entirely in an interpreted environment. The interface to the system in HOL Light (and optionally in Isabelle) is the interpreter. This is very much in the Common Lisp/Smalltalk philosophy (Isabelle runs on Poly/ML, which was written in Common Lisp before it became self-bootstrapping). Interpreter sessions in these languages last a long time (days, possibly weeks), so Poly/ML provides a facility to save the runtime image and reload it later. In Ocaml, we are much more impoverished, and have to resort to DMTCP.

I see. But then are these interpreter sessions secure? I mean, I get the impression that whoever is using one could do something similar to what I did and violate the encapsulation of the kernel.
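
To be concrete about the encapsulation I have in mind (a minimal sketch of the convention, not the actual Bootstrap code): the kernel's safety hangs entirely on its module export list.
Code: Select all
-- Theorem's constructor is *not* exported, so the only way to obtain
-- a Theorem from outside this module is via the axioms and rules:
module Kernel (Term (..), Theorem, concl, axiom1, mp) where

infixr 1 :==>:
data Term = Var String | Term :==>: Term deriving (Eq, Show)

newtype Theorem = Theorem Term

concl :: Theorem -> Term
concl (Theorem t) = t

-- |- p ==> q ==> p
axiom1 :: Term -> Term -> Theorem
axiom1 p q = Theorem (p :==>: (q :==>: p))

-- Modus ponens: from |- p ==> q and |- p, conclude |- q.
mp :: Theorem -> Theorem -> Maybe Theorem
mp (Theorem (p :==>: q)) (Theorem p') | p == p' = Just (Theorem q)
mp _ _ = Nothing

And, if I understand GHCi right, loading that module interpreted (:module *Kernel) would expose exactly those internals, which is the worry.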
mizvekov
 
Posts: 314
Age: 39
Male

Brazil (br)
Print view this post

Re: Programming

#100  Postby epepke » Mar 18, 2012 2:02 am

mizvekov wrote:Yeah, that came out badly. What I meant was, in broader terms: what kind of mistake was made that LISP would start losing popularity? My own experience in the matter is that they are not making it look interesting to CS students, which is what happened to me, but that could be very biased. But the fact is, none of my peers became even slightly interested in LISP.
But why is that? Bad PR? Is LISP especially hard to teach to the inexperienced?


Jumping in here.

I learned LISP in my first semester at MIT. LISP was the Big Deal there at the time (late 1970s), and that's what they taught the frosh (now I hear they use Python, which is very LISP-like but with syntax). I had already learned BASIC (with line numbers), COBOL, Tutor (later renamed PAL), and Z-80 assembly.

LISP was very badly taught at MIT. All the examples had unnecessary PROGs all over the place. The first example of recursion they gave was the Fibonacci sequence, which is bad, because the iterative solution is so obvious and obviously better. (A much better first recursion is the Towers of Hanoi, which is truly magical.) It took me a few years to get into LISP, but when I did, I really liked it. One of my main mental health exercises when I was stressed was to write a small LISP implementation on a new architecture.
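
To show what I mean about Hanoi (a sketch in Haskell, since that's the thread's working language; at MIT it would of course have been LISP):
Code: Select all
-- Towers of Hanoi: the recursion *is* the whole algorithm. Move n
-- discs from a source peg to a target peg via a spare peg:
hanoi :: Int -> String -> String -> String -> [(String, String)]
hanoi 0 _ _ _ = []
hanoi n src spare tgt =
  hanoi (n - 1) src tgt spare       -- park n-1 discs on the spare peg
    ++ [(src, tgt)]                 -- move the biggest disc
    ++ hanoi (n - 1) spare src tgt  -- bring the n-1 discs back on top

-- e.g. hanoi 2 "a" "b" "c"  ==>  [("a","b"),("a","c"),("b","c")]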

LISP is extremely beautiful. Common LISP, however, has so much stuff in it that it strikes me as extremely unwieldy. I cannot remember who, but somebody said that any sufficiently complicated program contains an ad hoc, bug-ridden implementation of about half of Common LISP. I think this is true. The internal ideas of LISP and the way it handles memory are quite good, and even in programs in C and C++, it makes a lot of sense to write something like LISP internally.

But that beauty also leads to what Richard Feynman called "the computer disease." That is, computers are so wonderful that you play with them. Then you get into this exaggerated purity thing, which can stop a large project as fast as the sudden death of all people involved.

People who get really into LISP seem to become hostile to everyone else. So LISP didn't really stay with the times. It sort of lives in its own special world. It doesn't have to be this way at all; this is just what happened. I saw this in one of the links posted earlier (by VazScep, IIRC), arguing that LISP wouldn't help you with Perl, but that you shouldn't be doing anything in Perl anyway. This, I think, represents a misunderstanding of what Perl is for, and a failure to understand that writing beautiful programs isn't always the right thing to do.

During the 1980s, some things were happening in CS. Computers, particularly microprocessors, were moving toward stack-based architectures. O-O was becoming more popular. So was C, after a brief flirtation with Pascal (another popular first language of the time). Lots and lots of people were adapting LISP, which was part of the problem: there was little standardization. Then, when the dialects were standardized (as in Common LISP), the standards were suboptimal.

The early work in LISP, such as frames, lends itself neatly to O-O. What got put into Common LISP was a model far too much like that of Ada. Very powerful and workable, it still doesn't lend itself well to the object magic where you actually get the improvements in programming speed and maintainability.

I really like Scheme (the "uncommon LISP", though to many LISP aficionados "not LISP"), but it has the 95% problem. It's very clean and wonderful, up to a point. With a little work it could be perfect, but then what happens is that people keep trying to implement it. One of the interesting things is that it seems great for a stack-based architecture, until you get to call-with-current-continuation, and then you realize you should have adopted a different approach.

Then what you get is a bunch of implementations, none standardized. The Schemes I've seen on .NET are particularly botched. Everyone has their favorite, and it becomes unavailable in a couple of years.
User avatar
epepke
 
Posts: 4080

Country: US
United States (us)
Print view this post
