Posted: Mar 18, 2012 12:57 am
VazScep wrote:This is a common idea in Haskell and ML. You use a data-type to define the syntax of an embedded language. We now have two languages: the language of propositional terms, and a language of proofs. The equivalent idea in Java or C# (and C++?) would be to define an inheritance hierarchy rooted at a "Proof" interface, and with implementing classes for each data constructor. You'd then use the Visitor pattern to write things such as evaluation functionality. That's a lot of code. In ML and Haskell, things are much simpler. You use algebraic data-types and evaluators are just functions which pattern-match on that data-type. Incidentally, this was the raison d'etre for algebraic data-types: to define new languages. Indeed, ML is a language for defining languages: a MetaLanguage (no pun, that's what it stood for).

Yes, I understand that, it's a cool idea.
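Just to check my own understanding, I'd sketch the idea roughly like this (all the names here are mine, not from your code): the data type is the syntax of the embedded language, and an evaluator is just a pattern-matching function, with no Visitor boilerplate.

```haskell
-- Hypothetical sketch: propositional syntax as an algebraic data type
data Prop = Atom String
          | Neg Prop
          | Imp Prop Prop
          deriving (Eq, Show)

-- An "evaluator" is just a function that pattern-matches on the syntax:
-- here, truth-value semantics under a valuation of the atoms
eval :: (String -> Bool) -> Prop -> Bool
eval v (Atom s)  = v s
eval v (Neg p)   = not (eval v p)
eval v (Imp p q) = not (eval v p) || eval v q

main :: IO ()
main = print (eval (const True) (Imp (Atom "p") (Atom "p")))
```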

VazScep wrote:For a real challenge, you might try one of the exercises from Mendelson, where you have to prove "⊢ (¬P → Q) → (¬P → ¬Q) → P".

Okay, I am a bit stumped on this one. I think I need axiom3 to prove it, and this axiom cannot be used implicitly within this proof language, so I need to instantiate it manually and then bring it into the proof using the UseTheorem constructor.
The reason I need it is that I need the following auxiliary theorems:
1) |- (~p -> ~q) -> (q -> p)
2) |- (~p -> q) -> (~q -> p)

Number 1 is easy to do with "instL [p, q] axiom3".
Number 2 is harder. Maybe it's just my mind being burned out from making sense of all that code and then trying to solve this, but if I do "instL [p, Not q] axiom3" then I get "|- (~p ==> ~~q) ==> ~q ==> p", so I still have that double negation to get rid of. I would therefore need a theorem like "|- (p ==> q) ==> (p ==> ~~q)", and that last one I am not seeing how to prove...
Will give it another go tomorrow.
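In the meantime, to keep the moving parts straight in my head, here is a stripped-down, self-contained kernel in the same LCF style (my own hypothetical names, not the actual Bootstrap module), with the classic derivation of |- p ==> p from axioms 1 and 2 by modus ponens:

```haskell
-- Hypothetical toy kernel, not the real Bootstrap module
data Term = Var String | Not Term | Term :=>: Term deriving (Eq, Show)
infixr 5 :=>:

-- Theorems can only be built via the axiom functions and mp below
newtype Theorem = Theorem Term deriving (Eq, Show)

concl :: Theorem -> Term
concl (Theorem t) = t

-- The three axiom schemas of the Hilbert system
axiom1 :: Term -> Term -> Theorem
axiom1 p q = Theorem (p :=>: q :=>: p)

axiom2 :: Term -> Term -> Term -> Theorem
axiom2 p q r = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: p :=>: r)

axiom3 :: Term -> Term -> Theorem   -- included for completeness
axiom3 p q = Theorem ((Not p :=>: Not q) :=>: q :=>: p)

-- Modus ponens, the only inference rule
mp :: Theorem -> Theorem -> Maybe Theorem
mp (Theorem (a :=>: b)) (Theorem a') | a == a' = Just (Theorem b)
mp _ _ = Nothing

-- |- p ==> p, derived from axioms 1 and 2 alone
truth :: Term -> Maybe Theorem
truth p = do
  step1 <- mp (axiom2 p (p :=>: p) p) (axiom1 p (p :=>: p))
  mp step1 (axiom1 p p)

main :: IO ()
main = print (fmap concl (truth (Var "p")))
```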

VazScep wrote:
I should probably stop there, but I think I should say something about the next stage. One of the things I wanted to do was to implement a language for equational reasoning. That way, if you can prove that one proposition is logically equivalent to another (say that ¬¬P ↔ P), then you can substitute occurrences of the left hand side with the right hand side.

Okay, but I still need to be able to prove that one with just these three axioms!

VazScep wrote:The language we implement for this is called a "conversions" language. It is not based on an algebraic data-type, but is instead based on combinators. Combinators are just higher-order functions composed in a "point-free" style, so that you don't see the underlying inputs and outputs. There are interesting theoretical distinctions between these two styles of language. One language is defined by a syntax and is then given an explicit semantics. The other is defined directly by its semantics.

Yes, and I also remember that monads are another example of a combinator language, so I know I have used one, but I have never tried implementing a new one.
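To fix the idea in my head, here is a toy version of what I imagine such a conversions language looks like, simplified so that a conversion just rewrites a term or fails (in the real thing it would presumably return an equivalence theorem); all the names here are mine:

```haskell
data Term = Var String | Not Term | Term :=>: Term deriving (Eq, Show)
infixr 5 :=>:

-- A conversion rewrites a term, or fails
type Conv = Term -> Maybe Term

-- Primitive conversion: strip a double negation at the top of a term
doubleNeg :: Conv
doubleNeg (Not (Not t)) = Just t
doubleNeg _             = Nothing

-- Combinators: sequencing, alternation, and the identity conversion
thenC :: Conv -> Conv -> Conv
thenC c1 c2 t = c1 t >>= c2

orElseC :: Conv -> Conv -> Conv
orElseC c1 c2 t = maybe (c2 t) Just (c1 t)

allC :: Conv          -- identity: the unit for thenC
allC = Just

tryC :: Conv -> Conv  -- apply if possible, otherwise leave unchanged
tryC c = c `orElseC` allC

-- Apply a conversion to the immediate subterms
subC :: Conv -> Conv
subC c (Not t)    = Not <$> c t
subC c (a :=>: b) = (:=>:) <$> c a <*> c b
subC _ t          = Just t

-- Bottom-up traversal: try the conversion on every subterm
depthC :: Conv -> Conv
depthC c = subC (depthC c) `thenC` tryC c

main :: IO ()
main = print (depthC doubleNeg (Not (Not (Var "p")) :=>: Var "q"))
```

The point, as I understand it, is that there is no syntax tree for the conversions themselves: the language is its semantics, built up from a few primitives by the combinators.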

VazScep wrote:
This is a pretty rich combinator language already, with some nice algebraic structure, but the goal here is to be able to write lots of new more sophisticated conversions by composing the primitive conversions. It's kind of overkill for propositional logic, but when you're working in higher-order theories, you want to use this sort of language to build powerful rewrite based automation and domain specific rewriting automation.

Okay, I will give that one a go as soon as I nail down this Bootstrap module.

VazScep wrote:I don't have any such experience. My experience over the last 10 years is that Lisp has gone from being a dead language to a fully rejuvenated one, albeit taking the form of the excellent Lisp dialect, Clojure. Despite being only a few years old, I see 3 Clojure books on Amazon, with another soon to be published by O'Reilly (despite O'Reilly having an official policy to avoid publishing Lisp books). The newsgroup comp.lang.lisp, formerly a Common Lisp only group (Schemers were redirected to comp.lang.scheme), now appears to be a Clojure group.

The more appropriate question to ask, I think, is what things have suddenly got people interested in Lisp again.

Ah, I see. I had been assuming it was mostly dead, which might be an overstatement.
I knew about Clojure, but I remembered that it is not an offspring of academia, and I was not sure how popular it is.

VazScep wrote:I doubt it. I did my degree with the Open University. They're much more of a vocational teaching institution than a research one.

Ah, okay, my mistake again.

VazScep wrote:There was a long discussion at my current university that I was invited to participate in, discussing a replacement for the first-year Java course. No-one mentioned any research constraints. All the issues were teaching related. The main constraint was the software engineering course, a mandatory taught course in the fourth year, which is really only suitable for a Java-like language: it is based heavily around things such as UML, code-refactoring tools and GUI development. Moreover, the project involves the students choosing and contributing to a large open source project, which means that there has to be a lot of big projects out there (>100K lines) to choose from. Python, the main contender as a replacement, couldn't meet those requirements. Besides, it wasn't considered "mainstream" enough to teach.

That irks me a bit. Perhaps I am being too naive here, but I think the university's place is to teach students how to think and learn for themselves, not to focus too much on directly teaching trade skills.

VazScep wrote:I am, I think, but only in interpreted code (you already have full RTTI at the interpreter, but you can't get those types as Haskell values without using Typeable).

I thought you meant RTTI in the compiled code, which is why I was concerned about the optimizer.
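For my own notes, this is how I understand the Typeable escape hatch: typeOf and cast from Data.Typeable are the standard functions, though the example itself is mine.

```haskell
import Data.Typeable (Typeable, typeOf, cast)

-- typeOf reifies a value's type as a TypeRep we can inspect at runtime;
-- cast recovers the value at a specific type, if the types match
describe :: Typeable a => a -> String
describe x = case cast x :: Maybe Int of
  Just n  -> "an Int: " ++ show n
  Nothing -> "something of type " ++ show (typeOf x)

main :: IO ()
main = do
  putStrLn (describe (42 :: Int))
  putStrLn (describe False)
```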

VazScep wrote:I don't like the idea of having full RTTI in a final build, and not just because of performance issues. It makes perfect sense in a language like Common Lisp or Smalltalk, which are highly dynamic, and therefore at a high-risk of being buggy, but whose compiled runtime images are expected to have extremely long lifetimes. But I don't think it makes much sense in compiled Haskell or Ocaml. We can write much more secure software, so we don't need the reflection so much, and I think having it available just opens the door to seriously misguided hackery.

I am not so sure here. I understand that Haskell makes it a lot easier for someone who is trying to write a program and prove that it is bug-free, but I am not so sure it would significantly reduce the number of bugs when it's the average industry programmer writing the code. There are just a lot of bugs that the type system will not catch.

VazScep wrote:But at the interpreter, I want as much reflection as I can get. In interactive theorem-proving in HOL Light and Isabelle, we work entirely in an interpreted environment. The interface to the system in HOL Light (and optionally in Isabelle) is the interpreter. This is very much in the Common Lisp/Smalltalk philosophy (Isabelle runs on Poly/ML which was written in Common Lisp before it became self bootstrapping). Interpreter sessions in these languages last a long time (days, possibly weeks). Thus, in Poly/ML, you need a facility to save the runtime image and reload it later. In Ocaml, we are much more impoverished, and have to resort to dmtcp.

I see. But then, are these interpreter sessions secure? I mean, I get the impression that whoever is using one could just do something similar to what I did and violate the encapsulation of the kernel.