Posted: Mar 16, 2012 10:23 am
by VazScep
mizvekov wrote:Well, I didn't mean for you to go that far and try to have a complete understanding of all that happened. I was simply asking what kind of problems you experienced with LISP that had a negative impact on its adoption.
I don't have any such experience. My experience over the last 10 years is that Lisp has gone from being a dead language to a fully rejuvenated one, albeit in the form of the excellent Lisp dialect Clojure. Despite the language being only a few years old, I see 3 Clojure books on Amazon, with another soon to be published by O'Reilly (despite O'Reilly having an official policy of avoiding Lisp books). The newsgroup comp.lang.lisp, formerly a Common Lisp-only group (Schemers were redirected to comp.lang.scheme), now appears to be a Clojure group.

The more appropriate question to ask, I think, is what things have suddenly got people interested in Lisp again.

mizvekov wrote:Yeah, that's the kind of answer I was looking for, but still, I don't think that's the complete picture.
I didn't expect it to be. I have no idea what the complete picture is or how to go about finding it. I was just relating my own university experience.

mizvekov wrote:Smalltalk got the axe here probably also because your university was not using it academically, I mean beyond as a teaching tool for undergraduates. The guys doing their masters and PhDs were probably not finding it as interesting anymore.
Do you think that was also because of industry pressure?
I doubt it. I did my degree with the Open University. They're much more of a vocational teaching institution than a research one.

There was a long discussion at my current university, which I was invited to participate in, about a replacement for the first-year Java course. No-one mentioned any research constraints. All the issues were teaching-related. The main constraint was the software engineering course, a mandatory taught course in the fourth year, which is really only suitable for a Java-like language: it is based heavily around things such as UML, code-refactoring tools and GUI development. Moreover, the project involves the students choosing and contributing to a large open-source project, which means that there have to be a lot of big projects out there (>100K lines) to choose from. Python, the main contender as a replacement, couldn't meet those requirements. Besides, it wasn't considered "mainstream" enough to teach.

All of these issues seem to me to be entirely circular.

mizvekov wrote:I am not such a pessimist here. I think academia has some influence on the future of programming languages, and it seems, at least to me, that LISP simply started losing appeal there.
I couldn't say. I think it's too university dependent. Edinburgh is obsessed with type-theory, but I'm not sure how many other universities are. A friend of mine is doing his PhD at Imperial College in London, and says that everyone's into Prolog down there. Now I thought Prolog was completely dead.

But even in interactive theorem proving, Common Lisp is still well used in the form of ACL2. Several papers on it were presented at ITP last year, and several of the researchers here use it. Since it's developed at the University of Texas at Austin, my guess is that their "Lisp and Symbolic Computation" course is likely to be pretty good.

mizvekov wrote:I thought you were talking about runtime introspection.
I am, I think, but only in interpreted code (you already have full RTTI at the interpreter, but you can't get those types as Haskell values without using Typeable).
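To make that concrete (just a toy sketch, nothing to do with HOL Light): the interpreter knows the type of every expression you give it, but to get hold of a type as a first-class Haskell value you go through Typeable, along these lines:

    import Data.Typeable (Typeable, typeOf)

    -- typeOf reifies the type of a (monomorphic) value as an ordinary
    -- Haskell value, a TypeRep, which can then be inspected at runtime.
    describe :: Typeable a => a -> String
    describe x = "a value of type " ++ show (typeOf x)

    main :: IO ()
    main = putStrLn (describe (Just 'c'))   -- prints: a value of type Maybe Char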

I don't like the idea of having full RTTI in a final build, and not just because of performance issues. It makes perfect sense in languages like Common Lisp or Smalltalk, which are highly dynamic, and therefore at high risk of being buggy, but whose compiled runtime images are expected to have extremely long lifetimes. But I don't think it makes much sense in compiled Haskell or OCaml. We can write much more secure software, so we don't need the reflection so much, and I think having it available just opens the door to seriously misguided hackery.

But at the interpreter, I want as much reflection as I can get. In interactive theorem proving with HOL Light and Isabelle, we work entirely in an interpreted environment. The interface to the system in HOL Light (and optionally in Isabelle) is the interpreter. This is very much in the Common Lisp/Smalltalk philosophy (Isabelle runs on Poly/ML, which was written in Common Lisp before it became self-bootstrapping). Interpreter sessions in these systems last a long time (days, possibly weeks), so Poly/ML provides a facility to save the runtime image and reload it later. In OCaml, we are much more impoverished, and have to resort to dmtcp.

I want reflection so that I can robustly improve my interface by adding new functionality to my interpreter, functionality that can query any aspect of the running system. Once I can do that, I don't see any limit to how much more sophisticated a theorem-proving environment I can make my interpreter. But I don't think this would just be a benefit to theorem proving. If I can make my interpreter into a better theorem prover, I can make it into a better tool for general software development.
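To give a rough flavour of what I mean (a GHCi sketch rather than the OCaml toplevel I actually sit in, and currentGoals is a made-up name standing for whatever prover state the session holds), GHCi's :def already lets you bolt a new command onto the interpreter whose body is just ordinary interpreted code:

    ghci> -- :def takes a command name and an expression of type String -> IO String;
    ghci> -- the String it returns is run as further interpreter commands.
    ghci> -- ("currentGoals" is hypothetical: whatever top-level state you have around.)
    ghci> :def goals (\_ -> return "print currentGoals")
    ghci> :goals

Scale that idea up to commands that can query and manipulate the whole running system, and you have the kind of environment I'm after.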