Posted: Mar 12, 2012 5:29 am
by mizvekov
VazScep wrote:Ah, that's interesting. I'm not sure whether it's a good idea to allow records and tuples to intermix like that. It breaks the abstraction. But then, it makes sense if you seriously want to regard records as just glorified tuples.

Well, in any case, nothing stops you from inheriting from a tuple and then providing your own get/set methods.
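For instance, here is a hypothetical sketch of that idea (not Fusion itself; the Person type and its accessors are my own invention): derive from std::tuple and layer named accessors on top, so the "record" is still usable as a plain tuple.

```cpp
#include <string>
#include <tuple>

// Hypothetical sketch: a "record" that really is a glorified tuple.
// Person derives from std::tuple and adds named accessors, while
// std::get and tuple comparisons keep working on it unchanged.
struct Person : std::tuple<std::string, int> {
    using std::tuple<std::string, int>::tuple;  // inherit the tuple constructors

    std::string &name() { return std::get<0>(*this); }
    int         &age()  { return std::get<1>(*this); }
};
```

Because Person is-a tuple, anything expecting the tuple abstraction (std::get, std::tie, lexicographic comparison) still sees through to it.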

VazScep wrote:Personally, I like it. It's very much in the Haskell way of doing things, where a record with n fields is just sugar for a data constructor with n arguments.

Yes. I came to use Fusion because of another Boost library, called Spirit, which is basically a scanner/parser library.
With it, you can very seamlessly create a parser from a BNF grammar and have it generate an AST from the input with very little effort. It will normally build this AST from STL containers (or their Boost equivalents) and tuples. And here Fusion helps, because then you can use your own custom structs instead of tuples.

VazScep wrote:
Is it also the case that templates can make inconsistent assumptions about each other? So you define a template which instantiates another in a way that is always going to lead to a type-error, and yet this is only known when you actually go ahead and instantiate that first template?

Yes :(

VazScep wrote:
I'd say the difference between C++ templates and true polymorphism is that one is a schematic quantifier while the other is semantic quantifier. That's why, in C++, you can have an explosion in the number of classes and functions produced by templates, while things remain constant in Ocaml (at the expense of needing a hashtable to look up methods rather than a vtable).

Yeah I agree.

VazScep wrote:Ah. Where would that be used? Presumably, you'd need guarantees that the original data wasn't being held by someone else?

Well, it does get used implicitly in cases where it has no user-visible effects, such as any assignment that would obliterate the original value anyway.
Or it can be used explicitly with std::move. In that case, you are exposed to the original 'moved-from' object.
There isn't much in the way of guarantees about what state this object will be left in, except that it must remain destructible, i.e. the destructor will not crash when it is called. Everything else is up to whoever implements the class. The standard library classes, though, are guaranteed to be left in a valid (if unspecified) state; in practice, a moved-from vector will be empty.
So, since it is up to the implementer to define, you as a user at least have the guarantee that the 'moved-to' object will be in the intended state. You can assume that whoever wrote the class you are using either did not provide a move constructor or provided one that works.
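A minimal sketch of an explicit move (the take() helper is my own, just for illustration):

```cpp
#include <string>
#include <utility>
#include <vector>

// take() steals the vector's buffer instead of copying it. The
// moved-from vector is left in a valid state (in practice, empty),
// but you should not rely on its contents afterwards.
std::vector<std::string> take(std::vector<std::string> &&v) {
    return std::move(v);
}
```

The whole point is that transferring the buffer is O(1), whereas copying would be O(n) in the number and size of the strings.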

VazScep wrote:I don't understand. Even if you don't declare the lambda mutable, what would happen if the lambda calls a destructive member function on the captured variable? Or is this more about assignment?

Well, that depends on what you mean by "destructive member".
You can easily see what happens in the lambda case if you think about its equivalent functor.
So for example if you have this lambda:
Code: Select all
[b]() { b.destructive_method(); }

Its equivalent functor will be something like:
Code: Select all
struct Lambda {
    const B b;
    Lambda(B &b_) : b(b_) {}
    void operator()() { b.destructive_method(); }
};

Whereas if you had:
Code: Select all
[b]() mutable { b.destructive_method(); }

The equivalent one is:
Code: Select all
struct Lambda {
    B b;
    Lambda(B &b_) : b(b_) {}
    void operator()() { b.destructive_method(); }
};

Notice how the constness of b differs between these two. (Strictly speaking, the compiler makes operator() const rather than making the member const, but the effect on b inside the body is the same.)

Now, if you had B defined as something like this:
Code: Select all
struct B {
    int val;
    void destructive_method() { val = 0; }
};


The first lambda example and its equivalent functor would both fail to compile, because destructive_method is not a "const" method.
What happens here is basically that methods in a class take a hidden parameter that is passed implicitly, called the 'this' parameter.
It is as if you had the below in C:
Code: Select all
void destructive_method(struct B *this) { this->val = 0; }

And when you try to call it passing a const pointer as the 'this' parameter, it fails to compile because the parameter's type cannot be implicitly converted.
You can fix it by doing this instead:
Code: Select all
void destructive_method() const { val = 0; }

which is equivalent to:
Code: Select all
void destructive_method(const struct B *this) { this->val = 0; }

And then it will get past that point, but fail again on "val = 0", because that is really "this->val = 0;" with 'this' being a const pointer, which is not allowed.
But the question then becomes: can this method be destructive even if the pointer to its own instance is const?
The answer is absofuckinglutely yes: that method still has access to global data, which can be (and usually is) declared non-const, and it can still call OS APIs and such.
And it can even do something evil like:
Code: Select all
void destructive_method() const { const_cast<B*>(this)->val = 0; }

So anyway, it is up to whoever writes the class to decide what happens, and if that person wants to be a raving lunatic, the type system will not stop him.
But in general, no sane library will do something like this.
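To illustrate the global-data point with a sketch of my own (the Logger class is hypothetical): a const method cannot touch its own members, but nothing stops it from mutating state outside the object, so const is not a purity guarantee.

```cpp
// 'const' on a method only protects *this; globals are fair game.
int call_count = 0;  // global, non-const

struct Logger {
    void log() const { ++call_count; }  // legal: call_count is not a member
};
```

No const_cast tricks needed here; this compiles and runs without complaint, even when called through a const object.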

VazScep wrote:Hey awesome! Glad you gave it a try!

Of course I did!

VazScep wrote:
Firstly, what are called "axioms" in the code are actually axiom schemes, right?
Yes. Though actually, in this code, everything is a scheme, axioms and theorems included.

Ah ok.
VazScep wrote:
In textbook presentations of this system, only axioms are schemes. However, you quickly realise as you go through the subsequent proof theory that everything is treated schematically: axioms, theorems and even the proofs. Technically, the proof that "P => P" is a completely different object to the proof that "Q => Q". But if you stick to this in an implementation, you're going to just see a huge explosion in the number of proof objects that you need to achieve anything (pretty much the exact situation you have with templates).

Yeah I see it now :grin:

VazScep wrote:
The alternative is to provide a new inference rule "inst" which allows you to instantiate any axiom and any theorem, by replacing variables systematically with arbitrary terms.

Yeah, that's what confused me; I did not realize at the time that the parameter f was supposed to be a function which maps the original "p", "q" or "r" to your own designators. Now it's starting to become clear how this can be used.

VazScep wrote:
I have, however, made things perhaps a little bit too abstract. Normally, you think of variables in propositional logic as just strings, but they can be drawn from any set. And so, in this implementation, I've allowed variables to be drawn from any type. Simple string based variables are things like

Code: Select all
Var "p"


but you could just as easily have numerical variables as with

Code: Select all
Var 10
.

Yeah, I see, but that wasn't really a problem; the big conceptual barrier, I think, was the 'f' parameter, like I said.

VazScep wrote:Ah, this is just a convenient trick of ghci. You've actually violated the encapsulation of the logical kernel. Save the code I provided you as a file "Propositional.hs". Then start a new file beginning with

Code: Select all
import Propositional


and load that. You'll now find that you cannot use "Theorem" as a data constructor.

Oh yes :doh:

VazScep wrote:
The three axioms provided are schematic, and their variables are represented by strings. To instantiate the axioms, you supply a function which maps those strings to new terms. So you could, for instance, write

Code: Select all
> inst (\v -> if v == "p" then Var "q" else Var "r") axiom1
Theorem (Var "q" :=>: (Var "r" :=>: Var "q"))

For convenience, I have a set of utility functions here:
Code: Select all
module Utils where

import Propositional

import Data.List
import Data.Maybe (mapMaybe)

p = Var "p"
q = Var "q"
r = Var "r"

-- Derived syntax
p \/ q  = Not p :=>: q
p /\ q  = Not (p :=>: Not q)
p <=> q = (p :=>: q) /\ (q :=>: p)
t  = p :=>: p
f  = Not t
       
printTerm :: Term String -> String
printTerm (Var p)             = p
printTerm (Not (Var p))       = "~" ++ p
printTerm (Not (Not p))       = "~" ++ printTerm (Not p)
printTerm (Not p)             = "~(" ++ printTerm p ++ ")"
printTerm ((p :=>: q) :=>: r) = "(" ++ printTerm (p :=>: q) ++ ")"
                               ++ " ==> " ++ printTerm r
printTerm (p :=>: q)          = printTerm p ++ " ==> " ++ printTerm q

printThm thm = "|- " ++ printTerm (theoremTerm thm)

-- Retrieve the variable names in a term
vars :: Eq a => Term a -> [a]
vars (Var p)    = [p]
vars (Not p)    = vars p
vars (p :=>: q) = nub $ vars p ++ vars q
       
-- Instantiate variables in a term using a lookup
instTermM :: (Eq a) => (a -> Term b) -> [(a, Term b)] -> Term a -> Term b
instTermM dflt l = instTerm f
  where f p = maybe (dflt p) id (lookup p l)
       
-- Instantiate variables in a theorem using a lookup
instM :: (Eq a) => (a -> Term b) -> [(a, Term b)] -> Theorem a -> Theorem b
instM dflt l = inst f
  where f p = maybe (dflt p) id (lookup p l)
       
-- Instantiate variables in the order that they appear in a depth-first traversal
instL :: Eq a => [Term a] -> Theorem a -> Theorem a
instL l p = instM Var (zip (vars (theoremTerm p)) l) p
       
-- Append primes to ensure that the second argument does not appear in the first
avoid :: [String] -> String -> String
avoid ps p = p ++ replicate maxPrimes '\''
  where suffixes  = mapMaybe (stripPrefix p) ps
        maxPrimes = (maximum $ 0 : map primes suffixes)
        primes cs =
          case span (== '\'') cs of
            (ps,"") -> length ps + 1
            _       -> 0
           
-- Return an instantiation after matching two terms
match :: (Eq a, Eq b) => Term a -> Term b -> [[(a, Term b)]]
match = m []
  where m is (Var p) t =
          case lookup p is of
            Nothing -> [ (p, t) : is ]
            Just t' -> [ is | t == t' ]
        m is (Not t) (Not t')        = m is t t'
        m is (a :=>: c) (a' :=>: c') = concat [ m is' c c' | is' <- m is a a' ]
        m _ _ _                      = []


With these, you can now more simply write:

Code: Select all
> instL [q, p :=>: p] axiom1
Theorem (Var "q" :=>: ((Var "p" :=>: Var "p") :=>: Var "q"))


Okay great, I will try it.

VazScep wrote:It's not obvious, don't worry. Here's a simple proof (actually the first proof from Mendelson's Mathematical Logic), showing that P => P:

Code: Select all
pp = do
  step1 <- mp (instL [p,p:=>:p,p] axiom2) (instL [p,p:=>:p] axiom1)
  return $ mp step1 (instL [p,p] axiom1)

Okay great, much better. Anyway, thank you again; I will try playing with it a little.

VazScep wrote:
mizvekov wrote:Real programmers use none of this patch cord crap. Give me a magnetized needle, some core memory, and a cup of tea please.
There's an emacs function for that.

Actually, there is an emacs function which can implement any function you can dream of. The only problem, though, is that you have to specify what you want it to do in LISP :crazy:
epepke wrote:Hah! New-fangled magnetics! We had to do our computing with falling dominoes!

(Actually, I did. OR gates are easy, but AND and NOT gates are doable.)

Hah, well, if you can implement NOT and OR, you can implement AND as ~(~a | ~b)
But I imagine the biggest problem with such a machine is getting it synchronized.
Or can you implement a 'clock' for it? :crazy:
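That AND-from-NOT-and-OR construction is just De Morgan's law; here is a quick exhaustive check (the helper function is mine, purely for illustration):

```cpp
// De Morgan's law: a AND b == NOT(NOT a OR NOT b).
// With only two boolean inputs, all four cases can be checked directly.
bool and_from_not_or(bool a, bool b) {
    return !(!a || !b);
}
```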