Moderators: kiore, The_Metatron, Blip
VazScep wrote:Interesting. I thought that GTK was the C library and Qt very much the modern C++ library. I suppose they've been around long enough to both be "outdated" in their own ways.
VazScep wrote:
Though since using languages outside the C family, I've come to appreciate technologies like GTK and OpenGL, which use such simple C code that they become almost trivial to bind to from other languages.
VazScep wrote:Ugh. No fucking idea, but you might well be right about Java's GUI situation. But I'm not sure Microsoft is doing any better. While looking into GUI development on Windows, I was put off by how quickly Microsoft moved from Windows Forms to Windows Presentation Foundation, by rumours that Microsoft might be ditching Silverlight and moving to Javascript, and then by their overhaul of the look and feel of their widgets with Windows 8. I've read numerous concerns from developers asking "what does Microsoft want us to use?"
VazScep wrote:
GUIs to me seem like a problem that no-one in the software industry has ever really figured out, possibly because a user's expectations of a GUI aren't particularly amenable to precise engineering. I'm only messing around with GUIs as part of a hobby project, and I'm doing it in Ocaml using the Ocsigen framework, which has so far blown me away. But even their framework has yet to commit to using callbacks for event handling or to going with the more declarative reactive frameworks.
VazScep wrote:Ah, okay. It would probably vary with the universities anyway. Being at Edinburgh, ML and Haskell are a pretty big deal. Phil Wadler's the professor of computer science here, a position once held by Robin Milner.
VazScep wrote:Cool. Well, I don't know if I'm going to be able to explain it very well. But here's the most basic idea. Suppose you want to prove stuff in propositional logic. You could do this by writing a SAT solver, but this has a few drawbacks: firstly, the code is probably going to be very complicated, and there's every chance it will contain the odd bug, which is useless for mission-critical verification; secondly, it's not going to scale to logics which are undecidable, such as first-order logic. So you instead work from axiomatics. It turns out that everything in propositional logic can be proven by instantiating the following axioms, and applying modus ponens appropriately:
1) P -> (Q -> P)
2) (P -> Q -> R) -> (P -> Q) -> (P -> R)
3) (~P -> ~Q) -> (Q -> P)
You can code this up as follows:
Here, the syntax of propositional logic is captured by a simple data type. The theorems are encapsulated as a type "Theorem" whose data constructors we do not export. Thus, the only theorems that clients can construct are "axiom1", "axiom2", "axiom3"; instantiations of the form "inst f thm"; and theorems resulting from modus ponens. It is impossible for a client to ever construct a theorem such as "p -> ~p", since this is not a tautology. The only theorems that can be constructed are precisely the theorems of propositional logic.
- Code: Select all
-- An encapsulated logical kernel for a Hilbert-style Propositional Calculus
module Propositional (Theorem, Term(..), axiom1, axiom2, axiom3, mp, instTerm, inst,
                      theoremTerm) where

infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show, Eq, Ord)
newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)
theoremTerm :: Theorem a -> Term a
theoremTerm (Theorem t) = t
(p,q,r) = (Var "p",Var "q",Var "r")
axiom1 :: Theorem String
axiom1 = Theorem (p :=>: q :=>: p)
axiom2 :: Theorem String
axiom2 = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))
axiom3 :: Theorem String
axiom3 = Theorem ((Not p :=>: Not q) :=>: q :=>: p)
mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
instTerm :: (a -> Term b) -> Term a -> Term b
instTerm f (Var p) = f p
instTerm f (Not t) = Not (instTerm f t)
instTerm f (a :=>: c) = instTerm f a :=>: instTerm f c
inst :: (a -> Term b) -> Theorem a -> Theorem b
inst f (Theorem t) = Theorem (instTerm f t)
VazScep wrote:
This is called a "logical kernel", and its advantage is that it is made up of very little code that is easy to verify as bug-free (there isn't even a single loop to consider). So a great deal of trust can be placed in this code: you can be sure that clients cannot use it to generate non-theorems.
On top of this, you wrap a whole load of additional functionality, probably culminating in a complete tautology-checker. But the great thing about the tautology-checker is that its outputs are guaranteed correct with respect to the kernel. The tautology checker would probably input a Term, and if the term is a tautology, it would output a Theorem. And since the only ways to construct theorems are to apply the rules of kernel, you can be absolutely sure that if it does output a theorem, that theorem really is a tautology.
This is just the beginning. The design of the proof tools you build on top is something else.
VazScep wrote:Okay, so exponential_distribution is a first-class function? And one thing we're seeing here is the use of templates to parameterise a first-class function? And presumably, there is also partial specialisation going on, to tailor the implementation to doubles as opposed to some other type?
template<typename RealType> class exponential_distribution { ... }
VazScep wrote:Can you use bind with any function, or does it have to be declared in a special way? And how does this interact with the object system?
VazScep wrote:
Can I get a virtual function of a class as a first-class function, and have it close over its instance?
std::default_random_engine eng;
std::exponential_distribution<double> dist(1.0);
auto random = std::bind(dist, eng);
for (;;)
std::cout << random() << std::endl;
auto random = std::bind(std::exponential_distribution<double>(1.0), std::default_random_engine());
VazScep wrote:
That is very sweet. Have you compared it much to the random generators in Haskell? It looks pretty competitive in terms of expressive power (speedwise, forget it).
stalidon wrote:Oh yeah, I'm all for entertainment. Wouldn't dream of having a dev team without it.
In the general 'industry' however... they want you to produce 'readable' code. That is, code that can be read by an 'analyst' that specializes in Excel macros.
stalidon wrote:
(Not laughing at you jamest, but at even considering that idea in my head.)
Programming, especially the programming 'industry', is plagued with religious wars about what's the best OS, programming paradigm, programming team management methodology, etc. The ironic part is, of course, that it's all about what's the 'perfect toy'. There's absolutely no science in 'what's the best methodology to build a Lego tower'.
mizvekov wrote:Yes, there are a few additions to the standard, which came from Boost, that will help here.
You can start by using std::shared_ptr, a reference-counted pointer container which deletes the pointed-to object automatically when the count reaches 0.
epepke wrote:
A student came to the master and said, I figured out memory management! All we have to do is have a counter that is incremented when something is set to point to it and decremented when it is set elsewhere. Just delete when the reference gets to zero. The master smiled and replied
Yes. They effectively appear in Java as well, when you have a class with a single method such as "doIt".
mizvekov wrote:So yes, it is parametrized to work with any of the float types.
The thing is that instances of classes can be used as functions, if you overload the function call operator '()' for them. These
kinds of classes are usually called Functors in the literature. Here is an example:
- Code: Select all
struct B {
int operator()(int a, int b) { return a + b; }
};
With this you can do:
- Code: Select all
B b;
int c = b(2, 3); // c = 5
And indeed, these functors are what many people used to implement lambdas in the past before the standard supported it.
Okay. I hadn't actually noticed this, but yes, I'd have hoped there would be a mechanism which did the variable capture automatically.
For example, here is how you would have written the example I gave using Functors instead:
- Code: Select all
#include <functional>
#include <iostream>
#include <memory>

struct B {
    int val;
    B(int v) : val(v) {}
    ~B() { std::cout << "bye " << val << std::endl; }
};

std::function<int(int)> f(std::shared_ptr<B> b) {
    //here is different, we use Functors instead of
    //return [b](int a) -> int { return a + b->val; };
    //notice how it is much more verbose
    struct Functor {
        std::shared_ptr<B> b;
        Functor(std::shared_ptr<B> b_) : b(b_) {}
        int operator()(int a) { return a + b->val; }
    };
    Functor f(b);
    return f;
}

std::function<int(int)> g() {
    auto a = std::make_shared<B>(10), b = std::make_shared<B>(20);
    return f(a);
}

int main() {
    {
        auto foo = g();
        auto bar = std::bind(foo, 1);
        std::cout << foo(2) << ' ' << bar() << std::endl;
    }
    std::cout << "the end" << std::endl;
}
It's much more cumbersome, but you can think of it as essentially the same thing as lambdas. Indeed, this is probably how the compilers implement them internally. The differences though are that there is no "this" pointer of the lambda itself which you can access, and that the variables which are captured are declared as "const" by default.
The first point allows you to use the parent's 'this' pointer seamlessly.
Example:
- Code: Select all
struct Foo {
    int var;
    std::function<void()> bar() {
        return [this]() {
            var = 10; // accessing the parent's 'var' field implicitly through the this pointer
                      // it's something more cumbersome to do with Functors...
        };
    }
};
The second point can be changed by declaring the lambda mutable, like this:
- Code: Select all
[b]() mutable -> int { return b++; }
With the above, only the lambda's internal copy of b changes.
So you can have lambdas with mutable state, which can sometimes be necessary.
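To make that point concrete, here is a minimal sketch (the helper name make_counter is my own, not from the thread) of a mutable lambda carrying state between calls:

```cpp
#include <functional>

// Sketch only: 'make_counter' is an invented helper for illustration.
// A mutable lambda keeps its own modifiable copy of the capture, so it can
// carry state between calls without touching the original variable.
inline std::function<int()> make_counter(int start) {
    return [start]() mutable { return start++; };  // increments the internal copy
}
```

Each counter produced this way has independent state, since each lambda object holds its own copy of `start`.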
By the way, as an aside, I have been declaring the lambdas' captures explicitly, just to be clearer about what is happening, but you can also capture implicitly.
Examples:
- Code: Select all
[=]( ... ) { ... } //capture all variables by value
[&]( ... ) { ... } //capture all variables by reference
[&, b]( ... ) { ... } //capture all variables by reference, except b, which is to be captured by value
And variables are only captured if either they are captured explicitly, or else there is a default capture AND they are actually used in the code.
So it is pretty safe to just capture by default.
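A small sketch of the difference between the two default captures (capture_demo is an illustrative name, not from the thread):

```cpp
#include <utility>

// Sketch only: the same variable captured by value ([=] copies it at lambda
// creation) and by reference ([&] always reads the live variable).
inline std::pair<int, int> capture_demo() {
    int x = 1;
    auto by_value = [=]() { return x; };  // snapshot of x at this point
    auto by_ref   = [&]() { return x; };  // reads the live x
    x = 42;
    return { by_value(), by_ref() };      // the copy is stale, the reference is not
}
```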
Okay, that's very cool.
VazScep wrote:std::bind is actually declared as a template, but you don't have to specify the type manually because it can be inferred from the type of the target. That means you can use it with any type on which you can use the function call operator: any conventional function, class methods, Functors (the classes that overload the function call operator, as I said above), lambdas, and most probably anything a future standard defines which satisfies the requirement.
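As a hedged illustration of the "class methods" case (the Animal/Bird types here are invented for the example): std::bind also accepts a pointer-to-member paired with an instance, and virtual dispatch still applies, so the resulting callable effectively closes over the object:

```cpp
#include <functional>
#include <memory>

// Sketch only: illustrative types, not from the thread.
struct Animal {
    virtual ~Animal() = default;
    virtual int legs() const { return 4; }
};

struct Bird : Animal {
    int legs() const override { return 2; }
};

inline int count_legs(const std::shared_ptr<Animal>& a) {
    // &Animal::legs is a pointer-to-member; bind pairs it with the instance.
    std::function<int()> f = std::bind(&Animal::legs, a);
    return f();  // dispatches dynamically through the vtable
}
```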
Yes, my C++ knowledge is very rusty. I'd even forgotten entirely about initialisation lists, but suddenly remembered them when I saw the code!
VazScep wrote:By the way, as another aside, I think I may have made that example more confusing by writing it as a one-liner.
VazScep wrote:Yes. They effectively appear in Java as well, when you have a class with a single method such as "doIt".
There is a limited correspondence between lambdas and classes. Just as lambdas can be classes with syntax sugar, a lot of classes can be records of lambdas. For instance, in Ocaml, I could fake the implementation of a Counter class which keeps track of a value that can be reset as follows:
- Code: Select all
let new_counter reset =
  let m_x = ref reset in
  let incr () = m_x := !m_x + 1 in
  let reset () = m_x := reset in
  let get () = !m_x in
  (incr, reset, get)

let counter_incr (incr, _, _) = incr ()
let counter_reset (_, reset, _) = reset ()
let counter_get (_, _, get) = get ()
The ugliness of tuples and those last three functions is usually buried under the syntax sugar of records (amusingly, a number of Lisps implement records as nothing more than a bunch of macros to write exactly this sort of code).
You can even have encapsulation based on which values you include in the returned tuple: here, I leave m_x as a "private" value, since it cannot be accessed outside the function.
VazScep wrote:
The only thing I think that is significantly missing from this picture is subtyping. And for this reason, Ocaml has its own object system, and supports subtyping in a manner which is definitely worth a look. The idea is that the subtyping is structural rather than nominal. In virtually all languages with subtyping, including C++, the type hierarchy is a directed graph which is explicitly coded by the programmer by associating each class with its named supertypes. In Ocaml, one class is a subtype of another just in case it supports all of the other's methods (this is sometimes called "duck-typing" by Python programmers, despite Python not having a type-system). Moreover, structural types can be anonymous using so-called "row-polymorphism."
VazScep wrote:
Suppose I define a function foo which takes an x and invokes its bar and baz methods as follows (# accesses instance methods):
- Code: Select all
let foo x = x#bar + List.length x#baz
Then Ocaml will infer the type of foo as:
- Code: Select all
val foo : < bar : int; baz : 'a list; .. > -> int = <fun>
This says that for any type 'a, foo is a function which takes some object supporting both a "bar : int" method and a "baz : 'a list" method, and returns an int. It can now be used with any instance of a class which happens to support these methods, even if there is no explicit subtyping involved.
VazScep wrote:Okay. I hadn't actually noticed this, but yes, I'd have hoped there would be a mechanism which did the variable capture automatically.
On this matter, this is where I still feel C++ has got things right compared with nearly every other OO language: having a copy-semantics, a reference-semantics and a const keyword, so that reasoning about mutability isn't such a nightmare.
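A tiny sketch of that point (the function names are invented for illustration): the parameter mode spells out the mutation contract right in the signature.

```cpp
#include <cstddef>
#include <string>

// Sketch only: illustrative functions, not from the thread.
inline void shout_copy(std::string s)  { s += "!"; }  // mutates a private copy
inline void shout_ref(std::string &s)  { s += "!"; }  // mutates the caller's string
inline std::size_t peek(const std::string &s) { return s.size(); }  // promises not to mutate
```

Calling `shout_copy` leaves the caller's string untouched, `shout_ref` visibly changes it, and a `const&` parameter lets the compiler reject any attempted mutation.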
VazScep wrote:
Here, I'm guessing that if you want a lambda to capture a variable that will be mutated from outside, and you want the lambda to see the update, you would generally use capture by reference?
VazScep wrote:
Other times, you might want the lambda to have its own copy, and you would use mutable (for primitive types?) and use capture by value otherwise?
VazScep wrote:Yes, my C++ knowledge is very rusty. I'd even forgotten entirely about initialisation lists, but suddenly remembered them when I saw the code!
Scarlett and Ironclad wrote:Campermon,...a middle aged, middle class, Guardian reading, dad of four, knackered hippy, woolly jumper wearing wino and science teacher.
Ah, that's interesting. I'm not sure whether it's a good idea to allow records and tuples to intermix like that. It breaks the abstraction. But then, it makes sense if you seriously want to regard records as just glorified tuples.
mizvekov wrote:Cool. C++ has tuples now, with std::tuple. But they are "disconnected" from ordinary struct types which have basically the same form. Boost Fusion is something that proposes to fix this: you can declare an ordinary class and "map" it into an equivalent tuple, so you can mix the two forms freely.
It should be more powerful than C++ templates, much as simple polymorphism is (in some respects) more powerful than C++ templates. In particular, the binding is not resolved at compile-time. Thus, it is possible to load a native-code library with a new set of class definitions at runtime, and have them work seamlessly with your existing duck-typed functions.
VazScep wrote:Again, very cool. I suppose you have duck typing in C++ when you use template parameters. But there is no way to specify up front what characteristics a type must have to be acceptable, so it stays implicit in the code. When you pass the wrong type, the error happens inside the template definition, sometimes many layers deep, instead of at the point of instantiation.
Is it also the case that templates can make inconsistent assumptions about each other? So you define a template which instantiates another in a way that is always going to lead to a type-error, and yet this is only known when you actually go ahead and instantiate that first template?
VazScep wrote:That's a capability missing from the new C++ standard. One of the reasons it's infamous for its incomprehensible error messages is that, like I said, there is no way to specify restrictions on the types that can be passed as template parameters. So usually when you pass a nonsensical type, the error happens in implementation-dependent code, sometimes many layers deep.
Ah. Where would that be used? Presumably, you'd need guarantees that the original data wasn't being held by someone else?
Yes. The new standard also defines move semantics, so you can declare a special kind of constructor, called a move constructor, which can "rob" another instance's internal data instead of copying it as a copy constructor would. That makes it pretty inexpensive to return complex data types, like std::vector, by value, where in the old standard there would be a superfluous copy of allocated data followed by deletion of the original.
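A minimal sketch of the idea, assuming an illustrative Buffer type (not from the thread): the move constructor "robs" the other instance's storage instead of deep-copying it.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sketch only: 'Buffer' is an invented type for illustration.
struct Buffer {
    std::vector<int> data;
    explicit Buffer(std::size_t n) : data(n, 0) {}
    // Steals 'other's storage; 'other' is left valid but empty afterwards.
    Buffer(Buffer&& other) noexcept : data(std::move(other.data)) {}
};

inline std::size_t size_after_move(std::size_t n) {
    Buffer a(n);
    Buffer b(std::move(a));  // O(1): the vector's internal pointers move over
    return b.data.size();
}
```

The moved-from object answers the question above: the language only applies a move when the source is an rvalue (a temporary, or something explicitly given up with std::move), which is the guarantee that nobody else still needs the original data.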
I don't understand. Even if you don't declare the lambda mutable, what would happen if the lambda calls a destructive member function on the captured variable? Or is this more about assignment?
VazScep wrote:Well, capture by value/reference and mutable are orthogonal things. If you want the lambda to have its own copy AND want to be able to modify it, then you would capture by value and declare the lambda mutable. Likewise, if you want to see external modifications to that variable AND also be able to modify that external variable, you would capture by reference (no mutable is needed there, since only by-value captures are made const).
Hey awesome! Glad you gave it a try!
By the way, I have been playing with that example you sent me, but I am a bit confused about some things.
Yes. Though actually, in this code, everything is a scheme, axioms and theorems included.
Firstly, what are called "axioms" in the code are actually axiom schemes, right?
mizvekov wrote:So for example, I can use that code to do things like this:
- Code: Select all
*Propositional> let t1 = inst theoremTerm (Theorem (Var axiom1 :=>: Var axiom3))
*Propositional> let t2 = inst theoremTerm (Theorem (Var axiom1))
*Propositional> mp t1 t2
[Theorem ((Not (Var "p") :=>: Not (Var "q")) :=>: (Var "q" :=>: Var "p"))]
mizvekov wrote:Which is alright and dandy, unless I am using that thing in an unintended way, but I can't actually see how that can be used to represent a proof of a propositional logic theorem.
mizvekov wrote:Real programmers use none of this patch cord crap. Give me a magnetized needle, some core memory, and a cup of tea please.
There's an emacs function for that.
VazScep wrote:Ah, that's interesting. I'm not sure whether it's a good idea to allow records and tuples to intermix like that. It breaks the abstraction. But then, it makes sense if you seriously want to regard records as just glorified tuples.
VazScep wrote:Personally, I like it. It's very much in the Haskell way of doing things, where a record with n fields is just sugar for a data constructor with n arguments.
VazScep wrote:
Is it also the case that templates can make inconsistent assumptions about each other? So you define a template which instantiates another in a way that is always going to lead to a type-error, and yet this is only known when you actually go ahead and instantiate that first template?
VazScep wrote:
I'd say the difference between C++ templates and true polymorphism is that one is a schematic quantifier while the other is a semantic quantifier. That's why, in C++, you can have an explosion in the number of classes and functions produced by templates, while things remain constant in Ocaml (at the expense of needing a hashtable to look up methods rather than a vtable).
VazScep wrote:Ah. Where would that be used? Presumably, you'd need guarantees that the original data wasn't being held by someone else?
VazScep wrote:I don't understand. Even if you don't declare the lambda mutable, what would happen if the lambda calls a destructive member function on the captured variable? Or is this more about assignment?
[b]() { b.destructive_method(); }
struct Lambda {
    const B b;
    Lambda(B &b_) : b(b_) {}
    void operator()() { b.destructive_method(); }
};
[b]() mutable { b.destructive_method(); }
struct Lambda {
    B b;
    Lambda(B &b_) : b(b_) {}
    void operator()() { b.destructive_method(); }
};
struct B {
    int val;
    void destructive_method() { val = 0; }
};
void destructive_method(struct B *this) { this->val = 0; }
void destructive_method() const { val = 0; }
void destructive_method(const struct B *this) { this->val = 0; }
void destructive_method() const { const_cast<B*>(this)->val = 0; }
VazScep wrote:Hey awesome! Glad you gave it a try!
VazScep wrote:Yes. Though actually, in this code, everything is a scheme, axioms and theorems included.
Firstly, what are called "axioms" in the code are actually axiom schemes, right?
VazScep wrote:
In textbook presentations of this system, only axioms are schemes. However, you quickly realise as you go through the subsequent proof theory that everything is treated schematically: axioms, theorems and even the proofs. Technically, the proof that "P => P" is a completely different object to the proof that "Q => Q". But if you stick to this in an implementation, you're going to just see a huge explosion in the number of proof objects that you need to achieve anything (pretty much the exact situation you have with templates).
VazScep wrote:
The alternative is to provide a new inference rule "inst" which allows you to instantiate any axiom and any theorem, by replacing variables systematically with arbitrary terms.
VazScep wrote:
I have, however, made things perhaps a little bit too abstract. Normally, you think of variables in propositional logic as just strings, but they can be drawn from any set. And so, in this implementation, I've allowed variables to be drawn from any type. Simple string based variables are things like
- Code: Select all
Var "p"
but you could just as easily have numerical variables, as with:
- Code: Select all
Var 10
VazScep wrote:Ah, this is just a convenient trick of ghci. You've actually violated the encapsulation of the logical kernel. Save the code I provided you as a file "Propositional.hs". Then start a new file beginning with
- Code: Select all
import Propositional
and load that. You'll now find that you cannot use "Theorem" as a data constructor.
VazScep wrote:
The three axioms provided are schematic, and their variables are represented by strings. To instantiate the axioms, you supply a function which maps those strings to new terms. So you could, for instance, write
- Code: Select all
> inst (\v -> if v == "p" then Var "q" else Var "r") axiom1
Theorem (Var "q" :=>: (Var "r" :=>: Var "q"))
For convenience, I have a set of utility functions here:
- Code: Select all
module Utils where

import Propositional
import Data.List
import Data.Maybe (mapMaybe)

p = Var "p"
q = Var "q"
r = Var "r"

-- Derived syntax
p \/ q  = Not p :=>: q
p /\ q  = Not (p :=>: Not q)
p <=> q = (p :=>: q) /\ (q :=>: p)

t = p :=>: p
f = Not t

printTerm :: Term String -> String
printTerm (Var p) = p
printTerm (Not (Var p)) = "~" ++ p
printTerm (Not (Not p)) = "~" ++ printTerm (Not p)
printTerm (Not p) = "~(" ++ printTerm p ++ ")"
printTerm ((p :=>: q) :=>: r) = "(" ++ printTerm (p :=>: q) ++ ")"
                                ++ " ==> " ++ printTerm r
printTerm (p :=>: q) = printTerm p ++ " ==> " ++ printTerm q

printThm thm = "|- " ++ printTerm (theoremTerm thm)

-- Retrieve the variable names in a term
vars :: Eq a => Term a -> [a]
vars (Var p) = [p]
vars (Not p) = vars p
vars (p :=>: q) = nub $ vars p ++ vars q

-- Instantiate variables in a term using a lookup
instTermM :: Eq a => (a -> Term b) -> [(a, Term b)] -> Term a -> Term b
instTermM dflt l = instTerm f
  where f p = maybe (dflt p) id (lookup p l)

-- Instantiate variables in a theorem using a lookup
instM :: Eq a => (a -> Term b) -> [(a, Term b)] -> Theorem a -> Theorem b
instM dflt l = inst f
  where f p = maybe (dflt p) id (lookup p l)

-- Instantiate variables in the order that they appear in a depth-first traversal
instL :: Eq a => [Term a] -> Theorem a -> Theorem a
instL l p = instM Var (zip (vars (theoremTerm p)) l) p

-- Append primes to ensure that the second argument does not appear in the first
avoid :: [String] -> String -> String
avoid ps p = p ++ replicate maxPrimes '\''
  where suffixes = mapMaybe (stripPrefix p) ps
        maxPrimes = maximum $ 0 : map primes suffixes
        primes cs =
          case span (== '\'') cs of
            (ps, "") -> length ps + 1
            _        -> 0

-- Return an instantiation after matching two terms
match :: (Eq a, Eq b) => Term a -> Term b -> [[(a, Term b)]]
match = m []
  where m is (Var p) t =
          case lookup p is of
            Nothing -> [ (p, t) : is ]
            Just t' -> [ is | t == t' ]
        m is (Not t) (Not t') = m is t t'
        m is (a :=>: c) (a' :=>: c') = concat [ m is' c c' | is' <- m is a a' ]
        m _ _ _ = []
With these, you can now more simply write:
- Code: Select all
> instL [q, p :=>: p] axiom1
Theorem (Var "q" :=>: ((Var "p" :=>: Var "p") :=>: Var "q"))
VazScep wrote:It's not obvious, don't worry. Here's a simple proof (actually the first proof from Mendelson's Mathematical Logic), showing that P => P:
- Code: Select all
pp = do
  step1 <- mp (instL [p, p :=>: p, p] axiom2) (instL [p, p :=>: p] axiom1)
  mp step1 (instL [p, p] axiom1)
epepke wrote:Hah! New-fangled magnetics! We had to do our computing with falling dominoes!
(Actually, I did. OR gates are easy, but AND and NOT gates are doable.)
mizvekov wrote:VazScep wrote:Interesting. I thought that GTK was the C library and Qt very much the modern C++ library. I suppose they've been around long enough to both be "outdated" in their own ways.
Yes, it is pretty outdated in terms of the C++ feature set it uses. And again, Qt needs a precompiler on top of it all, so you have to keep in mind that it is even cheating.
VazScep wrote:
Though since using languages outside the C family, I've come to appreciate technologies like GTK and OpenGL, which use such simple C code that they become almost trivial to bind to from other languages.
Yes, some languages do indeed make it very straightforward to bind to C, something that can't be done well with C++ due to the complex and compiler-dependent ABI. I don't think that is going to change in the next 20 years.
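One common workaround, sketched here with invented names: wrap the C++ behind a flat extern "C" facade of opaque handles and free functions, since the C ABI is the stable one that foreign-function interfaces can target.

```cpp
#include <string>

// Sketch only: all names here are illustrative, not from the thread.
namespace impl {
    struct Greeter {
        std::string prefix;
        std::string greet(const char* name) const { return prefix + name; }
    };
}

extern "C" {
    // Opaque handle + free functions: only C-compatible types cross the boundary.
    void* greeter_new() { return new impl::Greeter{"hello, "}; }
    int greeter_len(void* g, const char* name) {
        return static_cast<int>(static_cast<impl::Greeter*>(g)->greet(name).size());
    }
    void greeter_free(void* g) { delete static_cast<impl::Greeter*>(g); }
}
```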