Programming

Moderators: Calilasseia, ADParker

Re: Programming

#41  Postby think » Mar 11, 2012 12:20 am

I can see that this might be useful for a neuroscientist who has to order vast amounts of data into a comprehensible mathematical model...I just don't see what application it has for philosophy.
think
 
Posts: 628


Re: Programming

#42  Postby mizvekov » Mar 11, 2012 12:35 am

VazScep wrote:Interesting. I thought that GTK was the C library and Qt very much the modern C++ library. I suppose they've been around long enough to both be "outdated" in their own ways.

Yes, it is pretty outdated in terms of the C++ feature set it uses. And again, Qt needs a precompiler on top of it all, so you have to keep in mind that it is even cheating :)

VazScep wrote:
Though since using languages outside the C family, I've come to appreciate technologies like GTK and OpenGL, which use such simple C code that they become almost trivial to bind to from other languages.

Yes, some languages do indeed make it very straightforward to bind to C, something that can't be done well with C++ due to its complex and compiler-dependent ABI. I don't think that is going to change in the next 20 years.
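
The usual workaround, for what it's worth, is to hide the C++ behind a thin C facade: export extern "C" functions that traffic in an opaque handle, since only the C ABI is stable enough to bind against. A minimal sketch (the Counter class and the counter_* names are made up purely for illustration):
Code: Select all
// counter.cpp -- a C++ class exposed through a flat C ABI
#include <cstdint>

class Counter {
public:
    explicit Counter(std::int64_t start) : value_(start) {}
    void bump() { ++value_; }
    std::int64_t value() const { return value_; }
private:
    std::int64_t value_;
};

// Only these symbols are meant to be visible to foreign-language bindings;
// they use C linkage and an opaque handle, so no name mangling or class
// layout leaks across the boundary.
extern "C" {
    void *counter_new(std::int64_t start) { return new Counter(start); }
    void counter_bump(void *h)            { static_cast<Counter *>(h)->bump(); }
    std::int64_t counter_value(void *h)   { return static_cast<Counter *>(h)->value(); }
    void counter_free(void *h)            { delete static_cast<Counter *>(h); }
}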

VazScep wrote:Ugh. No fucking idea, but you might well be right about Java's GUI situation. But I'm not sure Microsoft is doing any better. While looking into GUI development on Windows, I was put off by how quickly Microsoft moved from Windows Forms to Windows Presentation Foundation, and by rumours that Microsoft might be ditching Silverlight and moving to Javascript, and then with their rehaul of the look and feel of their widgets with Windows 8. I've read numerous concerns from developers saying "what does Microsoft want us to use?"

Yes, I got the same reaction, although I thought that Windows Forms was just meant for "backwards compatibility" (for old programmers used to the native Windows API).
VazScep wrote:
GUIs to me seem like a problem that no-one in the software industry has ever really figured out, possibly because a user's expectations of a GUI aren't particularly amenable to precise engineering. I'm only messing around with GUIs as part of a hobby project, and I'm doing it in Ocaml using the Ocsigen framework, which has so far blown me away. But even their framework has yet to commit on using callbacks for event handling or to go with the more declarative reactive frameworks.

The whole thing keeps changing a lot anyway. A few years ago most of us wouldn't have considered touch screen interfaces, much less multitouch, and in a few years we will have to consider haptic interfaces.

VazScep wrote:Ah, okay. It would probably vary with the universities anyway. Being at Edinburgh, ML and Haskell are a pretty big deal. Phil Wadler's the professor of computer science here, a position once held by Robin Milner.

Yes, Lua was also a pretty big deal here, anyway.

VazScep wrote:Cool. Well, I don't know if I'm going to be able to explain it very well. But here's the most basic idea. Suppose you want to prove stuff in propositional logic. You could do this by writing a SAT solver, but this has a few drawbacks: firstly, the code is probably going to be very complicated, and there's every chance it will contain the odd bug, which is useless for mission-critical verification; secondly, it's not going to scale to logics which are undecidable, such as first-order logic. So you instead work from axiomatics. It turns out that everything in propositional logic can be proven by instantiating the following axioms, and applying modus ponens appropriately:

1) P -> (Q -> P)
2) (P -> Q -> R) -> (P -> Q) -> (P -> R)
3) (~P -> ~Q) -> (Q -> P)

You can code this up as follows:

Code: Select all
-- An encapsulated logical kernel for a Hilbert-style Propositional Calculus

module Propositional (Theorem, Term(..), axiom1, axiom2, axiom3, mp, instTerm, inst,
                      theoremTerm) where

infixr 8 :=>:

data Term a = Var a
            | Term a :=>: Term a
            | Not (Term a)
            deriving (Show,Eq,Ord)

newtype Theorem a = Theorem (Term a) deriving (Eq, Ord, Show)

theoremTerm :: Theorem a -> Term a
theoremTerm (Theorem t) = t

(p,q,r) = (Var "p",Var "q",Var "r")

axiom1 :: Theorem String
axiom1 = Theorem (p :=>: q :=>: p)

axiom2 :: Theorem String
axiom2 = Theorem ((p :=>: q :=>: r) :=>: (p :=>: q) :=>: (p :=>: r))

axiom3 :: Theorem String
axiom3 = Theorem ((Not p :=>: Not q) :=>: q :=>: p)

mp :: Eq a => Theorem a -> Theorem a -> [Theorem a]
mp (Theorem (p :=>: q)) (Theorem p') = [Theorem q | p == p']
                                                 
instTerm :: (a -> Term b) -> Term a -> Term b
instTerm f (Var p)    = f p
instTerm f (Not t)    = Not (instTerm f t)
instTerm f (a :=>: c) = instTerm f a :=>: instTerm f c

inst :: (a -> Term b) -> Theorem a -> Theorem b
inst f (Theorem t) = Theorem (instTerm f t)
Here, the syntax of propositional logic is captured by a simple data-type. The theorems are encapsulated as a type "Theorem" whose data constructors we do not export. Thus, the only theorems that clients can construct are "axiom1", "axiom2", "axiom3"; instantiations of the form "instTerm f thm"; and theorems resulting from modus ponens. It is impossible for a client to ever construct a theorem such as "p -> ~p", since this is not a tautology. The only theorems that can be constructed are precisely the theorems of propositional logic.

Heh pretty sweet. I already had an idea this kind of thing was possible, but I thought it would look much more complicated than that. Powerful type system indeed :)

VazScep wrote:
This is called a "logical kernel" and the advantage of it is that it is made up of very little code and it is easy to verify as bug-free (there isn't even a single loop to consider). So the guarantees this code gives you are very strong. You can be sure that clients cannot use this code in ways to generate non-theorems.

On top of this, you wrap a whole load of additional functionality, probably culminating in a complete tautology-checker. But the great thing about the tautology-checker is that its outputs are guaranteed correct with respect to the kernel. The tautology checker would probably input a Term, and if the term is a tautology, it would output a Theorem. And since the only ways to construct theorems are to apply the rules of kernel, you can be absolutely sure that if it does output a theorem, that theorem really is a tautology.

This is just the beginning. The design of the proof tools you build on top are something else.

Great, will start playing with it. Have to dust off the Haskell book to refresh my memory :)

VazScep wrote:Okay, so exponential_distribution is a first-class function? And one thing we're seeing here is the use of templates to parameterise a first-class function? And presumably, there is also partial specialisation going on, to tailor the implementation to doubles as opposed to some other type?

Well, more or less. exponential_distribution is a class template; its declaration looks something like:
Code: Select all
template<typename RealType> class exponential_distribution { ... };

So yes, it is parametrized to work with any of the floating-point types.
The thing is that instances of classes can be used as functions if you overload the function call operator '()' for them. These kinds of classes are usually called Functors in the literature. Here is an example:
Code: Select all
struct B {
    int operator()(int a, int b) { return a + b; }
};

With this you can do:
Code: Select all
B b;
int c = b(2, 3); // c = 5

And indeed, these Functors are how many people implemented lambdas in the past, before the standard supported them.
For example, here is how you would have written the example I gave using Functors instead:
Code: Select all
#include <iostream>   // std::cout
#include <functional> // std::function
#include <memory>     // std::shared_ptr, std::make_shared

struct B {
        int val;
        B(int v) : val(v) {}
        ~B() { std::cout << "bye " << val << std::endl; }
};

std::function<int(int)> f(std::shared_ptr<B> b) {
        //here is different, we use Functors instead of
        //return [b](int a) -> int { return a + b->val; };
        //notice how it is much more verbose
        struct Functor {
                std::shared_ptr<B> b;
                Functor(std::shared_ptr<B> b_) : b(b_) {}
                int operator()(int a) { return a + b->val; }
        };

        Functor f(b);
        return f;
}

std::function<int(int)> g() {
        auto a = std::make_shared<B>(10), b = std::make_shared<B>(20);
        return f(a);
}
int main() {
        {
                auto foo = g();
                auto bar = std::bind(foo, 1);
                std::cout << foo(2) << ' ' << bar() << std::endl;
        }
        std::cout << "the end" << std::endl;
}

It's much more cumbersome, but you can think of it as essentially the same thing as lambdas. Indeed, this is probably how compilers implement them internally. The differences, though, are that there is no "this" pointer of the lambda itself which you can access, and that the variables captured by value are "const" by default.
The first point allows you to use the enclosing object's 'this' pointer seamlessly.
Example:
Code: Select all
struct Foo {
     int var;
     std::function<void()> bar() {
            return [this]() {
                 var = 10; // accessing the enclosing object's 'var' field implicitly through its 'this' pointer
                           // something that is more cumbersome to do with Functors...
            };
     }
};

The second point can be changed by declaring the lambda mutable, like this:
Code: Select all
[b]() mutable -> int { return b++; }

With the above, only the lambda's internal copy of b changes.
So you can have lambdas with mutable state, which can sometimes be necessary.
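
For instance, here is a small self-contained sketch (just an illustration) of a lambda that keeps its own mutable counter; the captured copy changes while the outer variable does not:
Code: Select all
#include <iostream>

int main() {
    int n = 0;
    // 'n' is captured by value; 'mutable' lets the lambda modify its own copy.
    auto next = [n]() mutable { return ++n; };

    std::cout << next() << ' ' << next() << ' ' << next() << std::endl; // 1 2 3
    std::cout << n << std::endl;                                        // still 0
}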

By the way, as an aside, I have been declaring the lambdas' captures explicitly, just to be clearer about what is happening, but you can also capture implicitly.
Examples:
Code: Select all
[=]( ... ) { ... } //capture all variables by value
[&]( ... ) { ... } //capture all variables by reference
[&, b]( ... ) { ... } //capture all variables by reference, except b, which is to be captured by value

And variables are only captured if either they are captured explicitly, or else there is a default capture AND they are actually used in the body of the lambda.
So it is pretty safe to just use a default capture.

VazScep wrote:Can you use bind with any function, or does it have to be declared in a special way? And how does this interact with the object system?

std::bind is actually declared as a template, but you don't have to specify the types manually because they can be inferred from the type of the target. That means you can use it with any type on which you can use the function call operator: any conventional function, class member functions, Functors (the classes that overload the function call operator, as I said above), lambdas, and most probably anything a future standard defines which satisfies that requirement.
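
For example, a quick sketch of std::bind applied to an ordinary function, a Functor and a lambda alike (the names are only illustrative):
Code: Select all
#include <functional>
#include <iostream>

int add(int a, int b) { return a + b; }

struct Mul {
    int operator()(int a, int b) const { return a * b; }
};

int main() {
    using namespace std::placeholders;

    auto add5  = std::bind(add, 5, _1);    // ordinary function
    auto twice = std::bind(Mul(), 2, _1);  // Functor
    auto diff  = std::bind([](int a, int b) { return a - b; }, _1, _2); // lambda

    std::cout << add5(3) << ' ' << twice(10) << ' ' << diff(7, 4) << std::endl; // 8 20 3
}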

VazScep wrote:
Can I get a virtual function of a class as a first-class function, and have it close over its instance?

I don't see why not, but I think you have to be careful there to pass the instance as a reference to bind, or else you would be passing it a copy of the instance instead.
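
Something along these lines, as a rough sketch; note the std::ref, without which bind would store its own copy of the object and the virtual dispatch would happen on that copy (the class names are made up):
Code: Select all
#include <functional>
#include <iostream>

struct Animal {
    virtual ~Animal() {}
    virtual const char *speak() const { return "..."; }
};

struct Dog : Animal {
    const char *speak() const { return "woof"; }
};

int main() {
    Dog d;
    Animal &a = d;

    // Bind the member function to the instance by reference; the call still
    // goes through the vtable, so Dog::speak is what actually runs.
    std::function<const char *()> f = std::bind(&Animal::speak, std::ref(a));
    std::cout << f() << std::endl; // woof
}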

By the way, as another aside, I think I may have made that example more confusing by writing it as a one liner.
The below is equivalent:
Code: Select all
std::default_random_engine eng;
std::exponential_distribution<double> dist(1.0);
auto random = std::bind(dist, eng);
for (;;)
    std::cout << random() << std::endl;

When I originally wrote:
Code: Select all
auto random = std::bind(std::exponential_distribution<double>(1.0), std::default_random_engine());

I was actually constructing exponential_distribution and default_random_engine in place, so I did not have to name them.

VazScep wrote:
That is very sweet. Have you compared it much to the random generators in Haskell? It looks pretty competitive in terms of expressive power (speedwise, forget it).

Not yet, but I will.
mizvekov
 
Posts: 314
Age: 35
Male

Brazil (br)

Re: Programming

#43  Postby mizvekov » Mar 11, 2012 12:40 am

stalidon wrote:Oh yeah, I'm all for entertainment. Wouldn't dream of having a dev team without it. ;)

In the general 'industry' however... they want you to produce 'readable' code. That is, code that can be read by an 'analyst' that specializes in Excel macros. :naughty2:

Yes, I have experienced that one time :)

stalidon wrote:Your nick looks Russian ;)

:cheers:


Yeah, that's my unlimited creativity with names :)
mizvekov
 
Posts: 314
Age: 35
Male

Brazil (br)

Re: Programming

#44  Postby Jumbo » Mar 11, 2012 2:00 am

stalidon wrote:
jamest wrote:
VazScep wrote:
For myself, the only heated and highly unpleasant conversations I've had in the pub with friends have been about programming. Don't bring that shit up at the dinner table.

I hadn't realised that heated debates about programming were possible. I must get into this business!


:rofl: :rofl: :rofl: :rofl: :rofl:

(Not laughing at you jamest, but at even considering that idea in my head.)

Programming, especially the programming 'industry', is plagued with religious wars about what's the best OS, programming paradigm, programming team management methodology, etc. The ironic part is, of course, that it's all about what's the 'perfect toy'. There's absolutely no science in 'what's the best methodology to build a Lego tower'.

The first 'religious' war that sprang to mind was how to indent your code. Some of the styles preferred by many are hideous and unreadable or simply illogical. K&R style I find almost unreadable, and it's inconsistent too. Loads of people seem to love it though.

Of course everyone knows Allman style is the only sensible method but some people don't like to admit it! :whistle:
The Feynman Problem-Solving Algorithm

1. Write down the problem.
2. Think very hard.
3. Write down the answer.
Jumbo
 
Posts: 3599
Age: 39
Male

United Kingdom (uk)

Re: Truth is relative: why do we criticize religion?

#45  Postby epepke » Mar 11, 2012 2:54 am

mizvekov wrote:Yes, there are a few additions to the standard, which came from Boost, that will help here.
You can start by using std::shared_ptr, which is a reference-counted pointer container that will delete the object automatically when the count reaches 0.


A student came to the master and said, I figured out memory management! All we have to do is have a counter that is incremented when something is set to point to it and decremented when it is set elsewhere. Just delete when the reference gets to zero. The master smiled and replied

A student came to the master and said, I figured out memory management! All we have to do is have a counter that is incremented when something is set to point to it and decremented when it is set elsewhere. Just delete when the reference gets to zero. The master smiled and replied

A student came to the master and said, I figured out memory management! All we have to do is have a counter that is incremented when something is set to point to it and decremented when it is set elsewhere. Just delete when the reference gets to zero. The master smiled and replied

...


ETA: At least it's properly tail-recursive.
epepke
 
Posts: 4080

Country: US
United States (us)

Re: Truth is relative: why do we criticize religion?

#46  Postby mizvekov » Mar 11, 2012 3:03 am

epepke wrote:
A student came to the master and said, I figured out memory management! All we have to do is have a counter that is incremented when something is set to point to it and decremented when it is set elsewhere. Just delete when the reference gets to zero. The master smiled and replied

Well, shared_ptr is not meant to replace garbage collection, but the new standard does allow garbage collection now; it's just not mandatory.
It's kind of moot to discuss it now anyway, because there are no available implementations yet.

You just have to be careful with how you use shared_ptr and avoid circular reference chains.
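
For example, a minimal sketch of the kind of cycle that defeats reference counting, and the usual std::weak_ptr fix (the Node type is purely illustrative):
Code: Select all
#include <memory>

struct Node {
    std::shared_ptr<Node> next; // owning link
    std::weak_ptr<Node> prev;   // non-owning back link breaks the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();

    a->next = b; // a owns b
    b->prev = a; // the back reference does not bump a's count

    // If prev were a shared_ptr, a and b would keep each other alive forever;
    // with weak_ptr both nodes are destroyed when main returns.
}
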
Or use another language.
mizvekov
 
Posts: 314
Age: 35
Male

Brazil (br)

Re: Programming

#47  Postby VazScep » Mar 11, 2012 1:26 pm

think wrote:I can see that this might be useful for a neuroscientist who has to order vast amounts of data into a comprehensible mathematical model...I just don't see what application it has for philosophy.
edit: Actually, I'm really not interested in this.
Last edited by VazScep on Mar 11, 2012 1:32 pm, edited 2 times in total.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#48  Postby campermon » Mar 11, 2012 1:29 pm

If it can't be written in BASIC then it's not worth bothering with.

:coffee:
Scarlett and Ironclad wrote:Campermon,...a middle aged, middle class, Guardian reading, dad of four, knackered hippy, woolly jumper wearing wino and science teacher.
campermon
RS Donator
 
Posts: 17032
Age: 49
Male

United Kingdom (uk)

Re: Programming

#49  Postby VazScep » Mar 11, 2012 2:28 pm

mizvekov wrote:So yes, it is parametrized to work with any of the float types.
The thing is that instances of classes can be used as functions, if you overload the function call operator '()' for them. These
kinds of classes are usually called Functors in the literature. Here is an example:
Code: Select all
struct B {
    int operator()(int a, int b) { return a + b; }
};

With this you can do:
Code: Select all
B b;
int c = b(2, 3); // c = 5

And indeed, these functors are what many people used to implement lambdas in the past before the standard supported it.
Yes. They effectively appear in Java as well, when you have a class with a single method such as "doIt".

There is a limited correspondence between lambdas and classes. Just as lambdas can be classes with syntax sugar, a lot of classes can be records of lambdas. For instance, in OCaml, I could fake the implementation of a Counter class which keeps track of a value that can be reset as follows:

Code: Select all
let new_counter reset =
  let m_x      = ref reset in
  let incr ()  = m_x := !m_x + 1 in
  let reset () = m_x := reset in
  let get ()   = !m_x in
  (incr, reset, get)

let counter_incr (incr,_,_)   = incr ()
let counter_reset (_,reset,_) = reset ()
let counter_get (_,_,get)     = get ()

The ugliness of tuples and those last three functions is usually buried under the syntax sugar of records (amusingly, a number of Lisps implement records as nothing more than a bunch of macros to write exactly this sort of code).

You can even have encapsulation based on which values you include in the returned tuple: here, I leave m_x as a "private" value, since it cannot be accessed outside the function.

The only thing I think that is significantly missing from this picture is subtyping. And for this reason, Ocaml has its own object system, and supports subtyping in a manner which is definitely worth a look. The idea is that the subtyping is structural rather than nominal. In virtually all languages with subtyping, including C++, the type hierarchy is a directed graph which is explicitly coded by the programmer by associating each class with its named supertypes. In Ocaml, one class is a subtype of another just in case it supports all of the other's methods (this is sometimes called "duck-typing" by Python programmers, despite Python not having a type-system). Moreover, structural types can be anonymous using so-called "row-polymorphism." Suppose I define a function foo which takes an x and invokes its bar and baz methods as follows (# accesses instance methods):

Code: Select all
let foo x = x#bar + List.length x#baz


Then Ocaml will infer the type of foo as:

Code: Select all
val foo : < bar : int; baz : 'a list; .. > -> int = <fun>

This says that for any type 'a, foo is a function which takes some object supporting both a "bar : int" method and a "baz : 'a list" method, and returns an int. It can now be used with any instance of a class which happens to support these methods, even if there is no explicit subtyping involved.

mizvekov wrote:For example, here is how you would have written the example I gave using Functors instead:
Code: Select all
struct B {
        int val;
        B(int v) : val(v) {}
        ~B() { std::cout << "bye " << val << std::endl; }
};

std::function<int(int)> f(std::shared_ptr<B> b) {
        //here is different, we use Functors instead of
        //return [b](int a) -> int { return a + b->val; };
        //notice how it is much more verbose
        struct Functor {
                std::shared_ptr<B> b;
                Functor(std::shared_ptr<B> b_) : b(b_) {}
                int operator()(int a) { return a + b->val; }
        };

        Functor f(b);
        return f;
}

std::function<int(int)> g() {
        auto a = std::make_shared<B>(10), b = std::make_shared<B>(20);
        return f(a);
}
int main() {
        {
                auto foo = g();
                auto bar = std::bind(foo, 1);
                std::cout << foo(2) << ' ' << bar() << std::endl;
        }
        std::cout << "the end" << std::endl;
}

It's much more cumbersome, but you can think of it as essentially the same thing as lambdas. Indeed, this is probably how the compilers implement them internally. The differences though are that there is no "this" pointer of the lambda itself which you can access, and that the variables which are captured are declared as "const" by default.
The first point allows you to use the parents 'this' pointer seamlessly.
Example:
Code: Select all
struct Foo {
     int var;
     std::function<void()> bar() {
            return [this]() {
                 var = 10; // accessing the parents 'var' field implicitly through the this pointer
                               // it's something more cumbersome to do with Functors...
            };
     }
};

The second point can be changed by declaring the lambda mutable, like this:
Code: Select all
[b]() mutable -> int { return b++; }

With the above, only the lambda's internal copy of b changes.
So you can have lambdas with mutable state, which can sometimes be necessary.

By the way, as an aside, I have been declaring the lambdas to capture explicitly, just to be more clear what is happening,
but you can capture implicitly.
Examples:
Code: Select all
[=]( ... ) { ... } //capture all variables by value
[&]( ... ) { ... } //capture all variables by reference
[&, b]( ... ) { ... } //capture all variables by reference, except b, which is to be captured by value

And variables are only captured if either they are captured explicitly, or else there is a default capture AND they are actually used in the code.
So it is pretty safe to just capture by default.
Okay. I hadn't actually noticed this, but yes, I'd have hoped there would be a mechanism which did the variable capture automatically.

On this matter, this is where I still feel C++ has got things right compared with nearly every other OO language: having copy semantics, reference semantics and a const keyword, so that reasoning about mutability isn't such a nightmare.

Here, I'm guessing that if you want a lambda to capture a variable that will be mutated from outside, and you want the lambda to see the update, you would generally use capture by reference? Other times, you might want the lambda to have its own copy, and you would use mutable (for primitive types?) and use capture by value otherwise?

mizvekov wrote:std::bind is actually declared as a template, but you don't have to specify the type manually because it can infer from the type of the target. That means that you can use it with any type which you can use the function call operator, and that means any conventional function, class methods, Functors (which are the classes that overload the function call operator, as I said above), lambdas, and most probably anything a future standard defines which satisfies the requirement.
Okay, that's very cool.

mizvekov wrote:By the way, as another aside, I think I may have made that example more confusing by writing it as a one liner.
Yes, my C++ knowledge is very rusty. I'd even forgotten entirely about initialisation lists, but suddenly remembered them when I saw the code!
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#50  Postby mizvekov » Mar 11, 2012 8:15 pm

VazScep wrote:Yes. They effectively appear in Java as well, when you have a class with a single method such as "doIt".

There is a limited correspondence between lambdas and classes. Just as lambdas can be classes with syntax sugar, a lot of classes can be records of lambdas. For instance, in OCaml, I could fake the implementation of a Counter class which keeps track of a value that can be reset as follows:

Code: Select all
let new_counter reset =
  let m_x      = ref reset in
  let incr ()  = m_x := !m_x + 1 in
  let reset () = m_x := reset in
  let get ()   = !m_x in
  (incr, reset, get)

let counter_incr (incr,_,_)   = incr ()
let counter_reset (_,reset,_) = reset ()
let counter_get (_,_,get)     = get ()

The ugliness of tuples and those last three functions is usually buried under the syntax sugar of records (amusingly, a number of Lisps implement records as nothing more than a bunch of macros to write exactly this sort of code).

You can even have encapsulation based on which values you include in the returned tuple: here, I leave m_x as a "private" value, since it cannot be accessed outside the function.

Cool. C++ has tuples now, with std::tuple. But they are "disconnected" from ordinary struct types, which have basically the same form. Boost.Fusion is something that proposes to fix this, so you can declare an ordinary class and "map" it into an equivalent tuple, letting you mix the two forms freely.

VazScep wrote:
The only thing I think that is significantly missing from this picture is subtyping. And for this reason, Ocaml has its own object system, and supports subtyping in a manner which is definitely worth a look. The idea is that the subtyping is structural rather than nominal. In virtually all languages with subtyping, including C++, the type hierarchy is a directed graph which is explicitly coded by the programmer by associating each class with its named supertypes. In Ocaml, one class is a subtype of another just in case it supports all of the other's methods (this is sometimes called "duck-typing" by Python programmers, despite Python not having a type-system). Moreover, structural types can be anonymous using so-called "row-polymorphism."

Again, very cool. I suppose you have duck typing in C++ when you use template parameters. But there is no way to specify up front what characteristics a type must have to be acceptable, so it stays implicit in the code. When you pass the wrong type, the error happens inside the template definition, sometimes many layers deep, instead of at the point of instantiation.
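
To make that concrete, a toy sketch of that compile-time duck typing (the names are made up): describe() works for any type that happens to have the right members, but the requirement lives only in the template's body, so a bad argument blows up inside the template rather than at the call site:
Code: Select all
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Works for any T with a name() that is printable and a size() --
// the "interface" is purely implicit in the body.
template <typename T>
void describe(const T &x) {
    std::cout << x.name() << " has " << x.size() << " elements" << std::endl;
}

struct Box {
    std::string name() const { return "box"; }
    std::size_t size() const { return 3; }
};

int main() {
    describe(Box());                  // fine: Box quacks the right way
    // describe(std::vector<int>(2)); // error deep inside describe():
                                      // vector has size() but no name()
}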

VazScep wrote:
Suppose I define a function foo which takes an x and invokes its bar and baz methods as follows (# accesses instance methods):

Code: Select all
let foo x = x#bar + List.length x#baz


Then Ocaml will infer the type of foo as:

Code: Select all
val foo : < bar : int; baz : 'a list; .. > -> int = <fun>

This says that for any type 'a, foo is a function which takes some object supporting both a "bar : int" method and a "baz : 'a list" method, and returns an int. It can now be used with any instance of a class which happens to support these methods, even if there is no explicit subtyping involved.

That's a capability missing from the new C++ standard. One of the reasons it's infamous for its incomprehensible error messages is that, like I said, there is no way to specify restrictions on the types that can be passed as template parameters. So usually, when you pass a nonsensical type, the error happens in implementation-dependent code, sometimes many layers deep.
A solution to this, something called concepts, almost made it into C++11, but it was pulled at the last minute (figuratively) because an issue popped up and they did not want it to delay the whole thing. I think it will be up again in five years.

VazScep wrote:Okay. I hadn't actually noticed this, but yes, it'd have hoped there would be a mechanism which did the variable capture automatically.

On this matter, this is where I still feel C++ has got things right compared with nearly every other OO language: having a copy-semantics, a reference-semantics and a const keyword, so that reasoning about mutability isn't such a nightmare.

Yes. The new standard also defines move semantics, so you can declare a special kind of constructor, called a move constructor, which can "rob" another instance's internal data instead of copying it as a copy constructor would. That makes it pretty inexpensive to return complex data types, like std::vector, by value, where under the old standard there would be a superfluous copy of the allocated data followed by the original being deleted.
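
For instance, a bare-bones sketch of such a constructor, stealing a heap buffer instead of copying it (the Buffer class is made up for illustration):
Code: Select all
#include <cstddef>

class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}

    // Copy constructor: allocates a new array and copies every element (expensive).
    Buffer(const Buffer &other) : size_(other.size_), data_(new int[other.size_]) {
        for (std::size_t i = 0; i < size_; ++i) data_[i] = other.data_[i];
    }

    // Move constructor: steals the pointer and leaves the source empty but destructible.
    Buffer(Buffer &&other) : size_(other.size_), data_(other.data_) {
        other.size_ = 0;
        other.data_ = nullptr;
    }

    ~Buffer() { delete[] data_; }

private:
    std::size_t size_;
    int *data_;
};

// Returning by value is now cheap: the local is moved (or the move is elided).
Buffer make_buffer() {
    Buffer b(1000000);
    return b;
}

int main() {
    Buffer b = make_buffer();
    (void)b;
}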

VazScep wrote:
Here, I'm guessing that if you want a lambda to capture a variable that will be mutated from outside, and you want the lambda to see the update, you would generally use capture by reference?

Yes.
VazScep wrote:
Other times, you might want the lambda to have its own copy, and you would use mutable (for primitive types?) and use capture by value otherwise?

Well, capture by value/reference and mutable are orthogonal things. If you want the lambda to have its own copy AND want to be able to modify it, then you would capture by value and declare the lambda as mutable. Likewise, if you want to see external modifications to that variable, AND also be able to modify that external variable, you would capture by reference and declare the lambda mutable.

In short, if you want to see external updates, you capture by reference, otherwise by value.
If you want to be able to modify that captured reference / value, then you declare it mutable.
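
A tiny sketch of that first rule, just to make it concrete:
Code: Select all
#include <iostream>

int main() {
    int x = 1;

    auto by_ref = [&x] { return x * 10; }; // sees later changes to x
    auto by_val = [x]  { return x * 10; }; // snapshot taken right here

    x = 5;
    std::cout << by_ref() << ' ' << by_val() << std::endl; // 50 10
}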

VazScep wrote:Yes, my C++ knowledge is very rusty. I'd even forgotten entirely about initialisation lists, but suddenly remembered them when I saw the code!

Heh, same thing with Haskell here. I learned some of it about a year ago, but haven't used it since, so some of it has already started fading from my memory.

By the way, I have been playing with that example you sent me, but I am a bit confused about some things.

Firstly, what are called "axioms" in the code are actually axiom schemes, right?
So for example, I can use that code to do things like this:
Code: Select all
*Propositional> let t1 = inst theoremTerm (Theorem (Var axiom1 :=>: Var axiom3))
*Propositional> let t2 = inst theoremTerm (Theorem (Var axiom1))
*Propositional> mp t1 t2
[Theorem ((Not (Var "p") :=>: Not (Var "q")) :=>: (Var "q" :=>: Var "p"))]

Which is all fine and dandy, unless I am not using it the intended way, but I can't actually see how that can be used to represent a proof of a propositional logic theorem.
mizvekov
 
Posts: 314
Age: 35
Male

Brazil (br)

Re: Programming

#51  Postby epepke » Mar 11, 2012 8:47 pm

campermon wrote:If it can't be written in BASIC then it's not worth bothering with.

:coffee:


Real programmers develop for Turing machines. And none of this three, four, five color crap. Gimme some tapes, a 1 and a 0, and some patch cords.
epepke
 
Posts: 4080

Country: US
United States (us)

Re: Programming

#52  Postby campermon » Mar 11, 2012 8:56 pm

epepke wrote:
campermon wrote:If it can't be written in BASIC then it's not worth bothering with.

:coffee:


Real programmers develop for Turing machines. And none of this three, four, five color crap. Gimme some tapes, a 1 and a 0, and some patch cords.


:lol: :lol: :lol: :lol:
Scarlett and Ironclad wrote:Campermon,...a middle aged, middle class, Guardian reading, dad of four, knackered hippy, woolly jumper wearing wino and science teacher.
campermon
RS Donator
 
Posts: 17032
Age: 49
Male

United Kingdom (uk)

Re: Programming

#53  Postby mizvekov » Mar 11, 2012 9:00 pm

Real programmers use none of this patch cord crap. Give me a magnetized needle, some core memory, and a cup of tea please.
mizvekov
 
Posts: 314
Age: 35
Male

Brazil (br)

Re: Programming

#54  Postby think » Mar 11, 2012 10:56 pm

VazScep wrote:
think wrote:I can see that this might be useful for a neuroscientist who has to order vast amounts of data into a comprehensible mathematical model...I just don't see what application it has for philosophy.
edit: Actually, I'm really not interested in this.


Me neither; I was just wondering why this thread is under the philosophy section.
think
 
Posts: 628


Re: Programming

#55  Postby VazScep » Mar 11, 2012 11:05 pm

mizvekov wrote:Cool. c++ has tuples now, with std::tuples. But they are "disconnected" from ordinary struct types which have basically the same form. boost Fusion is something that proposes to fix this, so you can declare an ordinary class and "map" it into an equivalent tuple, so you can mix the two forms freely.
Ah, that's interesting. I'm not sure whether it's a good idea to allow records and tuples to intermix like that. It breaks the abstraction. But then, it makes sense if you seriously want to regard records as just glorified tuples.

Personally, I like it. It's very much in the Haskell way of doing things, where a record with n fields is just sugar for a data constructor with n arguments.

mizvekov wrote:Again, very cool. I suppose you have duck typing in C++ when you use template parameters. But there is no way to specify up front what characteristics a type must have to be acceptable, so it stays implicit in the code. When you pass the wrong type, the error happens inside the template definition, sometimes many layers deep, instead of at the point of instantiation.
It should be more powerful than C++ templates, much as simple polymorphism is (in some respects) more powerful than C++ templates. In particular, the binding is not resolved at compile-time. Thus, it is possible to load a native code library with a new set of class definitions at runtime, and have them work seamlessly with your existing duck-typed functions.

mizvekov wrote:That's a capability missing from the c++ new standard. One of the reasons it's infamous for it's incomprehensible error messages is because, like I said, there is no way to specify restrictions on the types that can be passed as template parameters. So usually when you pass a nonsensical type, the error happens in implementation dependent code, sometimes many layers deep.
Is it also the case that templates can make inconsistent assumptions about each other? So you define a template which instantiates another in a way that is always going to lead to a type-error, and yet this is only known when you actually go ahead and instantiate that first template?

I'd say the difference between C++ templates and true polymorphism is that one is a schematic quantifier while the other is a semantic quantifier. That's why, in C++, you can have an explosion in the number of classes and functions produced by templates, while things remain constant in Ocaml (at the expense of needing a hashtable to look up methods rather than a vtable).

I think there's a lot more to say about schematic versus semantic uses of quantifiers generally. God forbid it ever gets philosophical.

mizvekov wrote:Yes. The new standard also defines a move semantics, and so you can declare a special kind of constructor, called a move constructor, which can "rob" another instance's internal data instead of copying it as in a copy constructor. That way it can make it pretty inexpensive to return complex data types, like std::vector, by value, where in the old standard there would be a superfluous copy of allocated data followed by the original being deleted.
Ah. Where would that be used? Presumably, you'd need guarantees that the original data wasn't being held by someone else?

mizvekov wrote:Well capture by value/reference and mutable are orthogonal things. If you want the lambda to have it's own copy AND want to be able to modify it, then you would capture by value and declare the lambda as mutable. Likewise, if you want to see external modifications to that variable, AND also be able to modify that external variable, you would capture by reference and declare the lambda mutable.
I don't understand. Even if you don't declare the lambda mutable, what would happen if the lambda calls a destructive member function on the captured variable? Or is this more about assignment?

mizvekov wrote:By the way, I have been playing with that example you sent me, but I am a bit confused about some things.
Hey awesome! Glad you gave it a try!

mizvekov wrote:Firstly, what are called "axioms" in the code are actually axiom schemes, right?
Yes. Though actually, in this code, everything is a scheme, axioms and theorems included.

In textbook presentations of this system, only axioms are schemes. However, you quickly realise as you go through the subsequent proof theory that everything is treated schematically: axioms, theorems and even the proofs. Technically, the proof that "P => P" is a completely different object to the proof that "Q => Q". But if you stick to this in an implementation, you're going to just see a huge explosion in the number of proof objects that you need to achieve anything (pretty much the exact situation you have with templates).

The alternative is to provide a new inference rule "inst" which allows you to instantiate any axiom and any theorem, by replacing variables systematically with arbitrary terms.

I have, however, made things perhaps a little bit too abstract. Normally, you think of variables in propositional logic as just strings, but they can be drawn from any set. And so, in this implementation, I've allowed variables to be drawn from any type. Simple string based variables are things like

Code: Select all
Var "p"


but you could just as easily have numerical variables as with

Code: Select all
Var 10
.

mizvekov wrote:So for example, I can use that code to do things like this:
Code: Select all
*Propositional> let t1 = inst theoremTerm (Theorem (Var axiom1 :=>: Var axiom3))
*Propositional> let t2 = inst theoremTerm (Theorem (Var axiom1))
Ah, this is just a convenient trick of ghci. You've actually violated the encapsulation of the logical kernel. Save the code I provided you as a file "Propositional.hs". Then start a new file beginning with

Code: Select all
import Propositional


and load that. You'll now find that you cannot use "Theorem" as a data constructor.

The three axioms provided are schematic, and their variables are represented by strings. To instantiate the axioms, you supply a function which maps those strings to new terms. So you could, for instance, write

Code: Select all
> inst (\v -> if v == "p" then Var "q" else Var "r") axiom1
Theorem (Var "q" :=>: (Var "r" :=>: Var "q"))

For convenience, I have a set of utility functions here:
Code: Select all
module Utils where

import Propositional

import Data.List
import Data.Maybe (mapMaybe)

p = Var "p"
q = Var "q"
r = Var "r"

-- Derived syntax
p \/ q  = Not p :=>: q
p /\ q  = Not (p :=>: Not q)
p <=> q = (p :=>: q) /\ (q :=>: p)
t  = p :=>: p
f  = Not t
       
printTerm :: Term String -> String
printTerm (Var p)             = p
printTerm (Not (Var p))       = "~" ++ p
printTerm (Not (Not p))       = "~" ++ printTerm (Not p)
printTerm (Not p)             = "~(" ++ printTerm p ++ ")"
printTerm ((p :=>: q) :=>: r) = "(" ++ printTerm (p :=>: q) ++ ")"
                               ++ " ==> " ++ printTerm r
printTerm (p :=>: q)          = printTerm p ++ " ==> " ++ printTerm q

printThm thm = "|- " ++ printTerm (theoremTerm thm)

-- Retrieve the variable names in a term
vars :: Eq a => Term a -> [a]
vars (Var p)    = [p]
vars (Not p)    = vars p
vars (p :=>: q) = nub $ vars p ++ vars q
       
-- Instantiate variables in a term using a lookup
instTermM :: (Eq a) => (a -> Term b) -> [(a, Term b)] -> Term a -> Term b
instTermM dflt l = instTerm f
  where f p = maybe (dflt p) id (lookup p l)
       
-- Instantiate variables in a theorem using a lookup
instM :: (Eq a) => (a -> Term b) -> [(a, Term b)] -> Theorem a -> Theorem b
instM dflt l = inst f
  where f p = maybe (dflt p) id (lookup p l)
       
-- Instantiate variables in the order that they appear in a depth-first traversal
instL :: Eq a => [Term a] -> Theorem a -> Theorem a
instL l p = instM Var (zip (vars (theoremTerm p)) l) p
       
-- Append primes to ensure that the second argument does not appear in the first
avoid :: [String] -> String -> String
avoid ps p = p ++ replicate maxPrimes '\''
  where suffixes  = mapMaybe (stripPrefix p) ps
        maxPrimes = (maximum $ 0 : map primes suffixes)
        primes cs =
          case span (== '\'') cs of
            (ps,"") -> length ps + 1
            _       -> 0
           
-- Return an instantiation after matching two terms
match :: (Eq a, Eq b) => Term a -> Term b -> [[(a, Term b)]]
match = m []
  where m is (Var p) t =
          case lookup p is of
            Nothing -> [ (p, t) : is ]
            Just t' -> [ is | t == t' ]
        m is (Not t) (Not t')        = m is t t'
        m is (a :=>: c) (a' :=>: c') = concat [ m is' c c' | is' <- m is a a' ]
        m _ _ _                      = []


With these, you can now more simply write:

Code: Select all
> instL [q, p :=>: p] axiom1
Theorem (Var "q" :=>: ((Var "p" :=>: Var "p") :=>: Var "q"))


mizvekov wrote:Which is alright and dandy, unless I am using that thing not the intended way, but I can't actually see how that can be used to represent a proof of a propositional logic theorem.
It's not obvious, don't worry. Here's a simple proof (actually the first proof from Mendelson's Mathematical Logic), showing that P => P:

Code: Select all
pp = do
  step1 <- mp (instL [p,p:=>:p,p] axiom2) (instL [p,p:=>:p] axiom1)
  return $ mp step1 (instL [p,p] axiom1)
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#56  Postby VazScep » Mar 11, 2012 11:06 pm

mizvekov wrote:Real programmers use none of this patch cord crap. Give me a magnetized needle, some core memory, and a cup of tea please.
There's an emacs function for that.
Here we go again. First, we discover recursion.
VazScep
THREAD STARTER
 
Posts: 4590

United Kingdom (uk)

Re: Programming

#57  Postby epepke » Mar 12, 2012 1:30 am

mizvekov wrote:Real programmers use none of this patch cord crap. Give me a magnetized needle, some core memory, and a cup of tea please.


Hah! New-fangled magnetics! We had to do our computing with falling dominoes!

(Actually, I did. OR gates are easy, but AND and NOT gates are doable.)
epepke
 
Posts: 4080

Country: US
United States (us)

Re: Programming

#58  Postby mizvekov » Mar 12, 2012 5:29 am

VazScep wrote:Ah, that's interesting. I'm not sure whether it's a good idea to allow records and tuples to intermix like that. It breaks the abstraction. But then, it makes sense if you seriously want to regard records as just glorified tuples.

Well, but anyway, nothing can stop you from inheriting from a tuple and then providing your custom set/get methods.
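
Something like this quick sketch, say (Point is just an illustrative name):
Code: Select all
#include <iostream>
#include <tuple>

// A named wrapper over a tuple: the layout is the tuple's, but clients get
// readable accessors instead of std::get<0> / std::get<1>.
struct Point : std::tuple<int, int> {
    Point(int x, int y) : std::tuple<int, int>(x, y) {}
    int &x() { return std::get<0>(*this); }
    int &y() { return std::get<1>(*this); }
};

int main() {
    Point p(3, 4);
    p.x() += 1;
    std::cout << p.x() << ',' << p.y() << std::endl; // 4,4
}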

VazScep wrote:Personally, I like it. It's very much in the Haskell way of doing things, where a record with n fields is just sugar for a data constructor with n arguments.

Yes. I came to use Fusion because of another Boost library, called Spirit, which is basically a scanner/parser library.
With it, you can very seamlessly create a parser from a BNF-like grammar and have it generate an AST from the input with very little effort. It will normally build this AST from STL containers (or their Boost equivalents) and tuples. And here Fusion helps, because then you can use your own custom structs instead of tuples.

VazScep wrote:
Is it also the case that templates can make inconsistent assumptions about each other? So you define a template which instantiates another in a way that is always going to lead to a type-error, and yet this is only known when you actually go ahead and instantiate that first template?

Yes :(

VazScep wrote:
I'd say the difference between C++ templates and true polymorphism is that one is a schematic quantifier while the other is semantic quantifier. That's why, in C++, you can have an explosion in the number of classes and functions produced by templates, while things remain constant in Ocaml (at the expense of needing a hashtable to look up methods rather than a vtable).

Yeah I agree.

VazScep wrote:Ah. Where would that be used? Presumably, you'd need guarantees that the original data wasn't being held by someone else?

Well, it does get used implicitly in cases where this has no user-visible effect, such as an assignment that would obliterate the original value anyway.
Or it can be used explicitly with std::move. In that case, you are exposed to the original 'moved-from' object.
There isn't much in the way of guarantees about what state this object will be left in, except that it must be in a destructible state, i.e. the destructor will not segfault when it is called. Everything else is up to whoever implements the class to define. But the STL classes are guaranteed by the standard to be left in valid states, so for example a moved-from vector will in practice be left empty.
So, as it is up to whoever implements it to define, you as a user at least have the guarantee that the 'moved-to' object will be in the intended state. You can assume that whoever wrote the class you are using either did not provide a move constructor or provided one that works.
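
For instance, a small sketch with std::vector:
Code: Select all
#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::vector<int> src(3, 42);
    std::vector<int> dst = std::move(src); // steals src's buffer, no element copies

    // src is valid but unspecified in general; for vector it ends up empty in
    // practice, so it can be reused or simply left to its destructor.
    std::cout << dst.size() << ' ' << src.size() << std::endl; // typically "3 0"
}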

VazScep wrote:I don't understand. Even if you don't declare the lambda mutable, what would happen if the lambda calls a destructive member function on the captured variable? Or is this more about assignment?

Well, that depends on what you mean by "destructive member".
You can easily see what happens in the lambda case if you think about its equivalent Functor.
So for example if you have this lambda:
Code: Select all
[b]() { b.destructive_method(); }

Its equivalent Functor will be something like:
Code: Select all
struct Lambda {
    const B b;
    Lambda(B &b_) : b(b_) {}
    void operator()() { b.destructive_method(); }
};

Whereas if you had:
Code: Select all
[b]() mutable { b.destructive_method(); }

The equivalent one is:
Code: Select all
struct Lambda {
    B b;
    Lambda(B &b_) : b(b_) {}
    void operator()() { b.destructive_method(); }
};

Notice how the constness of b differs between these two.

Now, if you had B defined as something like this:
Code: Select all
struct B {
      int val;
      void destructive_method() { val = 0; }
};


The first lambda example and its equivalent Functor would both fail to compile, because destructive_method is not a "const" method.
What happens here is basically that methods in a class take a hidden parameter that is passed implicitly, called the 'this' parameter.
It is as if you had the below in C:
Code: Select all
void destructive_method(struct B *this) { this->val = 0; }

And when you try to call it by passing a const pointer as the 'this' parameter, it fails to compile because the parameter's type cannot be implicitly converted.
You can fix it by doing the below instead:
Code: Select all
void destructive_method() const { val = 0; }

which is equivalent to:
Code: Select all
void destructive_method(const struct B *this) { this->val = 0; }

And then it will continue compilation, but fail again on "val = 0", because it will try to do "this->val = 0;" with 'this' being a const pointer, which is not allowed.
But the question then becomes: can this method be destructive even if its pointer to its own instance is const?
The answer is absofuckinglutely yes. That method still has access to global data, which can be (and usually is) declared non-const, and it can still call OS APIs and such.
And it can even do something evil like:
Code: Select all
void destructive_method() const { const_cast<B*>(this)->val = 0; }

So anyway, it is up to whoever writes the class to decide what happens, and if that person wants to be a raving lunatic, the type system will not stop him.
But in general, no sane library will do something like this.

VazScep wrote:Hey awesome! Glad you gave it a try!

Of course I did!

VazScep wrote:
Firstly, what are called "axioms" in the code are actually axiom schemes, right?
Yes. Though actually, in this code, everything is a scheme, axioms and theorems included.

Ah ok.
VazScep wrote:
In textbook presentations of this system, only axioms are schemes. However, you quickly realise as you go through the subsequent proof theory that everything is treated schematically: axioms, theorems and even the proofs. Technically, the proof that "P => P" is a completely different object to the proof that "Q => Q". But if you stick to this in an implementation, you're going to just see a huge explosion in the number of proof objects that you need to achieve anything (pretty much the exact situation you have with templates).

Yeah I see it now :grin:

VazScep wrote:
The alternative is to provide a new inference rule "inst" which allows you to instantiate any axiom and any theorem, by replacing variables systematically with arbitrary terms.

Yeah, that's what confused me. I did not realize at the time that the parameter f was supposed to be a function which maps the original "p", "q" or "r" to your own designators. Now it's starting to become clear how this can be used.

VazScep wrote:
I have, however, made things perhaps a little bit too abstract. Normally, you think of variables in propositional logic as just strings, but they can be drawn from any set. And so, in this implementation, I've allowed variables to be drawn from any type. Simple string based variables are things like

Code: Select all
Var "p"


but you could just as easily have numerical variables as with

Code: Select all
Var 10
.

Yeah, I see, but that wasn't really a problem; the big conceptual barrier, I think, was the 'f' parameter, like I said.

VazScep wrote:Ah, this is just a convenient trick of ghci. You've actually violated the encapsulation of the logical kernel. Save the code I provided you as a file "Propositional.hs". Then start a new file beginning with

Code: Select all
import Propositional


and load that. You'll now find that you cannot use "Theorem" as a data constructor.

Oh yes :doh:

VazScep wrote:
The three axioms provided are schematic, and their variables are represented by strings. To instantiate the axioms, you supply a function which maps those strings to new terms. So you could, for instance, write

Code: Select all
> inst (\v -> if v == "p" then Var "q" else Var "r") axiom1
Theorem (Var "q" :=>: (Var "r" :=>: Var "q"))

For convenience, I have a set of utility functions here:
Code: Select all
module Utils where

import Propositional

import Data.List
import Data.Maybe (mapMaybe)

p = Var "p"
q = Var "q"
r = Var "r"

-- Derived syntax
p \/ q  = Not p :=>: q
p /\ q  = Not (p :=>: Not q)
p <=> q = (p :=>: q) /\ (q :=>: p)
t  = p :=>: p
f  = Not t
       
printTerm :: Term String -> String
printTerm (Var p)             = p
printTerm (Not (Var p))       = "~" ++ p
printTerm (Not (Not p))       = "~" ++ printTerm (Not p)
printTerm (Not p)             = "~(" ++ printTerm p ++ ")"
printTerm ((p :=>: q) :=>: r) = "(" ++ printTerm (p :=>: q) ++ ")"
                               ++ " ==> " ++ printTerm r
printTerm (p :=>: q)          = printTerm p ++ " ==> " ++ printTerm q

printThm thm = "|- " ++ printTerm (theoremTerm thm)

-- Retrieve the variable names in a term
vars :: Eq a => Term a -> [a]
vars (Var p)    = [p]
vars (Not p)    = vars p
vars (p :=>: q) = nub $ vars p ++ vars q
       
-- Instantiate variables in a term using a lookup
instTermM :: (Eq a) => (a -> Term b) -> [(a, Term b)] -> Term a -> Term b
instTermM dflt l = instTerm f
  where f p = maybe (dflt p) id (lookup p l)
       
-- Instantiate variables in a theorem using a lookup
instM :: (Eq a) => (a -> Term b) -> [(a, Term b)] -> Theorem a -> Theorem b
instM dflt l = inst f
  where f p = maybe (dflt p) id (lookup p l)
       
-- Instantiate variables in the order that they appear in a depth-first traversal
instL :: Eq a => [Term a] -> Theorem a -> Theorem a
instL l p = instM Var (zip (vars (theoremTerm p)) l) p
       
-- Append primes to ensure that the second argument does not appear in the first
avoid :: [String] -> String -> String
avoid ps p = p ++ replicate maxPrimes '\''
  where suffixes  = mapMaybe (stripPrefix p) ps
        maxPrimes = (maximum $ 0 : map primes suffixes)
        primes cs =
          case span (== '\'') cs of
            (ps,"") -> length ps + 1
            _       -> 0
           
-- Return an instantiation after matching two terms
match :: (Eq a, Eq b) => Term a -> Term b -> [[(a, Term b)]]
match = m []
  where m is (Var p) t =
          case lookup p is of
            Nothing -> [ (p, t) : is ]
            Just t' -> [ is | t == t' ]
        m is (Not t) (Not t')        = m is t t'
        m is (a :=>: c) (a' :=>: c') = concat [ m is' c c' | is' <- m is a a' ]
        m _ _ _                      = []


With these, you can now more simply write:

Code: Select all
> instL [q, p :=>: p] axiom1
Theorem (Var "q" :=>: ((Var "p" :=>: Var "p") :=>: Var "q"))


Okay great, I will try it.

VazScep wrote:It's not obvious, don't worry. Here's a simple proof (actually the first proof from Mendelson's Mathematical Logic), showing that P => P:

Code: Select all
pp = do
  step1 <- mp (instL [p,p:=>:p,p] axiom2) (instL [p,p:=>:p] axiom1)
  return $ mp step1 (instL [p,p] axiom1)

Okay, great, much better. Anyway, thank you again; I will try playing with it a little.

VazScep wrote:
mizvekov wrote:Real programmers use none of this patch cord crap. Give me a magnetized needle, some core memory, and a cup of tea please.
There's an emacs function for that.

Actually, there is an emacs function which can implement any function you can dream of. The only problem with it though is that you have to specify what you want it to do in LISP :crazy:
epepke wrote:Hah! New-fangled magnetics! We had to do our computing with falling dominoes!

(Actually, I did. OR gates are easy, but AND and NOT gates are doable.)

Hah, well, if you can implement NOT and OR, you can implement AND as ~(~a | ~b).
But I imagine the biggest problem with such a machine is getting it synchronized.
Or can you implement a 'clock' for it? :crazy:
mizvekov
 
Posts: 314
Age: 35
Male

Brazil (br)

Re: Programming

#59  Postby advaitya » Mar 12, 2012 7:56 am

stalidon wrote:
advaitya wrote:I fail to see the philosophy in this thread? Ought to be shoved in some other appropriate forum.


Just wait, the philosophy is coming. From Wiki: "Lambda calculus has applications in many different areas in mathematics, philosophy, and computer science."


The same way language is necessary for philosophy. But we discuss language elsewhere. The Linguistics forum, for example.

A more pointless thread I have not seen. Surprised it hasn't been rubbished off to some appropriate forum yet.
advaitya
 
Posts: 323
Age: 36
Male

Country: India
India (in)

Re: Programming

#60  Postby Cito di Pense » Mar 12, 2012 1:01 pm

mizvekov wrote:
VazScep wrote:Interesting. I thought that GTK was the C library and Qt very much the modern C++ library. I suppose they've been around long enough to both be "outdated" in their own ways.

Yes, it is pretty outdated in terms of the c++ feature set it uses. And again, Qt needs a precompiler on top of it all, so you have to keep it in mind that it is even cheating :)

VazScep wrote:
Though since using languages outside the C family, I've come to appreciate technologies like GTK and OpenGL, which use such simple C code that they become almost trivial to bind to from other languages.

Yes, some languages do indeed make it very straightforward to bind to C, something that can't be done well with c++ due to the complex and compiler dependent abi. Don't think that is going to change in the next 20 years.


I have hope again! Qt makes me feel dirty. Somebody owns it, which is the way it is for so much C++. Nobody pwns it.
Хлопнут без некролога. -- Серге́й Па́влович Королёв

Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.
Cito di Pense
 
Name: Ivar Poäng
Posts: 27752
Age: 22
Male

Country: The Heartland
Mongolia (mn)
