Posted: Mar 22, 2012 10:44 am
by VazScep
mizvekov wrote:
VazScep wrote:@[color=#CC0000][b]mizvekov[/b][/color] Do you have any thoughts about "concepts" in C++?

Well I mentioned them briefly a few posts back. :grin:
Whoops! Sorry.

Now, their purpose is to specify constraints on the types templates can accept. Currently, those constraints are implicit in the code itself, and a mismatch only shows up as a compilation error at the offending piece of code. This leads to the infamously long (to put it mildly) compiler error messages you get when you pass a nonsensical type to a template. :yuk:

Now concepts allow you to specify the constraints on a type beforehand: what operations it supports, which functions you can call on it, which members it has, and so on. So the next time you try to instantiate a template with an incompatible type, the error will be much more comprehensible, more specific, and hopefully shorter.
Right. Sounds cool to me.

It seems related to type-classes, except that concepts can either be instantiated explicitly using concept maps or left implicit, which is a nice property.
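To make the analogy concrete, here is roughly how I remember the proposal's syntax looking: a concept plus an explicit concept map for a made-up type (I may well be misremembering the details):
Code: Select all
// Sketch from memory of the old C++0x concepts proposal; details may be off.
// A plain "concept" needs an explicit concept_map (like a type-class instance);
// an "auto concept" would let types match implicitly, purely structurally.
concept LessThanComparable<typename T> {
    bool operator<(const T&, const T&);
}

// Explicit instantiation of the concept for a made-up type MyWidget:
concept_map LessThanComparable<MyWidget> {
    bool operator<(const MyWidget& a, const MyWidget& b) {
        return a.id() < b.id();
    }
}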

There are interesting tradeoffs in the C++ approach to this stuff. Can concept requirements be inferred? So, if I write (I'm rusty, so I'm not sure if this is the right syntax):
Code: Select all
template<typename T> requires FooConcept<T>
T foo();

template<typename T>
T bar() {
    return foo<T>();
}
Can the FooConcept requirement on bar be inferred?

Anyway, the fact that C++ templates are required to be fully expanded gives you quite a bit of expressive power over Haskell here. Do you have any thoughts on this? Full template expansion, in the worst case, is going to lead to code bloat and longer compilation times, but in return the compiler can make much more aggressive optimisations.

The tradeoff is still debated in the functional world. OCaml doesn't fully expand calls to polymorphic functions at compile time, because it wants to allow separate compilation of modules. However, the Standard ML compiler MLton does fully expand them (it's called "monomorphisation"), and they seem to think that the increased build times and code size aren't too big a deal. Then there's F#, which has separate compilation at the level of bytecode, and then monomorphises during JIT compilation.
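To illustrate what full expansion means in C++ terms (just a sketch of the idea, not any compiler's actual output): each instantiation below is compiled as if it were a separate, fully specialised function, which is essentially what MLton's monomorphisation does to SML polymorphism.
Code: Select all
// Each distinct instantiation gets its own specialised code.
template<typename T>
T twice(T x) { return x + x; }

int    a = twice(21);    // instantiates and compiles twice<int>
double b = twice(1.5);   // instantiates and compiles twice<double> separately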

Now this also opens up some optimization opportunities, since you can use concepts to specify specializations for templates.
For example, you could define a specialized implementation of a generic class that takes advantage of some special feature of one of the possible type parameters, and also a more general implementation that works with any type.
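A rough illustration of that kind of concept-based specialization, loosely in the old proposal's style (my_advance and the exact syntax are just for illustration): the overload constrained by the more refined concept is picked whenever the type supports it.
Code: Select all
// Generic version: works for any input iterator, stepping one at a time.
template<InputIterator Iter>
void my_advance(Iter& it, int n) {
    while (n-- > 0) ++it;
}

// Specialised version: chosen for random-access iterators,
// where jumping n positions is a constant-time operation.
template<RandomAccessIterator Iter>
void my_advance(Iter& it, int n) {
    it += n;
}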

Now one feature that I find *very* cool about all of this is that the concept proposal also includes something I didn't mention before: axioms.
You can use them to further specify semantic properties of concepts. For example, you could use an axiom to specify that some operation is commutative:
Code: Select all
axiom Commutativity(Op op, T x, T y)  {  op(x, y) == op(y, x);  }

And this allows for kinds of optimizations that the standard simply did not permit before, under any circumstances.
That's cool. I take it that the compiler doesn't try to prove this sort of thing? It's just there as a context-sensitive rewrite rule?

This ability to state formal properties of one's code is, of course, something I absolutely love. You should be able to carve out whole hierarchies of algebras based on these concepts. And maybe we can start making these axioms more expressive and shoving in some formal verification and theorem proving :whistle:
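Something like this is what I have in mind, though I'm making the syntax up as I go, so treat it purely as a sketch:
Code: Select all
// Entirely speculative: an algebraic hierarchy expressed as concepts
// with axioms, loosely in the proposal's style.
concept Semigroup<typename Op, typename T> {
    T operator()(Op, T, T);
    axiom Associativity(Op op, T x, T y, T z) {
        op(op(x, y), z) == op(x, op(y, z));
    }
}

concept Monoid<typename Op, typename T> : Semigroup<Op, T> {
    T identity(Op);
    axiom Identity(Op op, T x) {
        op(x, identity(op)) == x;
        op(identity(op), x) == x;
    }
}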