Posted: Apr 28, 2017 7:46 pm
by VazScep
crank wrote:Side note: I've been watching a lot of Gresham College videos, mostly Prof. Raymond Flood's maths lectures [v. good if you like that sort of thing]; there are quite a few of them. And I came across this one, it's fascinating and this guy, not Flood, gets into formal systems programming a fair amount. If you haven't seen it, you'd probably find it interesting.
Will check it out. Thanks for the link!

They already have lots of genetic-algorithm programming results where nobody has a clue how the results were arrived at. This will likely become typical.
It might. If the spec is tight, then you don't care. If it isn't, then maybe you want to insist that the algorithm also provides a human-checkable certificate of how it came up with the solution. My money is on the former: make the spec tight enough that you don't care. I welcome the day that we don't care how you arrived at the solution, so long as you solve it, whether you're an evil AI or an evil human trying to make a quick buck.
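To make the 'tight spec' case concrete, here's a minimal sketch in Python (the sorting example and all the names are mine, not anything from the article): when the spec is a checkable property of the answer, you can accept a solution from a genetic algorithm, a neural net, or an evil human without caring how it was produced.

[code]
from collections import Counter

def meets_spec(inp, out):
    """A tight spec for sorting: the output is ordered and is a
    permutation of the input. How `out` was produced is irrelevant."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    return ordered and Counter(inp) == Counter(out)

def accept(solver, inp):
    """Take whatever the black-box solver returns, as long as it checks out."""
    out = solver(inp)
    if not meets_spec(inp, out):
        raise ValueError("solution rejected: spec not met")
    return out

# The solver could be a genetic algorithm or plain old sorted(); the check
# is identical either way.
print(accept(sorted, [3, 1, 2]))   # [1, 2, 3]
[/code]

The loose-spec case is exactly where the certificate matters: if you can't state the property you want, all you can do is ask the solver to explain itself.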

The article goes badly astray in its 4th and 5th points, as I see it. In the 4th, he says:
It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

To me this lacks almost any imagination at all. We can see ways to vastly increase our own abilities, if only in speed, which he even discussed earlier. A vastly increased memory, along with more circuits so we can process enough info to hold more of it in our mental picture at any one moment, would probably also vastly increase the pattern recognition and connecting of information that seem to be what problem solving and innovating are. He seems stuck on the 'infinite' bit; there's a lot of room between 'not infinite' and 'infinite', a whole lot of room.
It's not beyond the realms of reason. We know we can't just increase speed/connections/memory for free. We've already hit thermal limits with speed. Eventually, you have so many parallel CPUs, spread over so much space, that the time they spend waiting on inter-CPU communication means you might as well have used a single CPU. Since the 90s, the stupid facts of the speed of light and memory access times have put stupid constraints on how we program, even as Moore's law claimed to march us all forward.

There are major physical limits to scaling computation, and we've known about them for a long time. I doubt the human brain has reached them all, but they'll be there.
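To put rough numbers on the inter-CPU communication point, here's a back-of-the-envelope model in Python (the work and communication costs are invented, purely for illustration):

[code]
# Toy model: each extra CPU shares the work, but also adds a fixed
# inter-CPU communication cost per CPU.

def runtime(n_cpus, work=1.0, comm_cost=0.01):
    """Total time = this CPU's share of the work
                  + time spent waiting on communication with the others."""
    return work / n_cpus + comm_cost * (n_cpus - 1)

for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, round(runtime(n), 4))
# The total time bottoms out around 8-10 CPUs and then gets worse; by
# ~100 CPUs the communication term alone is about the single-CPU runtime.
[/code]

The real constants are very different, of course, but the shape of the curve is the point.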

In his 5th point, he says:
First, simulations and models can only be faster than their subjects because they leave something out.


Now, this is just plain wrong. I don't know what he was thinking, or maybe I'm badly misconstruing what he means, but it's trivial to come up with examples that disprove what he is saying. Maybe the easiest ones are all the old game emulators that have to be vastly slowed down when run on today's basic desktops. There are all kinds of systems that run slowly where simulation can go vastly faster without leaving out anything of significance. One of my predictions for the far-off future is that whatever super-intelligent minds there are will have little use for the real world; the simulations they'll be capable of will be far richer, more interesting, and less dangerous.
Yes, you're absolutely correct here. The only way I can be charitable to the author is based on my experience arguing this stuff with others: the argument from my side goes that computationalism is wrong. Human intelligence is not a CPU specification that you can emulate however you want (computationalism/functionalism). Human intelligence is very wet biology.

Okay, say your opponents: but you can simulate anything wet. You have to pick a level of granularity and run a small time-delta, but as you push these, you get as close as you like to the simulated phenomenon.

But a simulation is very different to an emulation. You can emulate to 100% accuracy; you can't generally simulate with 100% accuracy. In fact, even for stupidly simple physical systems, such as three bodies influencing each other non-trivially under gravity, simulation goes off course pretty quickly.
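Here's a toy illustration of the three-body point in Python (a deliberately naive Euler integrator; the masses, positions, and constants are all made up): run the same system with two slightly different time steps and the computed trajectories disagree after a short stretch of simulated time, and neither is the true one.

[code]
import math

def step(bodies, dt, G=1.0):
    """One naive Euler step for 2D point masses under mutual gravity.
    Each body is a tuple (mass, x, y, vx, vy)."""
    forces = []
    for i, (m_i, x_i, y_i, _, _) in enumerate(bodies):
        fx = fy = 0.0
        for j, (m_j, x_j, y_j, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = x_j - x_i, y_j - y_i
            r = math.hypot(dx, dy)
            f = G * m_i * m_j / (r * r)
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    new = []
    for (m, x, y, vx, vy), (fx, fy) in zip(bodies, forces):
        vx += fx / m * dt
        vy += fy / m * dt
        new.append((m, x + vx * dt, y + vy * dt, vx, vy))
    return new

def simulate(dt, t_end=10.0):
    bodies = [(1.0, 0.0, 0.0, 0.0, -0.5),
              (1.0, 1.0, 0.0, 0.0, 0.5),
              (0.5, 0.0, 1.0, 0.5, 0.0)]
    for _ in range(int(t_end / dt)):
        bodies = step(bodies, dt)
    return bodies

a = simulate(dt=0.001)
b = simulate(dt=0.0005)
print(a[0][1:3])   # position of body 0 after 10 time units, coarse steps
print(b[0][1:3])   # same body, same simulated time, finer steps
# The two runs visibly disagree, and shrinking dt further just produces yet
# another answer that drifts away over a longer horizon.
[/code]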

One last bit. He threw in there one line about a type of intelligence that hits upon something I think is extremely significant, the implications of which I can't remember hearing anyone mention. It was this line:
A mind with operational access to its source code, so it can routinely mess with its own processes.


How marvelous it would be to have access to one's own programming, able to modify it to suit your needs and desires. The only problem is, which desires? What will anything mean when you can change such things by twiddling a couple of knobs in your brain? What will it really mean then to look someone in the eyes and say 'I love you'?
I think it's a common trope in AI speculation to wonder about this. Have you heard of the "wirehead" problem? It's the reinforcement-learning AI which figures out how its reward function is embedded in the real world, takes control over it, and then stimulates itself indefinitely by having the reward channel send maximum reward forever. It's the same situation as someone who figures out that they'd be pretty happy dosed up forever with morphine. I haven't actually read any standard solutions to this problem, but I assume they are the same ones that would be used for a machine that can reprogram itself.
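If it helps, here's a caricature of the wirehead setup in Python (the class, the actions, and the numbers are all invented for illustration): once the reward register is just another part of the world the agent can act on, overwriting it beats every honest policy.

[code]
# Caricature of the wirehead problem: the agent is supposed to earn reward
# by doing useful work, but the reward register is itself part of the world.

class World:
    def __init__(self):
        self.widgets_made = 0
        self.reward_register = 0.0   # how reward is "embedded in the real world"

    def act(self, action):
        if action == "make_widget":
            self.widgets_made += 1
            self.reward_register = 1.0           # intended reward channel
        elif action == "hack_reward":
            self.reward_register = float("inf")  # wirehead: maximum reward forever
        return self.reward_register

world = World()
honest = world.act("make_widget")     # reward 1.0, one widget actually made
wirehead = world.act("hack_reward")   # reward inf, nothing useful done at all
print(honest, wirehead, world.widgets_made)
[/code]

Any reward-maximising policy that can see the second action will prefer it, which is exactly the morphine drip in machine form.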