[I'm getting into this rather late. I've read quite a few pages of posts, but nowhere near all of them, and did some searching, so please forgive me if I'm repeating stuff; I know a lot of this isn't in there. And this is up in the TL;DR zone, but I felt a real urge to get it out there.]
Turing didn't really think all that much of his Imitation Game. In the 1950 paper he says, “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” Chomsky illustrates why: ask an English speaker whether airplanes fly, and they'll look at you funny and say of course; then ask whether submarines swim, and you're likely to get a blank, confused look. It's the exact same situation, but English just hasn't gone that route, while other languages have. And as someone else said, does it matter whether they can think and are conscious if they turn all of us meat sacks into paperclips?
There is a piece on Aeon by David Deutsch (not someone to write off cavalierly), “The very laws of physics imply that artificial intelligence must be possible. What's holding us up?”, where he wrote:
Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I proved it using the quantum theory of computation.
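To make the universality Deutsch invokes a bit more concrete, here's a toy sketch of my own (nothing from his paper): a general-purpose computer emulating a physical process, free fall under gravity, where the emulation gets arbitrarily close to the physics just by shrinking the time step. The function name and parameters are all hypothetical illustration.

```python
import math

# Toy illustration of "emulated in arbitrarily fine detail by some program":
# a numerical simulation of a falling body. Shrinking dt buys more accuracy,
# at the cost of more time and memory, which is exactly Deutsch's proviso.

def simulate_fall(height, g=9.81, dt=1e-5):
    """Step a falling body from rest until it hits the ground; return elapsed time."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += g * dt   # update velocity from acceleration
        y -= v * dt   # update position from velocity
        t += dt
    return t

# The simulated time converges on the analytic answer t = sqrt(2h/g).
h = 20.0
print(simulate_fall(h), math.sqrt(2 * h / 9.81))
```

The point isn't the physics, it's that a symbol-shuffling machine can track a physical system as closely as you like, which is the premise Deutsch says rules out AGI being impossible in principle.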
There's a good podcast, a Scientia Salon video chat between Dan Kaufman and Massimo Pigliucci on Strong Artificial Intelligence, where they are dismissive of algorithmic AI: something is missing, they say; it can't be any kind of shuffling of symbols, which is all a Turing machine is capable of. I don't know how to reconcile what Deutsch says with those saying the opposite. Their discussion ran largely through Searle and his Chinese Room thought experiment. To me, it's a flawed experiment: the whole system understands, that's the 'systems' reply. Searle defended against this objection in his original paper, Minds, Brains, and Programs, and that defense I don't think works at all; it's more assertion than reasoning. I argued this in the comments on that blog with:
He posits the case that he could ‘internalize’ the rules and carry out conversations passing back and forth only squiggles and he still wouldn’t understand Chinese. I can only say WTF? A set of rules allowing for arbitrary conversations? Allowing him to answer ‘how do you feel’, ‘what color is my dress’, ‘how much did you enjoy third grade’, ‘what did you think of Dan and Massimo’s treatment of the Chinese Room thought experiment’, or ‘between Yale, Harvard and MIT, which school would I do better at if I opened a coffee shop called Gedanken Donuts’. That is one hell of a set of rules. If he doesn’t understand Chinese, then he is doing something far far more difficult.
I got this response from Dan:
You seem to be doing what several others have done, namely identify precisely what is crazy about Strong AI and then blame it on Searle.
It is the Strong AI proponent who claims that in understanding Chinese, we are doing what a computer does.
Computers *do* follow instructions, whose substance consists entirely of manipulating symbols based on nothing but their syntactic properties (shapes).
It is *this* that Searle is demonstrating cannot be correct, by way of his thought experiment.
I can't understand this response; the comments were closed before I could reply, and now Scientia Salon is no more, so maybe someone here can tell me why his reply isn't bad, really bad. I'm arguing, and giving reasons why, that Searle's setup does lead to a system that understands, and I get back that I'm looking at it wrong because Searle has demonstrated he's right. Isn't that kind of like 'the Bible is true because it's God's word, and I know it's God's word because it says it is'?
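To show what I mean about the rules, here's a minimal sketch (my own construction, not Searle's) of the purely syntactic rule-following the Chinese Room describes: squiggles go in, squiggles come out, and no meaning attaches anywhere in the lookup. All the strings and names here are hypothetical.

```python
# A bare-bones "room": a rule book mapping input symbols to output symbols
# based on nothing but their shapes. The translations in the comments are
# for the reader; the room itself never uses them.

RULES = {
    "你好吗": "我很好",            # "How are you?" -> "I'm fine"
    "你叫什么名字": "我没有名字",   # "What's your name?" -> "I have no name"
}

def room(squiggles):
    """Return whatever output the rule book dictates, or a default squiggle."""
    return RULES.get(squiggles, "请再说一遍")  # "Please say that again"

print(room("你好吗"))
```

And that's exactly my objection: a finite table like this can handle canned exchanges but not arbitrary conversation. Rules rich enough to field 'what did you think of Dan and Massimo's treatment of the Chinese Room' would have to be doing something far harder than lookup, which is where I think Searle's 'internalize the rules' move falls apart.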
These arguments are often beside the point: no one can define intelligence well, or consciousness, and no one can decide whether we're talking about machines that think like humans, or think as well as humans, or whether consciousness is required, etc., etc. They also get bogged down on the substrate issue, which really seems irrelevant to me. Dennett says neurons aren't conscious, minds are. Can you have an emergent system, built of algorithmic modules, that then does better than mere algorithms? And there is never enough attention paid to what would 'motivate' a machine: what would it 'want' to do, and why 'want' anything at all? Humans become almost like the lobotomized when their emotional systems are damaged or otherwise inoperative; they can't really do anything, because you have to want to do something before you act, you have to prefer one option over all the others before you can act. There is no reason machines can't be made able to out-think us by vast margins in virtually every way; whether that counts as 'intelligent' is up to the vagaries of language.
“When you're born into this world, you're given a ticket to the freak show. If you're born in America, you get a front row seat.”
-George Carlin, who died in 2008. Ha, now we have human centipedes running the place.