DavidMcC wrote:Look Graham, that 3D explanation is correct, but none of the examples I have found on the internet are consistent with it! Yet you are defending them, and attacking me for pointing out the obvious fault that the rear wall in the room MUST appear as a trapezoid shape for the illusion to work correctly. All the actual images I found on the internet have a rectangular shape as the rear wall, and larger figures on the right, that are supposed to only look larger. What do you not understand about that?
GrahamH wrote:DavidMcC wrote:Look Graham, that 3D explanation is correct, but none of the examples I have found on the internet are consistent with it! Yet you are defending them, and attacking me for pointing out the obvious fault that the rear wall in the room MUST appear as a trapezoid shape for the illusion to work correctly. All the actual images I found on the internet have a rectangular shape as the rear wall, and larger figures on the right, that are supposed to only look larger. What do you not understand about that?
The back wall is a trapezoid but looks rectangular (from the viewing point). It hides normal depth cues so that the nearer person doesn't look closer and we see them as bigger rather than closer.
Take a clue from the fact that the whole Internet says you are wrong.
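A minimal numerical sketch of the projection arithmetic behind this explanation (Python, with purely illustrative dimensions, not taken from any actual Ames room): the sloping back wall's far edge is taller in exact proportion to its extra distance from the peephole, so both edges project to the same image height and the wall looks rectangular, while a person of fixed height projects very differently at the two corners.

def projected_height(height_m, distance_m, focal_length=1.0):
    # Pinhole projection: image height = focal_length * physical height / distance.
    return focal_length * height_m / distance_m

# Back wall edges (assumed numbers): the far corner is twice as distant and twice as tall.
far_edge = projected_height(6.0, 6.0)     # 1.0
near_edge = projected_height(3.0, 3.0)    # 1.0 -> the two edges look the same height

# A 1.8 m person standing at each corner of that same wall:
person_far = projected_height(1.8, 6.0)   # 0.3
person_near = projected_height(1.8, 3.0)  # 0.6 -> looks twice the size, not twice as close

print(far_edge, near_edge, person_far, person_near)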
DavidMcC wrote:... I doubt that anything useful is going to come of this thread, now that it has been seriously derailed by Graham.
DavidMcC wrote:GrahamH wrote:DavidMcC wrote:Look Graham, that 3D explanation is correct, but none of the examples I have found on the internet are consistent with it! Yet you are defending them, and attacking me for pointing out the obvious fault that the rear wall in the room MUST appear as a trapezoid shape for the illusion to work correctly. All the actual images I found on the internet have a rectangular shape as the rear wall, and larger figures on the right, that are supposed to only look larger. What do you not understand about that?
The back wall is a trapezoid but looks rectangular (from the viewing point). It hides normal depth cues so that the nearer person doesn't look closer and we see them as bigger rather than closer.
Plain wrong. The back wall IS rectangular in the images available. It should not be, as already explained.
Take a clue from the fact that the whole Internet says you are wrong.
Going with the herd, are we?
Try thinking for yourself, for once; it's good for you.
Templeton wrote:Enjoying this discussion, though disillusioned with the stubborn adherence to consciousness being the result of cognition; it just seems that there are so many dead ends. I'm more inclined to adhere to a quantum interpretation of consciousness; it's more fun. Sure there are objections, but such is the case when the tools by which we measure reality are inadequate to measure the observation. The science should catch up to the imagination.
zoon wrote:GrahamH #830 wrote:The point I made about asocial species vs social evolution of HP consciousness was not about conscious attribution of conscious/non-conscious classes to objects. If a species that has not been subject to social selection pressures in its evolution has experiences, feels pain in its tentacle, experiences thoughts about coconut shells (and we have no idea if that is the case), then the HP consciousness in us probably did not evolve due to social pressures to model others as minds. Modelling others as minds is more EP than HP. A computer can do that. It's the recursion of modelling the modeller as mind that may resolve the HP (a Hofstadter strange feedback loop).
I’m going back to your post #830 here: when you say that modelling others as minds is Easy Problem stuff which even a computer can do, you seem to be implying that computers, however powerful, can’t have phenomenal experiences and can never be programmed to be involved with the Hard Problem. Is that an implication you would agree with? Could a sufficiently powerful computer be programmed to have autonomous control and to generate semantics?
David Talbot wrote:Although this was an incremental improvement statistically, it reflected a milestone in the field of affective computing. While people notoriously have a hard time articulating how they feel, now it is clear that machines can not only read some of their feelings but also go a step farther and predict the statistical likelihood of later behavior.
GrahamH wrote:If robots (embodied computers) 'evolve' through genetic and other heuristic self-programming methods and through social interaction with humans, and come to interpret their own function in subjective semantics - if they model themselves as sentient beings - I think it would be hard to justify not regarding them as phenomenally conscious in much the same sense as humans.
If phenomenal consciousness in humans is cognitive it seems quite plausible to me that some non-biological systems might generate subjective semantics that would be comparable to human phenomenal experience.
If computers can be programmed to develop their own subjective semantics, and if a self-model cognitive hypothesis of consciousness holds up, there is no hard problem, because what seemed insolubly 'hard' becomes 'easy'.
zoon wrote:GrahamH wrote:If robots (embodied computers) 'evolve' through genetic and other heuristic self-programming methods and through social interaction with humans, and come to interpret their own function in subjective semantics - if they model themselves as sentient beings - I think it would be hard to justify not regarding them as phenomenally conscious in much the same sense as humans.
If phenomenal consciousness in humans is cognitive it seems quite plausible to me that some non-biological systems might generate subjective semantics that would be comparable to human phenomenal experience.
If computers can be programmed to develop their own subjective semantics, and if a self-model cognitive hypothesis of consciousness holds up, there is no hard problem, because what seemed insolubly 'hard' becomes 'easy'.
Again, I’m not sure whether I’ve understood what you are saying here. You seem to be saying that if a computer/robot were built to reprogram itself through social interaction with humans, then it might become phenomenally conscious, whereas if it were set up with a fixed program, without the self-programming or social interaction, then it could not have phenomenal consciousness?
If that is what you are claiming, then I am puzzled, because it seems to me that if a computer can program itself to do something (including to have experiences), then there’s no essential reason why it couldn’t be programmed, or directly hard-wired, to do the same thing.
zoon wrote:(My own take is that the Hard Problem exists because we are confused robots; we’ve evolved to model and interpret others and ourselves as essentially autonomous individuals with essentially private phenomenal consciousness, but scientific discoveries are showing us that that interpretation is in the end mistaken. Since programming robots to be confused is not in principle an impossible task, I would see the ‘hard’ problem as reducible to the ‘easy’ one.)
GrahamH wrote:.....
We don't know how to write a conscious program, and my guess is we never will. What we might do is write a program that can develop consciousness. This is a likely practical limit on human capability to understand the details of such a system.......
DavidMcC wrote:zoon wrote:...
(My own take is that the Hard Problem exists because we are confused robots; we’ve evolved to model and interpret others and ourselves as essentially autonomous individuals with essentially private phenomenal consciousness, but scientific discoveries are showing us that that interpretation is in the end mistaken. Since programming robots to be confused is not in principle an impossible task, I would see the ‘hard’ problem as reducible to the ‘easy’ one.)
I was not aware that the privacy of phenomenal consciousness was "just an illusion", and one that has been exposed as such by "scientific discoveries". Perhaps you can point us to them.
GrahamH wrote:zoon wrote:(My own take is that the Hard Problem exists because we are confused robots; we’ve evolved to model and interpret others and ourselves as essentially autonomous individuals with essentially private phenomenal consciousness, but scientific discoveries are showing us that that interpretation is in the end mistaken. Since programming robots to be confused is not in principle an impossible task, I would see the ‘hard’ problem as reducible to the ‘easy’ one.)
That is the basic cognitive consciousness scenario, yes. There might not be any qualia, but there are meaningful references to such things, if we can count meaning as grounded in real world relations and function.
I think there is a significant element of autonomy - working it out for itself - that is central to consciousness. That is the core of my objection to classing puppets or rocks as conscious. Such things cannot work anything out for themselves, least of all by working out how to model their own behaviour.
P.S. 'Become phenomenally conscious' is dangerous phrasing. I understand you to mean equivalent to a human, so it may be that the machine generates cognitive illusions of a self having experiences and attributes that illusion to its own system such that it recognises itself as the illusory self-experience.
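A toy sketch (purely illustrative, in Python; the class and update rule are invented for illustration and are not anyone's proposed mechanism) of the 'modelling the modeller' recursion discussed above: an agent keeps a running predictive model of its own choice behaviour and consults that self-model every time it chooses, so the model and the behaviour it models feed back into each other.

class SelfModellingAgent:
    def __init__(self):
        # The agent's model of its own behaviour: estimated probability of each action.
        self.self_model = {"explore": 0.5, "exploit": 0.5}

    def choose(self):
        # The choice consults the self-model: do whatever the model says
        # "I" am currently less likely to do, to keep behaviour varied.
        predicted = max(self.self_model, key=self.self_model.get)
        action = "explore" if predicted == "exploit" else "exploit"
        self._update_self_model(action)
        return action

    def _update_self_model(self, action):
        # The modeller re-models the modeller: running average toward observed choices.
        for a in self.self_model:
            target = 1.0 if a == action else 0.0
            self.self_model[a] += 0.1 * (target - self.self_model[a])

agent = SelfModellingAgent()
print([agent.choose() for _ in range(6)])
print(agent.self_model)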