Posted: Oct 19, 2018 9:50 am
by zoon
scott1328 wrote:
zoon wrote:
scott1328 wrote:What is this "ultimate free will" you are referring to? Is it as incoherent as "could have done otherwise"? We don't even actually know that our machines are "ultimately determinate", that is to say, "could NOT have done otherwise."

These terms are meaningless. There is no possible way to decide whether something could have done otherwise or not.

I agree with the narrator of your video (transcript here) when he argues that free will is a useful concept in the context of using punishment and reward to modify behaviour.

I also think that if neuroscience were to reach the point where behaviour could easily be modified directly by altering brain structures, then there would be no point in using punishment and reward, because altering the structure of the brain would give far more detailed control. Punishment, reward and free will would become redundant concepts.

So my view is that free will is a useful concept in the context of our current ignorance of brain mechanisms, but that it would drop out of use if we understood ourselves fully in scientific terms. It’s in that sense that I’m saying free will is not ultimate.

Certainly, we don’t yet know that we are determinate, but the evidence all points that way?

Don’t you get that it is the agent’s ability to evaluate consequences that is the sine qua non of free will?

I agree with you, and with the narrator of your video (transcript linked in my quoted post above), that the ability to evaluate consequences is essential for us to ascribe free will, but I don’t think it’s enough. In the video, the narrator first assumes it is enough, saying that anything which can evaluate the probable consequences of an action, and so respond to the threat of punishment, can be said to have free will. However, he backtracks almost immediately, saying that a cat does not have free will even though a cat can evaluate consequences. He then redefines free will: something has free will if it can evaluate consequences from what happens to another individual, as well as from what happens to itself. Since, in his opinion, cats cannot do this, he says that cats can be trained, but they don’t have free will, so “punishment” is not the appropriate term when training cats.

Apart from my view that he’s underestimating cats, I think his definition of free will has now become much more complicated and fuzzy; it has lost the virtue of simplicity, and I still don’t agree with it. I agree that we don’t normally ascribe free will to cats, but I don’t think the reason rests only on whether a cat can work out consequences from what happens to another cat. Again, I think this ability is essential but not enough; my view of free will is that it’s even more complicated and fuzzier than that. I’m not laying claim to scientific accuracy, only to the usefulness of a concept which is still central to the way humans organise social life.

In particular, the narrator of the video claims that a robot has free will if it has been programmed to respond both to punishment and to its observations of another robot being punished. I think the science of robotics is reaching the stage where a robot might be programmed to achieve this, at least in some simple situations, and I’m not inclined to say that such a robot has free will while a cat doesn’t. Unlike a person, such a robot could be re-programmed in detail, and the re-programming would not be regarded as unethical. Quoting from the transcript:
To illustrate, imagine a possible world wherein every home comes preinstalled with its own robot butler. Now imagine that, for whatever reason, our butlers tend to act out in strange ways. For instance, maybe they smash up our dishes and then rearrange our furniture while we sleep. Under most circumstances, we would simply correct the malfunction by tracking down the faulty lines of code and then updating them accordingly. In the future, however, there might not be any code to fix. Most machine learning algorithms today are not based on pure, iterative logic, but on neural networks derived from fitness functions acting on the raw experience of the environment itself. Thus, if we ever want to correct our robots’ misbehaviors, we may actually have to train them through the institution of reward and punishment. And if, by some happenstance, our robots reach a point wherein they can learn from the experiences of each other, then we wouldn’t have to train them all individually to achieve the desired result. Instead, we could single out an individual robot and then make a very public spectacle out of its punishment. If doing so results in a marked deterrence of future misbehaviors, then we will have officially satisfied the definition of free will. And why not? For all practical purposes, that’s basically how we govern human social behaviors already, so it makes perfect sense to describe a hypothetical robot population in exactly the same terms.

My main beef with that passage is that it describes a markedly silly way to build robot butlers. Why on earth train them via the slow and inefficient method of punishment and reward, when they could perfectly well be programmed to do what you want in the first place? Direct programming would be very much simpler, as well as giving better results.
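To make the contrast concrete, here is a minimal toy sketch in Python (my own illustration, not from the video; the butler scenario and every name in it are hypothetical). The programmed butler encodes the desired behaviour in a single line, while the trained butler has to discover the same behaviour through repeated reward and punishment, using a simple bandit-style value update:

    # Toy contrast (hypothetical names throughout): programming a behaviour
    # directly versus learning it through reward and punishment.
    import random

    ACTIONS = ["wash_dishes", "smash_dishes"]

    # Direct programming: one line states the desired behaviour outright.
    def programmed_butler():
        return "wash_dishes"

    # Reward/punishment: a simple value learner must discover the same
    # behaviour from repeated feedback (+1 reward, -1 punishment).
    def train_butler(episodes=500, epsilon=0.1, lr=0.1):
        values = {a: 0.0 for a in ACTIONS}  # estimated value of each action
        for _ in range(episodes):
            # Explore occasionally; otherwise pick the best-valued action.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(values, key=values.get)
            feedback = 1.0 if action == "wash_dishes" else -1.0  # punish/reward
            values[action] += lr * (feedback - values[action])   # value update
        return max(values, key=values.get), values

    if __name__ == "__main__":
        print("programmed:", programmed_butler())  # exact, immediate
        learned, values = train_butler()
        print("learned:  ", learned, values)       # after hundreds of trials

Even in this trivial two-action world, the trained butler takes hundreds of feedback episodes, plus exploration and learning-rate parameters to tune, before it converges on what the programmed butler states outright; the gap only widens as the task gets more complex.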

When dealing with people, we don’t (yet) have the option of reprogramming, because we don’t know how. We are a social species which has evolved a wonderfully complex collection of systems enabling us to cooperate closely in groups while still being competitive individuals. A unique aspect of human groups, as contrasted with other animal groups, is that they set up systems of rules, and any individual who breaks a rule is liable to be ganged up on by the rest of the group. This is one of the ways evolution has, for practical purposes, squared the circle (well enough, not perfectly) of getting competitive individuals to cooperate and still pass on their genes more effectively.

If this ganging up on rule-breakers is to work as a useful deterrent, then it makes sense to check first whether the rule-breaker chose to break the rule, or whether they were forced or incompetent (e.g. ill), and this is where the concept of free will comes in. It seems to me that free will is entangled with the other prescientific concepts which we still need in order to cooperate effectively with people in our own group. For example, a person is taken to have subjective experiences and to be capable of suffering, and causing another person to suffer without good reason is a central ground for punishing people; that is, for causing more suffering, but this time for good reason. I’m with Sapolsky when he says it’s complicated.