Posted: Sep 05, 2017 5:06 pm
by GrahamH
VazScep wrote:
GrahamH wrote:
tuco wrote:That seems like a question for philosophy. Unless convinced otherwise, I will go with, let's say, AI scientists who consider such an environment suitable for the stated reasons. Ultimately, the point of such an exercise is imo for AI to reach set goal(s). If we take the war element out, we are left with, well, managing an economy.


I doubt the scientists give that much thought to such ethical or philosophical issues of unintended consequences

Maybe Elon Musk and Stephen Hawking have a point.

If AI is only good for war games and financial trading we would be better without it.
This sort of AI is good for any problem that amounts to numerical optimisation, such as the logistics of food distribution or town planning: standard problems for which we generally get a good bang for the buck with machine learning.
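To make "numerical optimisation" concrete, here is a toy sketch of the kind of logistics problem meant: assigning depots to delivery zones so as to minimise total cost. The cost matrix is entirely made up for illustration, and real systems would use learned cost models and far better solvers than brute force.

```python
import itertools

# Hypothetical cost matrix: cost[i][j] = cost of depot i serving zone j.
# The numbers are invented purely for illustration.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def best_assignment(cost):
    """Brute-force the depot-to-zone assignment that minimises total cost."""
    n = len(cost)
    best, best_total = None, float("inf")
    for perm in itertools.permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best, best_total = perm, total
    return best, best_total

assignment, total = best_assignment(cost)
print(assignment, total)  # depot i serves zone assignment[i]
```

The point is only that "food distribution" reduces to searching for a minimum of a cost function, which is exactly the shape of problem this kind of AI handles well.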

It happens that Chess and Go were classic AI problems, and so there was a lot of kudos in beating all humans at them. Computer games are a trivial source of AI problems, since solving games is always an AI problem. Forget the war aspect: a non-cooperative game (which is the vast majority of games) can always be described in terms of warfare. This is a case of having an all too powerful metaphor.

Another reason games are targeted is that they provide a closed environment for a learning machine: all the information is there, and the game is formally specified.
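The "closed, formally specified" point can be illustrated with about the simplest possible game: single-heap Nim, where players alternately take 1-3 stones and whoever takes the last stone wins. Because the rules fully specify every state and move, a program can exhaustively solve the game. (This is a toy sketch of game-solving, not how modern learning systems like AlphaGo actually work.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones, max_take=3):
    """True if the player to move can force a win in single-heap Nim.

    Players alternately remove 1..max_take stones; taking the last
    stone wins. The game is closed: this recursion visits every
    reachable state, which is what makes it exactly solvable.
    """
    if stones == 0:
        return False  # no stones left: the previous player just won
    return any(
        not first_player_wins(stones - take, max_take)
        for take in range(1, min(max_take, stones) + 1)
    )

# Known result: with max_take=3, positions that are multiples of 4 lose.
print([n for n in range(1, 13) if not first_player_wins(n)])
```

Nothing like this exhaustive analysis is available for open-ended real-world problems, which is precisely why games are such convenient training grounds.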


I guess it would be more comforting if AI were trained on something other than zero-sum games, because if we ever find ourselves in a zero-sum game with an advanced AI we could well be screwed.
I'm not concerned with the war aspect as such, more that future AI may carry forward aspects of playing to win into new interactions.

Someone once speculated that giving an AI the goal of winning as many games of chess as possible could have dire unforeseen consequences if the AI took over as much computing resource as possible in order to meet that goal. Don't set a goal of eliminating human suffering, in case an AI finds and implements the obvious (final) solution!