Posted: Aug 04, 2017 9:28 am
by GrahamH
jamest wrote:The biggie here is that 'computer science' doesn't have a fucking clue what emotions are or how to account for them within their 'logical' assessment of computer technology.

You nubwits think that science has all the answers, yet in terms of explaining 'you' it knows absolutely fuck all.

And yet, here you all are, like it's a Sunday and your faith impels you to do so.

A familiar theme, I think you'd agree.


The problem you have, jamest, is that you have very fixed ideas. Computers apply fixed rules of binary logic to build complex functions that are not themselves logical analysis. Computers are not logical thinkers (and nor are humans); logical reasoning is just one of the capabilities that emerge from the neural networks found in brains and from artificial neural networks.
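To make that concrete, here's a minimal sketch in plain Python (all the numbers and names are made up for illustration) of a single artificial unit. The rules it applies are fixed, mechanical arithmetic, yet networks of such units end up doing pattern recognition, which nobody would call logical deduction:

```python
# A single artificial "neuron": nothing but fixed arithmetic and a threshold.
def unit(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU: fires only if the weighted evidence adds up

# Two units feeding a third: already a (tiny) network.
features = [0.9, 0.1, 0.4]                    # e.g. pixel intensities
h1 = unit(features, [0.5, -0.3, 0.8], 0.1)
h2 = unit(features, [-0.2, 0.7, 0.1], -0.05)
score = unit([h1, h2], [1.0, 0.6], 0.0)       # a "likeness" score, not a proof
print(score)
```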

Granted, it is early days for AI, but it is already being applied to identifying violence and sexually explicit content and to predicting illegal behaviour. It is already doing things that humans don't know how to analyse logically.

https://cloud.google.com/blog/big-data/ ... vision-api

You may know the Cloud Vision API for its face, object, and landmark detection, but you might not know that the Vision API can also detect inappropriate content in images using the same machine learning models that power Google SafeSearch. Since we announced the Google Cloud Vision API GA in April, we’ve seen over 100 million requests for SafeSearch detection.
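For anyone curious what calling that looks like, here's a minimal sketch using Google's Python client library (google-cloud-vision). The filename is a placeholder and the exact client API has varied across library versions, so treat it as illustrative rather than definitive:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()

# "photo.jpg" is a placeholder; any local image will do.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# Each category comes back as a likelihood, not a yes/no verdict.
for category in ("adult", "violence", "racy"):
    likelihood = getattr(annotation, category)
    print(category, vision.Likelihood(likelihood).name)
```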


The principle of deep learning is to automatically build up the context for recognising the likeness of something, and neither that thing nor what constitutes a likeness of it has any definite boundary; both can be arbitrarily complex. Given input from bio-sensors, such an AI could infer your emotional state, for example, and use that to predict your behaviour.
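As a rough sketch of what "bio-sensors to emotional state" could mean in practice (everything here is hypothetical: the sensor count, the emotion classes, the network size), a small neural classifier of that kind might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Hypothetical setup: 8 bio-sensor channels (heart rate, skin conductance,
# temperature, ...) mapped to 4 coarse emotion classes.
SENSORS, EMOTIONS = 8, 4

model = nn.Sequential(
    nn.Linear(SENSORS, 32),
    nn.ReLU(),
    nn.Linear(32, EMOTIONS),
)

readings = torch.randn(1, SENSORS)      # one sample of sensor values
probs = model(readings).softmax(dim=1)  # belief over the emotion classes
print(probs)                            # untrained, so roughly uniform
```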
Until these things have been achieved at human level there is certainly room for doubt about what can be done, but there is no sound reason to assert it cannot be done.