Do you have a simpler/more correct sketch of ambient isotopy?
An intriguing mathematical encoding/basis of some paradigm in super-artificial intelligence.
ProgrammingGodJordan wrote:
twistor59 wrote:I downloaded the supermanifold hypothesis paper from academia.edu. It contains an abstract, summary and references section, but no body.
Can you give an example of how you use Grassmann-valued coordinates in datasets in an ML context?
That is an optimal question.
(A)
- Code: Select all
The supermanifold hypothesis in deep learning prescribes clamping the Grassmannian parameters in a particular regime, after which largely real numbers are usable. (In other words, some Grassmannian-bound properties are perhaps feasible, whence observations in deep neural models don't strictly require Grassmann-aligned numbers.)
★★ Feasible properties lie in the boundary of 'eta' ★★, or in direct numerical simulations etc., at least for an initial 'trivial' example of reinforcement-like learning in this paradigm.
Other references: https://ncatlab.org/nlab/show/Euclidean+supermanifold
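A minimal numerical sketch of the clamping described above (purely illustrative, and an assumption on my part: a single Grassmann generator eta is represented as a nilpotent 2×2 matrix, and 'clamping' means keeping only the real body of each supernumber):
- Code: Select all
import numpy as np

# One Grassmann generator eta: nilpotent, eta^2 = 0, so any supernumber
# z = body + soul * eta has only a term linear in eta.
eta = np.array([[0.0, 1.0],
                [0.0, 0.0]])

def supernumber(body, soul):
    """Represent z = body + soul * eta as a 2x2 matrix."""
    return body * np.eye(2) + soul * eta

def clamp_to_body(z):
    """'Clamp' a supernumber to its real body -- the part an ordinary
    real-valued neural-network parameter can use."""
    return z[0, 0]

z = supernumber(0.7, 0.3)
assert np.allclose(eta @ eta, 0.0)   # nilpotency: eta^2 = 0
print(clamp_to_body(z))              # -> 0.7, the usable real parameter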
(B)
The body is actually present:
Essentially, the neural-bound supermanifold hypothesis' equation, in summary, merely represents a consistency of particular layer-wise properties (homeomorphisms, bijective inverses, etc.) per input data transformation (pertinently, as it relates to some temporal difference paradigm). Norm calculations etc. offset the aforesaid properties' consistency.
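A minimal sketch of the 'bijective inverse per layer-wise transformation' idea (my own illustrative choice of an invertible affine layer, not the equation from the paper):
- Code: Select all
import numpy as np

rng = np.random.default_rng(0)

# An invertible (bijective) layer: y = W x + b, with W square and non-singular,
# so the inverse map x = W^{-1}(y - b) exists for every input transformation.
W = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # random matrix, shifted toward the identity to keep it well-conditioned
b = rng.normal(size=4)

def forward(x):
    return W @ x + b

def inverse(y):
    return np.linalg.solve(W, y - b)

x = rng.normal(size=4)
assert np.allclose(inverse(forward(x)), x)   # the layer-wise bijection holds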
(C)
(i) Think of my supermanifold hypothesis in deep learning as merely a tractable way to organize manifolds, as it relates to some temporal difference horizon.
(ii) The expression is that causal laws of physics are encodable such that these may compound in the aforesaid temporal difference paradigm.
Due to (ii), a superfield description emerges.
On the above rendition, imagine some superior identity sequence.
This is 'simply' because I detected evidence that life's meaning probably occurs on the horizon of optimization (Jeremy England, "Dissipative Adaptation"…).
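For readers unfamiliar with the 'temporal difference paradigm' referenced above, here is a minimal tabular TD(0) value update (standard reinforcement learning, not anything specific to the supermanifold hypothesis):
- Code: Select all
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    # Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return V

V = np.zeros(3)                          # value estimates for 3 states
V = td0_update(V, s=0, r=1.0, s_next=1)
print(V)                                 # -> [0.1, 0.0, 0.0]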
LucidFlight wrote:Great thread so far...
Keep It Real wrote:Jordan, I guess you do have some obscure esoteric mathematical knowledge pertaining to AI, but it's not accessible to most, and this forum is populated by non-mathematicians mainly. If you could explain the concepts in the OP to a community of non-mathematicians that would ingratiate you with me, at least, if not many others. I find AI fascinating but would rather you communicated on my level than have to learn the difficult math. All your esoterica does is alienate.
ProgrammingGodJordan wrote:I am a casual body builder/...This is 'simply' because I detected evidence that life's meaning probably occurs on the horizon of optimization...
Keep It Real wrote:Jordan, I guess you do have some obscure esoteric mathematical knowledge pertaining to AI, but it's not accessible to most...
ProgrammingGodJordan wrote:
Essentially, the neural-bound supermanifold hypothesis' equation, in summary, merely represents a consistency of particular layer-wise properties (homeomorphisms, bijective inverses, etc.) per input data transformation (pertinently, as it relates to some temporal difference paradigm). Norm calculations etc. offset the aforesaid properties' consistency, pertinently, about some parametric oscillation paradigm containing Zλ.
ProgrammingGodJordan wrote:
Keep It Real wrote:Jordan, I guess you do have some obscure esoteric mathematical knowledge pertaining to AI, but it's not accessible to most...
As a lazy atheist/coder, I am not equipped with any such "obscure esoteric mathematical knowledge".
John Platko wrote:
ProgrammingGodJordan wrote:As a lazy atheist/coder, I am not equipped with any such "obscure esoteric mathematical knowledge".
ProgrammingGodJordan, what programming environment does "a lazy atheist/coder" find optimal for deep learning projects?
I'm considering Keras - is that a good choice?
ProgrammingGodJordan wrote:
John Platko wrote:ProgrammingGodJordan, what programming environment does "a lazy atheist/coder" find optimal for deep learning projects? I'm considering Keras - is that a good choice?
I am mostly familiar with mxnet.
(I once used mxnet for heart-irregularity detection, using residual neural networks, in a Kaggle competition. (Interesting note: my submission, specifically the result sequence produced by the model, was ranked 76th out of 500+ worldwide near the end of the competition.))
I am now experimenting with TensorFlow.
Note: if you weren't aware before, these models require a strong GPU (or several).
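For reference, a residual block of the kind mentioned above can be sketched in a few lines (shown here with the Keras functional API purely for illustration; the layer sizes are placeholders, and this is not the mxnet model from the competition):
- Code: Select all
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64, kernel_size=3):
    # F(x): two 1-D convolutions over the signal (e.g. an ECG trace)
    h = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    h = layers.Conv1D(filters, kernel_size, padding="same")(h)
    # Skip connection: y = relu(F(x) + x) is what makes the block "residual".
    return layers.Activation("relu")(layers.Add()([x, h]))

inputs = layers.Input(shape=(1000, 64))            # placeholder sequence length / channels
pooled = layers.GlobalAveragePooling1D()(residual_block(inputs))
outputs = layers.Dense(2, activation="softmax")(pooled)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")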
(B)
I find that, like mxnet, Keras abstracts away most of the deep stuff.
So you can quickly put together some model, but you won't really know what is going on underneath.
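For example, a complete (if trivial) classifier is only a few lines in Keras; the layer sizes here are arbitrary:
- Code: Select all
from tensorflow import keras
from tensorflow.keras import layers

# A tiny fully-connected classifier: the framework handles weight initialisation,
# backpropagation and optimisation for you.
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(16,)),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10)   # training data assumed to exist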
As a start, I learnt how to encode a neural network from scratch: https://github.com/JordanMicahBennett/S ... -SENTIENCE
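The core of such a from-scratch network is small; here is a minimal sketch (a two-layer network trained on XOR with plain NumPy and hand-written backpropagation, not the code in the repository above):
- Code: Select all
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of squared error, written out by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))   # should approach [[0], [1], [1], [0]]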
Other details:
(1) "Deep Learning Book" : http://www.deeplearningbook.org/(Includes detailed guidance by one of the fathers of deep learning Bengio)
(2) "Data science from scratch" : http://shop.oreilly.com/product/0636920033400.do (Very practical book, and easier to parse than deep learning book)
NOTE: (2) is easier to understand, but (1) contains a more wholesome picture of the grand image that is the deep learning scenario.
(C)
The following roster of deep learning libraries is useful:
https://github.com/zer0n/deepframeworks
John Platko wrote:
Thanks for the info. The Deep Learning book is very hot: it has 7 holds on 2 copies in my library network; I had an easier time getting the Rolling Stones box set. I'll check out the rest of your links. It looks like Keras has a CPU-only version. I'm not sure I want to program an algorithm from scratch - although I understand how there are advantages going that route.
crank wrote:Your OP is abstruse to say the least; it seems almost deliberately so. It doesn't help that you use the phrase "meaning of life", which we all know is meaningless, and follow it up with "probably occurs on the horizon of optimization". Is this a term of art I'm too obtuse to understand, or is it of the nature of a 'fitness horizon', which is a straightforward idea, understandable by almost anyone?
crank wrote:Life's goal state? Can you define? Also, define 'horizon of optimization'. Are these terms of art? If so, they should be easy to define, if not, it's deliberately abstruse. When someone asks for clarification, and the response includes a repeat of the term in question, and more verbiage even less transparent, it's hard not to conclude deliberate abstruseness. Surprising something so trivial as that could be misunderstood.
(i)
Life's meaning probably occurs on the horizon of optimization:
(source: MIT physicist Jeremy England proposes a new meaning of life)