Posted: Apr 15, 2017 8:06 am
by ProgrammingGodJordan
twistor59 wrote:I downloaded the supermanifold hypothesis paper from academia.edu. It contains an abstract, summary and references section, but no body.

Can you give an example of how you use Grassmann-valued coordinates in datasets in an ML context?


That is an optimal question.

(A)
The supermanifold hypothesis in deep learning prescribes clamping the Grassmannian parameters in a particular regime, after which largely real-valued numbers are usable... (In other words, some Grassmannian boundary properties are perhaps feasible, whence observations in deep neural models don't strictly require Grassmann-aligned numbers.)

★★ Feasible properties lie in the boundary of 'eta' ★★, or in direct numerical simulations etc., at least for an initial 'trivial' example of reinforcement-like learning in this paradigm.

Other references: https://ncatlab.org/nlab/show/Euclidean+supermanifold
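As a purely informal illustration (my own, not from the paper) of what "clamping the Grassmannian parameters" could look like numerically: the sketch below represents a single Grassmann generator eta by a nilpotent 2x2 matrix (so eta*eta = 0), builds a "supernumber" body + soul*eta, and then clamps the soul part away, leaving the ordinary real number a standard deep net would actually consume. The function names and the matrix representation are my own assumptions.

Code: Select all
# Informal sketch (not from the paper): one Grassmann generator eta, eta^2 = 0,
# represented by a nilpotent 2x2 matrix; clamping the Grassmann ("soul") part
# leaves an ordinary real number usable by a standard deep net.
import numpy as np

ETA = np.array([[0.0, 1.0],
                [0.0, 0.0]])          # nilpotent: ETA @ ETA is the zero matrix
I = np.eye(2)

def supernumber(body, soul):
    """Matrix representation of body + soul * eta (names are mine)."""
    return body * I + soul * ETA

def clamp_to_body(z):
    """Discard the Grassmann (soul) part, keeping the ordinary real part."""
    return z[0, 0]

x = supernumber(0.7, 0.3)             # a toy Grassmann-valued "coordinate"
assert np.allclose(ETA @ ETA, 0.0)    # eta^2 = 0, the defining property
print(clamp_to_body(x))               # 0.7 -- the real number a deep net would see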


(B)
The body is actually present:

[Image: screenshot of the paper's body]


Essentially, the neural-bound supermanifold hypothesis' equation, in summary, merely represents a consistency of particular layer-wise properties (homeomorphisms, bijective inverses, etc.) per input-data transformation (pertinently, as it relates to some temporal-difference paradigm). Norm calculations etc. offset the aforesaid properties' consistency, pertinently, around some parametric-oscillation paradigm containing Zλ.
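To make the "layer-wise bijective inverse" phrasing above concrete, here is a small illustrative sketch (my own, not the paper's construction): a single affine-coupling-style layer that is bijective by construction, with a norm check confirming that its explicit inverse recovers the input.

Code: Select all
# Illustrative only: one invertible (bijective) layer in the affine-coupling
# style, plus a norm check that its explicit inverse recovers the input.
# The shapes and names are my own choices, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
b = rng.normal(size=2)

def forward(x):
    # Split the input; scale-and-shift the second half conditioned on the first.
    x1, x2 = x[:2], x[2:]
    y2 = x2 * np.exp(W @ x1) + b       # positive scale, hence invertible
    return np.concatenate([x1, y2])

def inverse(y):
    y1, y2 = y[:2], y[2:]
    x2 = (y2 - b) * np.exp(-(W @ y1))
    return np.concatenate([y1, x2])

x = rng.normal(size=4)
y = forward(x)
print(np.linalg.norm(inverse(y) - x))  # ~0: the layer has an exact inverse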



(C)
(i) Think of my supermanifold hypothesis in deep learning as merely a reachable, time/space-complexity-optimal way to organize manifolds, as it relates to some temporal-difference horizon (a toy temporal-difference sketch follows below).

(ii) The idea is that causal laws of physics are encodable such that they may compound within the aforesaid temporal-difference paradigm.

Due to (ii), a superfield description emerges.
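Since (i) and (ii) lean on the temporal-difference vocabulary, here is a minimal, entirely standard TD(0) value-update loop on a toy chain of states, included only to make that vocabulary concrete; the environment, horizon and step size are my own illustrative choices and are not taken from the paper.

Code: Select all
# Standard TD(0) on a toy chain of states (my own illustrative setup).
import numpy as np

n_states, horizon = 5, 50
gamma, alpha = 0.9, 0.1
V = np.zeros(n_states)                 # one value estimate per state

for _ in range(1000):
    s = 0
    for _ in range(horizon):           # the temporal-difference horizon
        s_next = min(s + 1, n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # TD(0) update: move V[s] toward the bootstrapped one-step target.
        V[s] += alpha * (reward + gamma * V[s_next] - V[s])
        if s_next == n_states - 1:
            break
        s = s_next

print(np.round(V, 3))                  # values rise toward the terminal state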

[Image: superfield rendition]

In the above rendition, imagine some superior identity sequence.