Stephen Colbert wrote:Now, like all great theologies, Bill [O'Reilly]'s can be boiled down to one sentence - 'There must be a god, because I don't know how things work.'
Sityl wrote:All of the diagrams in Twistor's posts look dirty to me.
The Damned wrote:hackenslash wrote: Downloaded for later.
Incidentally, I have asked for this topic to be stickied. It would be a shame to lose such brilliant work. I am reading and re-reading until it sinks in.
I don't think I have the maths yet to really get to grips with it, but hell as you say its worth some serious effort. Kudos guys.
Is LQG more testable than string theory, i.e. by experiment? If so, can anyone explain how?
The Damned wrote: Is LQG more testable than string theory, i.e. by experiment? If so, can anyone explain how?
Guess not. Well, science is always ready to see new ideas, but unless they pay the bills, what are they worth?
twistor59 wrote:The Damned wrote: Is LQG more testable than string theory, i.e. by experiment? If so, can anyone explain how?
Nope, I'm afraid it isn't developed enough to the stage where it makes testable predictions yet. It's barely reached the stage where you can show that its classical limit is standard GR. So still quite early days (compared to string theory). It's been around for, maybe, 15 years or so with only a handful of people working on it, so not entirely surprising. Some of the cosmological applications of the theory ("loop quantum cosmology") make predictions of non-singular initial conditions, and this may have some implications for structure formation (galaxies etc.) in the early universe, and CMB statistics, but I don't know the details.
The Damned wrote:Ok, I'm a bit of a numpty. Can someone break this down in simple calculus terminology? I'm having a hard time following what exactly is meant by the topology.
The Damned wrote:
I understand eigenstates, you don't have to dumb it down too much, but those eigenvectors look strange!?
I mean, they appear not to be the usual uniform vectors but seem to be arbitrarily non-Euclidean in form. Why is that, and am I just wrong?
Vector mathematics is OK, but I haven't studied tensors yet, so if you want to expand my matrix understanding that's fine. Just to set some parameters.
twistor59 wrote:The Damned wrote:Ok, I'm a bit of a numpty. Can someone break this down in simple calculus terminology? I'm having a hard time following what exactly is meant by the topology.
Defining the topology of a set is basically giving it the property of a shape. Well, almost - when you've defined its topology, you've only defined the shape up to equivalence under continuous deformations. By this I mean that two sets A and B "have the same topology" if you can continuously deform A into B. Continuously means without ripping holes in it, or gluing bits together.
Take for example this set:
{ x ∈ ℝ : 0 ≤ x ≤ 1 }
i.e. just the real interval between 0 and 1. You can change its topology by gluing ("identifying") the points zero and one. The set then has the topology of a circle. If you head on up towards 1, you eventually pop out at 0 again.
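To make the gluing concrete, here's a quick Python sketch (my own toy illustration, not from the posts above): once 0 is identified with 1, the natural distance on the glued interval wraps around, so points near opposite ends of [0, 1] become neighbours.

```python
def circle_distance(x: float, y: float) -> float:
    """Distance between two points of [0, 1] after gluing 0 to 1.

    On the quotient, d(x, y) = min(|x - y|, 1 - |x - y|): you can go
    either way around the circle and take the shorter route.
    """
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

# 0.05 and 0.95 are far apart on the interval, but close on the circle:
print(circle_distance(0.05, 0.95))  # ≈ 0.1, not 0.9
print(circle_distance(0.0, 1.0))    # 0.0 - the glued points coincide
```

If you "head on up towards 1 and pop out at 0 again", this is exactly the distance function you end up measuring with.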
Topologies are rigorously defined using the concept of open sets (tons of stuff on it via Google).
Sets in simple spaces like ℝⁿ come ready-equipped with an "obvious" topology - the metric topology, where the open sets are defined in terms of the distance measures obtained from the metric.
The Damned wrote:
I understand eigenstates, you don't have to dumb it down too much, but those eigenvectors look strange!?
I mean, they appear not to be the usual uniform vectors but seem to be arbitrarily non-Euclidean in form. Why is that, and am I just wrong?
Vector mathematics is OK, but I haven't studied tensors yet, so if you want to expand my matrix understanding that's fine. Just to set some parameters.
Which eigenvectors look strange? The eigenvectors in the LQG posts are elements of Hilbert spaces, and you can add, subtract and multiply by scalars just like in any ordinary vector space. The fundamental difference is that the Hilbert spaces may be infinite-dimensional.
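For anyone who wants to see that eigenvectors are just ordinary vectors, here's a toy finite-dimensional example in plain Python (the matrix and vectors are my own made-up illustration): an eigenvector v of A is simply a vector satisfying A v = λ v.

```python
def mat_vec(A, v):
    """Multiply a 2x2 matrix A by a 2-vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[2.0, 1.0],
     [1.0, 2.0]]

v = [1.0, 1.0]    # eigenvector with eigenvalue 3
w = [1.0, -1.0]   # eigenvector with eigenvalue 1

print(mat_vec(A, v))  # [3.0, 3.0]  = 3 * v
print(mat_vec(A, w))  # [1.0, -1.0] = 1 * w
```

The Hilbert spaces in the LQG posts work the same way; the matrices just become operators and the vectors may have infinitely many components.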
The Damned wrote:
Would it be useful to say it's infinitely elliptical or not, i.e. there is no limit to the amount of bending we can do as long as we specify limits? Maths is useful here in that respect?
twistor59 wrote:The Damned wrote:
Would it be useful to say it's infinitely elliptical or not, i.e. there is no limit to the amount of bending we can do as long as we specify limits? Maths is useful here in that respect?
Well, if you take a circle, and bend it fucking hard, you can put four kinks into it and make it a square. But this is perfectly acceptable - the circle and the square have the same topology. Even bending this much is still a "continuous deformation". Just don't break it!
The maths only becomes ugly when you try to formulate things totally rigorously and specify everything in all its gory detail.
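As a concrete (and entirely optional) illustration of that circle-to-square deformation, here's a little Python sketch of my own: each point of the unit circle gets pushed radially outward onto the unit square, with a parameter t interpolating between the two shapes. No ripping, no gluing - just rescaling.

```python
import math

def deform(theta: float, t: float):
    """Continuously deform the unit circle (t=0) into the unit square (t=1).

    Each circle point (cos theta, sin theta) is rescaled radially so that
    at t=1 it lands on the boundary of the square max(|x|, |y|) = 1.
    """
    x, y = math.cos(theta), math.sin(theta)
    scale = 1.0 / max(abs(x), abs(y))   # factor that puts it on the square
    s = (1.0 - t) + t * scale           # interpolate between the shapes
    return (s * x, s * y)

# t=0 gives circle points, t=1 gives square points:
print(deform(math.pi / 4, 1.0))  # ≈ (1.0, 1.0), a corner of the square
print(deform(math.pi / 4, 0.0))  # ≈ (0.707, 0.707), back on the circle
```

Every intermediate t gives a perfectly good closed curve, which is exactly why the circle and the square have the same topology.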
The Damned wrote:
In topology linearity is quite important. Hence the Fields Medal - have you seen the proof for that, the Perelman thing, with the computer graphics? I thought it was beyond brilliant. I wish I could even approach thinking like that!
The problem
The Poincaré conjecture, proposed by French mathematician Henri Poincaré in 1904, was the most famous open problem in topology. Any loop on a three-dimensional sphere—as exemplified by the set of points at a distance of 1 from the origin in four-dimensional Euclidean space—can be contracted to a point. The Poincaré conjecture asserts that any closed three-dimensional manifold such that any loop can be contracted to a point is topologically a three-dimensional sphere. The analogous result has been known to be true in dimensions greater than or equal to five since 1960 (work of Stephen Smale). The four-dimensional case resisted longer, finally being solved in 1982 by Michael Freedman. But the case of three-manifolds turned out to be the hardest of them all. Roughly speaking, this is because in topologically manipulating a three-manifold, there are too few dimensions to move "problematic regions" out of the way without interfering with something else.
In 1999, the Clay Mathematics Institute announced the Millennium Prize Problems: $1,000,000 prizes for the proof of any of seven conjectures, including the Poincaré conjecture. There was wide agreement that a successful proof of any of these would constitute a landmark event in the history of mathematics.
Perelman's proof
In November 2002, Perelman posted the first of a series of eprints to the arXiv, in which he claimed to have outlined a proof of the geometrization conjecture, of which the Poincaré conjecture is a particular case.[10][11][12]
Perelman modified Richard Hamilton's program for a proof of the conjecture, in which the central idea is the notion of the Ricci flow. Hamilton's basic idea is to formulate a "dynamical process" in which a given three-manifold is geometrically distorted, such that this distortion process is governed by a differential equation analogous to the heat equation. The heat equation describes the behavior of scalar quantities such as temperature; it ensures that concentrations of elevated temperature will spread out until a uniform temperature is achieved throughout an object. Similarly, the Ricci flow describes the behavior of a tensorial quantity, the Ricci curvature tensor. Hamilton's hope was that under the Ricci flow, concentrations of large curvature will spread out until a uniform curvature is achieved over the entire three-manifold. If so, if one starts with any three-manifold and lets the Ricci flow occur, eventually one should in principle obtain a kind of "normal form". According to William Thurston, this normal form must take one of a small number of possibilities, each having a different kind of geometry, called Thurston model geometries.
This is similar to formulating a dynamical process that gradually "perturbs" a given square matrix, and that is guaranteed to result after a finite time in its rational canonical form.
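To see the kind of smoothing behaviour the heat-equation analogy refers to, here's a toy Python demo (my own, and it's ordinary discrete heat flow on a ring of cells, not Ricci flow itself): an initial concentration spreads out until every cell sits at the uniform average, just as Hamilton hoped curvature would under the Ricci flow.

```python
def heat_step(u, k=0.25):
    """One explicit step of discrete heat flow on a ring of cells.

    Each cell moves toward the average of its two neighbours; the total
    'heat' sum(u) is conserved at every step.
    """
    n = len(u)
    return [u[i] + k * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

u = [10.0] + [0.0] * 9   # one hot cell on a ring of 10, total heat = 10
for _ in range(500):
    u = heat_step(u)

print(u)  # every cell is now very close to the uniform average, 1.0
```

The hard part of Perelman's work is precisely that the curvature analogue of this process can develop singularities instead of smoothing out, which is what the "surgery" handles.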
Hamilton's idea attracted a great deal of attention, but no one could prove that the process would not be impeded by developing "singularities", until Perelman's eprints sketched a program for overcoming these obstacles. According to Perelman, a modification of the standard Ricci flow, called Ricci flow with surgery, can systematically excise singular regions as they develop, in a controlled way.
We know that singularities (including those that, roughly speaking, occur after the flow has continued for an infinite amount of time) must occur in many cases. However, any singularity that develops in a finite time is essentially a "pinching" along certain spheres corresponding to the prime decomposition of the 3-manifold. Furthermore, any "infinite time" singularities result from certain collapsing pieces of the JSJ decomposition. Perelman's work proves this claim and thus proves the geometrization conjecture.
The Damned wrote:
There was a graphical representation of his topology of dimensions that could support the proof: an MPEG describing the folding of a sphere in 3 dimensions. It was amazingly complex and yet brilliantly elegant.
I can't find it atm but it is fascinating.
twistor59 wrote:The Damned wrote: An MPEG describing the folding of a sphere in 3 dimensions... I can't find it atm but it is fascinating.
Post it in the maths thread if you find it.
hackenslash wrote:Twistor has been nominated for an Orson for this excellent thread. On reflection, I am awarding a double.
Re: What exactly is loop quantum gravity?
It's difficult w/o math.
The problem with quantum gravity is that naive quantization schemes (which have been applied successfully to other fields) fail for gravity. That means that something fundamental has to be changed for quantum gravity.
There are different approaches to solve these problems, e.g.
a) string theory
b) asymptotic safety
c) loop quantum gravity (LQG)
I don't want to comment on a) and b) here.
Essentially LQG does the following: it introduces new variables which replace the metric, well-known from GR, that describes spacetime + curvature. This is pure math, so I don't want to go into details here, but what happens is that these new variables are rather close to fields that we know from gauge theories like QED and QCD. Indeed, in a certain sense gravity looks rather similar to QCD, but there is one additional property of gravity that allows one to apply a second mathematical trick which essentially replaces the fundamental fields with something like "fluxes through surfaces" or "fluxes along circles". These surfaces and circles are embedded into spacetime.
The next step is again rather technical, and it becomes possible due to so-called diffeomorphism invariance: one can get rid of the embedding of circles and surfaces into spacetime. Instead one replaces these entities with a so-called spin network, i.e. a graph with nodes and links between nodes, where each link and each node carries some numbers which represent abstract entities from which certain properties of spacetime can be reconstructed. You can think about spacetime as made of cells (I will soon tell you that you can't!); each cell has a certain volume carried by a node; each cell has certain surfaces, and the links between different nodes (sitting inside these cells) carry the areas of the surfaces.
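As a purely illustrative sketch (the data structure and names are my own; the one genuine LQG ingredient is that the area carried by a link of spin j goes like √(j(j+1)), with the overall Planck-scale prefactor suppressed), a spin network can be modelled as nothing more than a labelled graph:

```python
import math

def link_area(j: float) -> float:
    """Area quantum carried by a link of spin j.

    In LQG the area spectrum is proportional to sqrt(j*(j+1)); the
    dimensionful prefactor (Planck area, Immirzi parameter) is dropped here.
    """
    return math.sqrt(j * (j + 1))

# A toy network: three nodes, each link stored as (node_a, node_b, spin j).
# Spins are half-integers, just like angular momentum quantum numbers.
links = [(0, 1, 0.5), (1, 2, 1.0), (2, 0, 0.5)]

# The area of a surface punctured by all three links is the sum of the
# individual quanta - area comes in discrete lumps, not continuously:
total = sum(link_area(j) for _, _, j in links)
print(round(total, 4))
```

The point of the toy model is the one made in the text: there is no "space" that this graph sits in. The graph, with its numbers, is all there is.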
The problem with this picture is that one might think about these cells as sitting in spacetime - but this is fundamentally wrong: this picture is only due to the construction, but basically there is no spacetime anymore; all there is are nodes and links (and certain numbers attributed to nodes and links). Spacetime is no longer fundamental but becomes an entity emerging from the more fundamental graphs with their nodes and links. The graphs are called spin networks b/c the numbers they are carrying have properties well-known from spins. But this is a mathematical property only; it does not mean that there are real spinning objects.
Compare this emerging spacetime to the water surface of a lake. We know that it consists of atoms, and as soon as we get this picture it is clear that there is no water between the atoms; the surface is only an emerging phenomenon, the true fundamental objects are the atoms. In the same sense the spin networks are the fundamental entities from which spacetime, surfaces etc. and their properties like volume, area, curvature etc. can be constructed. Dynamics of spacetime (which was curvature, gravitational waves etc. in GR) is replaced by dynamics of spin networks: within a given graph, new nodes with new links can appear (there are mathematical rules, but I don't want to go into detail here).
The last puzzle I have for you is the fact that such a spin network is not a mechanical object which "is" spacetime. Instead, quantized spacetime is a superposition of (infinitely many) spin networks. This is well-known in quantum mechanics; there is no reason why an atom should be in a certain state; we can achieve that via preparation or measurement, but in principle a single atom can be in an arbitrarily complex quantum state which is a superposition of "an atom sitting here, an atom moving in a certain direction over there, an atom moving in this or that direction, ...".
So classical spacetime is recovered by two averaging processes: first, there seems to be a regime where this superposition of spin networks is peaked around a single classical spacetime, i.e. where one network dominates the superposition of infinitely many spin networks; second, from this single spin network one can reconstruct spacetime in the same sense as one can reconstruct the water surface from the individual atoms. But there may be different regimes (e.g. in black holes or close to the big bang) where this classical picture and this averaging no longer work. It may be that in these regimes all there is are spin networks w/o any classical property like smooth spacetime, areas, volume etc. It's like looking at a single atom: there is no water surface anymore.
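Schematically (with completely made-up amplitudes and network names), the "superposition of spin networks" and the "peaking" can be pictured exactly like an ordinary quantum state expanded in a basis:

```python
import math

# Hypothetical illustration: quantized spacetime as a superposition of
# spin-network basis states. Keys name networks, values are amplitudes
# (real here for simplicity; in general they are complex).
state = {"network_A": 0.8, "network_B": 0.36, "network_C": 0.48}

# Normalize so the squared amplitudes (probabilities) sum to 1:
norm = math.sqrt(sum(abs(a) ** 2 for a in state.values()))
state = {k: a / norm for k, a in state.items()}

probs = {k: abs(a) ** 2 for k, a in state.items()}

# A 'classical' regime corresponds to the superposition being peaked,
# i.e. one network carrying most of the weight:
print(max(probs, key=probs.get))  # the dominant network
```

When no single term dominates, the second averaging step has nothing to latch onto, which is the "single atom, no water surface" situation described above.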
Eventually this is why one started with this stuff: the classical picture of spacetime seems to become inconsistent when one tries to quantize it, i.e. when one defines these superpositions etc. These inconsistencies do not bother us as long as we talk about spacetime here, in the solar system etc. But they become a pain in the a... when we talk about spacetime near a singularity like a black hole or like the big bang. In order to understand these new non-classical regimes of spacetime, a fundamentally new picture is required. This is what LQG (and other approaches) are aiming for: construct a new mathematical model from which well-known classical spacetime (like in GR) can be reconstructed, but which does not break down in certain regimes but instead remains well-defined and consistent.