Posted: Sep 06, 2010 7:54 am
by twistor59
Post 10 Summary
I thought I’d try to summarise what I’ve gathered about this subject so far:

There are well known methods for quantizing classical field theories. The archetypal example is the quantization of classical electromagnetism, giving rise to quantum electrodynamics. Applying these methods to the gravitational field produces a theory which is non-renormalizable (M.H. Goroff and A. Sagnotti, Nucl. Phys. B266, 709 (1986)). This means that the usual renormalization procedure (absorbing the divergent parts of integrals into a finite number of redefined parameters) doesn’t work for gravity: you would need infinitely many such parameters. Loop quantum gravity is one line of approach to try to produce a model which doesn’t have these problems.

The breakthrough that allowed LQG to start was Abhay Ashtekar’s reformulation of general relativity in a new set of variables. These variables are defined for the case where we “chop up” spacetime into a stack of three-space slices, and they allow GR to be formulated as a gauge theory. There are three sets of constraints which the theory must implement:

“Gauss Constraint” – to ensure it’s invariant under local SU(2) gauge transformations (local rotations of the frame field)
“Diffeomorphism Constraint” – to ensure it’s invariant under arbitrary coordinate changes on the three-space slices
“Hamiltonian Constraint” – to implement the time evolution
Applying the rules of quantization for constrained systems directly to the Ashtekar variables did not produce a theory from which a satisfactory classical limit could be extracted.

This problem was addressed by using holonomies as the basic variables to be quantized. The holonomy (in this context) is the gauge group element obtained by parallel transporting along a given curve; it records how a reference frame gets rotated as it is carried along that curve. The conjugate variable to the holonomy is a “flux”.
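The idea of a holonomy can be made concrete in a toy setting. In general it is a path-ordered exponential of the connection, but for a connection that is constant along the curve it reduces to an ordinary matrix exponential, which for SU(2) has the closed form cos(θ/2)·I + i·sin(θ/2)·(n·σ). A minimal sketch (the function names are illustrative, not from any LQG codebase):

```python
import math

def su2_holonomy(n, theta):
    """Holonomy exp(i*theta*(n.sigma)/2) for a constant SU(2) connection.

    n: unit 3-vector giving the direction in the Lie algebra
    theta: total angle accumulated by parallel transport along the curve
    Returns a 2x2 complex matrix as nested lists.
    """
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    nx, ny, nz = n
    # cos(theta/2) I + i sin(theta/2) (n.sigma), written out entry by entry
    return [[c + 1j * s * nz,         1j * s * (nx - 1j * ny)],
            [1j * s * (nx + 1j * ny), c - 1j * s * nz]]

def det2(U):
    """Determinant of a 2x2 matrix."""
    return U[0][0] * U[1][1] - U[0][1] * U[1][0]

# A holonomy is a group element: unitary with unit determinant
U = su2_holonomy((0.0, 0.0, 1.0), math.pi / 2)
assert abs(det2(U) - 1) < 1e-12
```

The key point the quantization exploits is that this object lives in the group itself (here SU(2)), rather than in the Lie algebra like the connection does.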

The Hilbert space defined this way would be astronomically huge (even for an infinite dimensional space, there’s infinite and there’s INFINITE!), so we restrict it to the holonomies along the links of a graph, i.e. a collection of nodes joined by links. Think of the graph as defining a sort of “skeleton” in the manifold along which we “feel out” the connection. Diffeomorphism invariance is ensured by treating graphs as identical if they can be smoothly deformed into one another.

The Gauss and diffeomorphism constraints are nicely respected by this construction, and it allows area and volume operators to be defined. Areas and volumes, as eigenvalues of the respective operators, turn out to be quantized. A graph node represents a “grain of space” in Rovelli’s terminology, and two grains of space are adjacent if there is a link in the graph between the corresponding nodes. The spin quantum number assigned to the link between a pair of nodes determines the area of the boundary between the grains; a graph labelled with spins in this way is called a spin network.
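The area spectrum can be written down explicitly: a link carrying spin j contributes an area of 8πγℓ_P²√(j(j+1)), where γ is the Barbero–Immirzi parameter. A short sketch of what this means numerically (the value used for γ here is just illustrative — fixing it is a separate story, often tied to black hole entropy calculations):

```python
import math

PLANCK_LENGTH_SQ = 2.612e-70  # Planck length squared, hbar*G/c^3, in m^2
GAMMA = 0.2375                # Barbero-Immirzi parameter (illustrative value)

def area_eigenvalue(j, gamma=GAMMA, lp2=PLANCK_LENGTH_SQ):
    """Area contributed by one link carrying spin j (j = 1/2, 1, 3/2, ...)."""
    return 8 * math.pi * gamma * lp2 * math.sqrt(j * (j + 1))

def surface_area(spins):
    """Total area of a surface punctured by links carrying the given spins."""
    return sum(area_eigenvalue(j) for j in spins)

# The spectrum is discrete and increasing; the smallest quantum of area
# comes from the lowest non-trivial spin, j = 1/2
spectrum = [area_eigenvalue(j) for j in (0.5, 1.0, 1.5, 2.0)]
assert all(a < b for a, b in zip(spectrum, spectrum[1:]))
```

The point of the sketch is the structure, not the numbers: areas come in discrete steps of order the Planck area, which is why the grains of space are genuinely “atomic”.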

The spin network states form a basis for the Hilbert space, and they can be thought of as quantum versions of three-geometries (i.e. the geometries of three-spaces), with the area and volume operators measuring their physical areas and volumes. A generic state in the Hilbert space is a quantum superposition of three-geometries.

There have been several approaches to “deriving” spin networks and their rules from classical general relativity, and all have ended up at the same place. The focus in the LQG community now seems to be shifting from trying to derive LQG to starting from the LQG rules and seeing where they lead. This is the important task, as it is this which will (hopefully) lead to predictions from the theory. Along the way, it is necessary to set up a framework within which we can do explicit calculations. In particular, calculations involving the dynamics:

For the dynamics, attempts have been made to define a Hamiltonian operator which generates the time evolution. It does this by acting on a given spin network and changing its nodes and links. The Hamiltonian operator is not altogether satisfactory – it contains a number of arbitrary choices (quantization ambiguities). There is a parallel approach to describing the dynamics of quantum gravity, known as the spinfoam approach. Spinfoams are a higher dimensional version of spin networks: spins are assigned to the faces of a two-complex, rather than just to the links of a graph.

The latest approach to dynamics uses spinfoam vertices to compute transition amplitudes. In traditional quantum theory, the goal of the dynamics is to compute a complex amplitude given an initial and a final Hilbert space state. This is done (in the path integral picture) by summing over all the possibilities “in between” which are compatible with those initial and final states. In background-independent quantum gravity, we can’t really adopt this approach because we have no a priori spacetime to give us a notion of what “initial” and “final” mean – i.e. we have no “time” with which to define t=+∞ and t=-∞. Instead, the problem is phrased as “how do we compute an amplitude for any given quantum 3-geometry state Ψ?”. Ψ is thought of as a state representing the three-geometry bounding a region of “spacetime”, and we compute the amplitude by summing over all the spacetime possibilities which are compatible with it.

Being a three-geometry state, Ψ is decomposed in terms of spin networks. The desired amplitude is computed by summing over all spin foams that have these spin networks as their boundary. For a given spin foam in the sum, each vertex of the spin foam contributes to the amplitude. Each vertex is evaluated by building a small spin network around it and applying some simple computational rules, based on group theory and the spin labels carried by the foam. Rules for extracting an amplitude from a vertex bring to mind the Feynman rules for vertices in standard quantum field theory.
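To get a feel for what “an amplitude from pure group theory” means: in the simplest spinfoam model, the Ponzano–Regge model for 3d gravity, the vertex amplitude is just a Wigner {6j} symbol, computed entirely from SU(2) recoupling theory. A sketch using the standard Racah single-sum formula (the 4d models in use, such as EPRL, have more elaborate vertices, but the flavour is the same):

```python
from math import factorial, sqrt

def _tri(a, b, c):
    """Triangle coefficient Delta(a,b,c); None if the triangle
    inequalities fail (in which case the 6j symbol vanishes)."""
    args = (a + b - c, a - b + c, -a + b + c)
    if any(x < 0 or x != int(x) for x in args):
        return None
    num = 1
    for x in args:
        num *= factorial(int(x))
    return num / factorial(int(a + b + c) + 1)

def wigner_6j(j1, j2, j3, j4, j5, j6):
    """Wigner {6j} symbol via the Racah formula (spins may be half-integers)."""
    tris = [_tri(j1, j2, j3), _tri(j1, j5, j6), _tri(j4, j2, j6), _tri(j4, j5, j3)]
    if any(t is None for t in tris):
        return 0.0
    pref = sqrt(tris[0] * tris[1] * tris[2] * tris[3])
    lows = [j1 + j2 + j3, j1 + j5 + j6, j4 + j2 + j6, j4 + j5 + j3]
    highs = [j1 + j2 + j4 + j5, j2 + j3 + j5 + j6, j3 + j1 + j6 + j4]
    total = 0.0
    for t in range(int(max(lows)), int(min(highs)) + 1):
        den = 1
        for x in lows:
            den *= factorial(t - int(x))
        for x in highs:
            den *= factorial(int(x) - t)
        total += (-1) ** t * factorial(t + 1) / den
    return pref * total

# A standard tabulated value: {1 1 1; 1 1 1} = 1/6
assert abs(wigner_6j(1, 1, 1, 1, 1, 1) - 1 / 6) < 1e-12
```

In the Ponzano–Regge picture the six spins label the six edge lengths of a tetrahedron, and the {6j} symbol is that tetrahedron’s contribution to the amplitude – exactly the “simple computational rules based on group theory” referred to above.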

This sum of products of spin foam vertices is generally referred to as the “spinfoam sum”. As with a Feynman perturbation expansion, the full sum is intractable, and approximations are needed to extract numbers. There are several ways to do this, but one way is to choose a boundary state which is peaked around a three-geometry which is “large” compared to the Planck length. If this is done, then the vertex amplitude comes out as the exponential of (i times) the Regge action. The Regge action is precisely what we obtain from a direct discretization of the classical Einstein–Hilbert action (Regge calculus). The significance of this is that it appears possible to encode the Einstein equations in a simple combinatorial model built from some straightforward group theory constructs.
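For readers unfamiliar with Regge calculus: curvature there is concentrated on “hinges” and measured by deficit angles – the amount by which the angles of the simplices meeting at a hinge fail to add up to 2π – and the Regge action is a sum of (hinge area) × (deficit angle). A minimal two-dimensional illustration, where the hinges are vertices (in the 4d case they are triangles):

```python
import math

def apex_angle(a, b, c):
    """Interior angle between sides a and b of a triangle with third
    side c, via the law of cosines."""
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

def deficit_angle(triangles):
    """Deficit angle at a vertex surrounded by the given triangles.
    Each triangle is (a, b, c): a and b meet at the vertex, c is opposite."""
    return 2 * math.pi - sum(apex_angle(a, b, c) for a, b, c in triangles)

# Six unit equilateral triangles tile flat space around a vertex: zero deficit,
# i.e. no curvature
assert abs(deficit_angle([(1, 1, 1)] * 6)) < 1e-9

# Five leave a deficit of pi/3: positive curvature concentrated at the vertex
# (a cone point, as at each vertex of an icosahedron)
assert abs(deficit_angle([(1, 1, 1)] * 5) - math.pi / 3) < 1e-9
```

The remarkable claim in the paragraph above is that this purely combinatorial notion of curvature is what the spinfoam sum reproduces in the large-spin limit.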