In quantum physics, the Heisenberg uncertainty principle states that certain pairs of physical properties, like position and momentum, cannot both be known to arbitrary precision. That is, the more precisely one property is known, the less precisely the other can be known. It is impossible to measure simultaneously both position and velocity of a microscopic particle with any degree of accuracy or certainty. This is not only a statement about the limitations of a researcher's ability to measure particular quantities of a system, following the tenets of logical positivism, it is a statement about the nature of the system itself.
In quantum mechanics, a particle is described by a
wave. The position is where the wave is concentrated and the momentum is the wavelength. The position is uncertain to the degree that the wave is spread out, and the momentum is uncertain to the degree that the wavelength is ill-defined.
The only kind of wave with a definite position is concentrated at one point, and such a wave has an indefinite wavelength. Conversely, the only kind of wave with a definite wavelength is an infinite regular periodic oscillation over all space, which has no definite position. So in quantum mechanics, there are no states that describe a particle with both a definite position and a definite momentum. The more precise the position, the less precise the momentum.
The uncertainty principle can be restated in terms of measurements, which involves
collapse of the wavefunction. When the position is measured, the wavefunction collapses to a narrow bump near the measured value, and the momentum wavefunction becomes spread out. The particle's momentum is left uncertain by an amount inversely proportional to the accuracy of the position measurement. The amount of left-over uncertainty can never be reduced below the limit set by the uncertainty principle, no matter what the measurement process.
This means that the uncertainty principle is related to the
observer effect, with which it is often conflated. The uncertainty principle sets a lower limit to how small the momentum disturbance in an accurate position experiment can be, and vice versa for momentum experiments.
A mathematical statement of the principle is that every quantum state has the property that the
root-mean-square (RMS) deviation of the position from its mean (the standard deviation of the X-distribution):
times the RMS deviation of the momentum from its mean (the standard deviation of P):
can never be smaller than a fixed fraction of
Planck's constant:
Any measurement of the position with accuracy
collapses the quantum state making the standard deviation of the momentum larger than .
Contents[
hide]
1 Historical introduction
2 Uncertainty principle and observer effect
2.1 Heisenberg's microscope
3 Critical reactions
3.1 Einstein's slit
3.2 Einstein's box
3.3 EPR measurements
3.4 Popper's criticism
4 Refinements
4.1 Entropic uncertainty principle
5 Derivations
5.1 Physical interpretation
5.2 Matrix mechanics
5.3 Wave mechanics
5.4 Symplectic geometry
6 Robertson–Schrödinger relation
6.1 Other uncertainty principles
7 Energy-time uncertainty principle
8 Uncertainty theorems in harmonic analysis
8.1 Benedicks's theorem
8.2 Hardy's uncertainty principle
9 Popular culture
10 See also
11 Notes
12 References
13 External links
//

[edit] Historical introduction
Main article:
Introduction to quantum mechanics
Werner Heisenberg formulated the uncertainty principle in Niels Bohr's institute at Copenhagen, while working on the mathematical foundations of quantum mechanics.
In 1925, following pioneering work with
Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad-hoc old quantum theory with modern quantum mechanics. The central assumption was that the classical motion was not precise at the quantum level, and electrons in an atom did not travel on sharply defined orbits. Rather, the motion was smeared out in a strange way: the time Fourier transform only involving those frequencies that could be seen in quantum jumps.
Heisenberg's paper did not admit any unobservable quantities like the exact position of the electron in an orbit at any time; he only allowed the theorist to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going.
The most striking property of Heisenberg's infinite matrices for the position and momentum is that they do not commute. Heisenberg's
canonical commutation relation tells you by how much:
and this result did not have a clear physical interpretation in the beginning.
In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. This was a clear physical interpretation for the non-commutativity, and it laid the foundation for what became known as the
Copenhagen interpretation of quantum mechanics. Heisenberg showed that the commutation relations implies an uncertainty, or in Bohr's language a complementarity.[1] Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known.
One way to understand the complementarity between position and momentum is by
wave-particle duality. If a particle described by a plane wave passes through a narrow slit in a wall like a water-wave passing through a narrow channel, the particle diffracts and its wave comes out in a range of angles. The narrower the slit, the wider the diffracted wave and the greater the uncertainty in momentum afterwards. The laws of diffraction require that the spread in angle Δθ is about λ / d, where d is the slit width and λ is the wavelength. From the de Broglie relation, the size of the slit and the range in momentum of the diffracted wave are related by Heisenberg's rule:
In his celebrated paper (1927), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement
[2], but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture[3] he refined his principle:
(1)
But it was Kennard
[4] in 1927 who first proved the modern inequality:
(2)
where , and σx, σp are the standard deviations of position and momentum. Heisenberg himself only proved relation (2) for the special case of Gaussian states.
[3].

[edit] Uncertainty principle and observer effect
The uncertainty principle is often stated this way:
The measurement of position necessarily disturbs a particle's momentum, and vice versa
This makes the uncertainty principle a kind of
observer effect.
This explanation is not incorrect, and was used by both Heisenberg and Bohr. But they were working within the philosophical framework of
logical positivism. In this way of looking at the world, the true nature of a physical system, inasmuch as it exists, is defined by the answers to the best-possible measurements which can be made in principle. So when they made arguments about unavoidable disturbances in any conceivable measurement, it was obvious to them that this uncertainty was a property of the system, not of the devices.
Today, logical positivism has become unfashionable in many cases, so the explanation of the uncertainty principle in terms of observer effect can be misleading. For one, this explanation makes it seem to the non positivist that the disturbances are not a property of the particle, but a property of the measurement process— the particle secretly does have a definite position and a definite momentum, but the experimental devices we have are not good enough to find out what these are. This interpretation is not compatible with standard quantum mechanics. In quantum mechanics, states which have both definite position and definite momentum at the same time just don't exist.
This explanation is misleading in another way, because sometimes it is a failure to measure the particle that produces the disturbance. For example, if a perfect photographic film contains a small hole, and an incident
photon is not observed, then its momentum becomes uncertain by a large amount. By not observing the photon, we discover indirectly that it went through the hole, revealing the photon's position.
The third way in which the explanation can be misleading is due to the nonlocal nature of a quantum state. Sometimes, two particles can be
entangled, and then a distant measurement can be performed on one of the two. This measurement should not disturb the other particle in any classical sense, but it can sometimes reveal information about the distant particle. This restricts the possible values of position or momentum in strange ways.
Unlike the other examples, a distant measurement will never cause the overall distribution of either position or momentum to change. The distribution only changes if the results of the distant measurement are known. A secret distant measurement has no effect whatsoever on a particle's position or momentum distribution. But the distant measurement of momentum for instance will still reveal new information, which causes the total wavefunction to collapse. This will restrict the distribution of position and momentum, once that classical information has been revealed and transmitted.
For example If two photons are emitted in opposite directions from the decay of
positronium, the momenta of the two photons are opposite. By measuring the momentum of one particle, the momentum of the other is determined, making its momentum distribution sharper, and leaving the position just as indeterminate. But unlike a local measurement, this process can never produce more position uncertainty than what was already there. It is only possible is to restrict the uncertainties in different ways, with different statistical properties, depending on what property of the distant particle you choose to measure. By restricting the uncertainty in p to be very small by a distant measurement, the remaining uncertainty in x stays large. (This example was actually the basis of Albert Einstein's important suggestion of the EPR paradox in 1935.)
This queer mechanism of quantum mechanics is the basis of
quantum cryptography, where the measurement of a value on one of two entangled particles at one location forces, via the uncertainty principle, a property of a distant particle to become indeterminate and hence unmeasurable.
But Heisenberg did not focus on the mathematics of quantum mechanics, he was primarily concerned with establishing that the uncertainty is actually a property of the world — that it is in fact physically impossible to measure the position and momentum of a particle to a precision better than that allowed by quantum mechanics. To do this, he used physical arguments based on the existence of quanta, but not the full quantum mechanical formalism.
This was a surprising prediction of quantum mechanics, and not yet accepted. Many people would have considered it a flaw that there are no states of definite position and momentum. Heisenberg was trying to show this was not a bug, but a feature—a deep, surprising aspect of the universe. To do this, he could not just use the mathematical formalism, because it was the mathematical formalism itself that he was trying to justify.

[edit] Heisenberg's microscope

Heisenberg's gamma-ray microscope for locating an electron (shown in blue). The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma-ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light.
Main article:
Heisenberg's microscope
One way in which Heisenberg originally argued for the uncertainty principle is by using an imaginary microscope as a measuring device.[3] He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.
If the photon has a short
wavelength, and therefore a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision doesn't disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.
If a large
aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon and hence the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of the two resolutions is the other way around.
The trade-offs imply that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower bound, which is up to a small numerical factor equal to
Planck's constant.[5] Heisenberg did not care to formulate the uncertainty principle as an exact bound, and preferred to use it as a heuristic quantitative statement, correct up to small numerical factors.

[edit] Critical reactions
Main article:
Einstein-Bohr debates
The Copenhagen interpretation of quantum mechanics and Heisenberg's Uncertainty Principle were in fact seen as twin targets by detractors who believed in an underlying determinism and realism. Within the Copenhagen interpretation of quantum mechanics, there is no fundamental reality the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.
Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.

[edit] Einstein's slit
The first of Einstein's thought experiments challenging the uncertainty principle went as follows:
Consider a particle passing through a slit of width d. The slit introduces an uncertainty in momentum of approximately h/d because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum.
Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy ΔP the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to h / ΔP, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.
A similar analysis with particles diffracting through multiple slits is given by
Richard Feynman[6].

[edit] Einstein's box
Another of Einstein's thought experiments was designed to challenge the time/energy uncertainty principle. It is very similar to the slit experiment in space, except here the narrow window the particle passes through is in time:
Consider a box filled with light. The box has a shutter that a clock opens and quickly closes at a precise time, and some of the light escapes. We can set the clock so that the time that the energy escapes is known. To measure the amount of energy that leaves, Einstein proposed weighing the box just after the emission. The missing energy
lessens the weight of the box. If the box is mounted on a scale, it is naively possible to adjust the parameters so that the uncertainty principle is violated.
Bohr spent a day considering this setup, but eventually realized that if the energy of the box is precisely known, the time the shutter opens at is uncertain. If the case, scale, and box are in a gravitational field then, in some cases, it is the uncertainty of the position of the clock in the gravitational field that alters the ticking rate. This can introduce the right amount of uncertainty. This was ironic, because it was Einstein himself who first discovered
gravity's effect on clocks.

[edit] EPR measurements
Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolski and Rosen (see
EPR paradox) published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction.[7]
But Einstein came to much more far-reaching conclusions from the same thought experiment. He believed as "natural basic assumption" that a complete description of reality would have to predict the results of experiments from "locally changing deterministic quantities", and therefore would have to include more information than the maximum possible allowed by the uncertainty principle.
In 1964
John Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. (Ironically this is one of the best examples for Karl Popper's philosophy of invalidation of a theory by falsification-experiments; i.e. here Einstein's "basic assumption" became falsified by experiments based on Bells inequalities; for the objections of Karl Popper against the Heisenberg inequality itself, see below.)
While it is possible to assume that quantum mechanical predictions are due to nonlocal hidden variables, and in fact
David Bohm invented such a formulation, this is not a satisfactory resolution for the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and potentially intractable. If the hidden variables are not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption — that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer encounters fundamental obstacles when it tries to factor numbers of approximately 10,000 digits or more; an achievable task in quantum mechanics[8].

[edit] Popper's criticism
Karl Popper criticized Heisenberg's form of the uncertainty principle, that a measurement of position disturbs the momentum, based on the following observation: if a particle with definite momentum passes through a narrow slit, the diffracted wave has some amplitude to go in the original direction of motion. If the momentum of the particle is measured after it goes through the slit, there is always some probability, however small, that the momentum will be the same as it was before.
Popper thinks of these rare events as
falsifications of the uncertainty principle in Heisenberg's original formulation. To preserve the principle, he concludes that Heisenberg's relation does not apply to individual particles or measurements, but only to many identically prepared particles, called ensembles. Popper's criticism applies to nearly all probabilistic theories, since a probabilistic statement requires many measurements to either verify or falsify.
Popper's criticism does not trouble physicists who subscribe to
Copenhagen interpretation. Popper's presumption is that the measurement is revealing some preexisting information about the particle, the momentum, which the particle already possesses. According to Copenhagen interpretation the quantum mechanical description the wavefunction is not a reflection of ignorance about the values of some more fundamental quantities, it is the complete description of the state of the particle. In this philosophical view, Popper's example is not a falsification, since after the particle diffracts through the slit and before the momentum is measured, the wavefunction is changed so that the momentum is still as uncertain as the principle demands.

[edit] Refinements

[edit] Entropic uncertainty principle
Main article:
Hirschman uncertainty
While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III discovered a much stronger formulation of the uncertainty principle[9]. In the inequality of standard deviations, some states, like the wavefunction
have a large standard deviation of position, but are actually a superposition of a small number of very narrow bumps. In this case, the momentum uncertainty is much larger than the standard deviation inequality would suggest. A better inequality uses the
Shannon information content of the distribution, a measure of the number of bits learned when a random variable described by a probability distribution has a certain value.
The interpretation of I is that the number of bits of information an observer acquires when the value of x is given to accuracy ε is equal to Ix + log2(ε). The second part is just the number of bits past the decimal point, the first part is a logarithmic measure of the width of the distribution. For a uniform distribution of width Δx the information content is log2Δx. This quantity can be negative, which means that the distribution is narrower than one unit, so that learning the first few bits past the decimal point gives no information since they are not uncertain.
Taking the logarithm of Heisenberg's formulation of uncertainty in
natural units.
but the lower bound is not precise.
Everett (and Hirschman
[10]) conjectured that for all quantum states:
This was proven by Beckner in 1975
[11].

[edit] Derivations
When linear operators A and B act on a function ψ(x), they don't always commute. A clear example is when operator B multiplies by x, while operator A takes the derivative with respect to x. Then
which in operator language means that
This example is important, because it is very close to the canonical commutation relation of quantum mechanics. There, the position operator multiplies the value of the wavefunction by x, while the corresponding momentum operator differentiates and multiplies by , so that:
It is the nonzero commutator that implies the uncertainty.
For any two operators A and B:
which is a statement of the
Cauchy-Schwarz inequality for the inner product of the two vectors and . The expectation value of the product AB is greater than the magnitude of its imaginary part:
and putting the two inequalities together for
Hermitian operators gives a form of the Robertson-Schrödinger relation:
and the uncertainty principle is a special case.

[edit] Physical interpretation
The inequality above acquires its physical interpretation:
where
is the
mean of observable X in the state ψ and
is the
standard deviation of observable X in the system state ψ.
By substituting for A and for B in the general operator norm inequality, since the imaginary part of the product, the commutator, is unaffected by the shift:
The big side of the inequality is the product of the norms of and , which in quantum mechanics are the standard deviations of A and B. The small side is the norm of the commutator, which for the position and momentum is just .

[edit] Matrix mechanics
In
matrix mechanics, the commutator of the matrices X and P is always nonzero, it is a constant multiple of the identity matrix. This means that it is impossible for a state to have a definite values x for X and p for P, since then XP would be equal to the number xp and would equal PX.
The commutator of two matrices is unchanged when they are shifted by a constant multiple of the identity — for any two real numbers x and p
Given any quantum state ψ, define the number x
to be the expected value of the position, and
to be the expected value of the momentum. The quantities and are only nonzero to the extent that the position and momentum are uncertain, to the extent that the state contains some values of X and P that deviate from the mean. The expected value of the commutator
can only be nonzero if the deviations in X in the state times the deviations in P are large enough.
The size of the typical matrix elements can be estimated by summing the squares over the energy states :
and this is equal to the square of the deviation, matrix elements have a size approximately given by the deviation.
So, to produce the canonical commutation relations, the product of the deviations in any state has to be about .
This heuristic estimate can be made into a precise inequality using the
Cauchy-Schwartz inequality, exactly as before. The inner product of the two vectors in parentheses:
is bounded above by the product of the lengths of each vector:
so, rigorously, for any state:
the real part of a matrix M is , so that the real part of the product of two Hermitian matrices is:
while the imaginary part is
The magnitude of is bigger than the magnitude of its imaginary part, which is the expected value of the imaginary part of the matrix:
Note that the uncertainty product is for the same reason bounded below by the expected value of the anticommutator, which adds a term to the uncertainty relation. The extra term is not as useful for the uncertainty of position and momentum, because it has zero expected value in a gaussian wavepacket, like the ground state of a harmonic oscillator. The anticommutator term is useful for bounding the uncertainty of spin operators though.

[edit] Wave mechanics
See also:
Fourier uncertainty principle
In Schrödinger's wave mechanics, the quantum mechanical wavefunction contains information about both the position and the momentum of the particle. The position of the particle is where the wave is concentrated, while the momentum is the typical wavelength.
The wavelength of a localized wave cannot be determined very well. If the wave extends over a region of size L and the wavelength is approximately λ, the number of cycles in the region is approximately L / λ. The inverse of the wavelength can be changed by about 1 / L without changing the number of cycles in the region by a full unit, and this is approximately the uncertainty in the inverse of the wavelength,
This is an exact counterpart to a well known result in
signal processing — the shorter a pulse in time, the less well defined the frequency. The width of a pulse in frequency space is inversely proportional to the width in time. It is a fundamental result in Fourier analysis, the narrower the peak of a function, the broader the Fourier transform.
Multiplying by h, and identifying ΔP = hΔ(1 / λ), and identifying ΔX = L.
The uncertainty Principle can be seen as a theorem in
Fourier analysis: the standard deviation of the squared absolute value of a function, times the standard deviation of the squared absolute value of its Fourier transform, is at least 1/(16π2) (Folland and Sitaram, Theorem 1.1).
An instructive example is the (unnormalized) gaussian wave-function
The expectation value of X is zero by symmetry, and so the variance is found by averaging X2 over all positions with the weight ψ(x)2, careful to divide by the normalization factor.
The Fourier transform of the Gaussian is the wavefunction in k space, where k is the wavenumber and is related to the momentum by DeBroglie's relation :
The last integral does not depend on p, because there is a continuous change of variables which removes the dependence, and this deformation of the integration path in the complex plane does not pass through any singularities. So up to normalization, the answer is again a Gaussian.
The width of the distribution in k is found in the same way as before, and the answer just flips A to 1/A.
so that for this example
which shows that the uncertainty relation inequality is tight. There are wavefunctions that saturate the bound.

[edit] Symplectic geometry

This section requires expansion.
In mathematical terms,
conjugate variables forms part of a symplectic basis, and the uncertainty principle corresponds to the symplectic form.

[edit] Robertson–Schrödinger relation
Given any two
Hermitian operators A and B, and a system in the state ψ, there are probability distributions for the value of a measurement of A and B, with standard deviations ΔψA and ΔψB. Then
where [A,B] = AB - BA is the
commutator of A and B, {A,B}= AB+BA is the anticommutator, and is the expectation value. This inequality is called the Robertson-Schrödinger relation, and includes the Heisenberg uncertainty principle as a special case. The inequality with the commutator term only was developed in 1930 by Howard Percy Robertson, and Erwin Schrödinger added the anticommutator term a little later.

[edit] Other uncertainty principles
The Robertson Schrödinger relation gives the uncertainty relation for any two observables that do not commute:
There is an uncertainty relation between the position and momentum of an object:
between the energy and position of a particle in a one-dimensional potential V(x):
between angular position and angular momentum of an object with small angular uncertainty:
between two orthogonal components of the
total angular momentum operator of an object:
where i, j, k are distinct and Ji denotes angular momentum along the xi axis.
between the number of electrons in a
superconductor and the phase of its Ginzburg-Landau order parameter[12][13]

[edit] Energy-time uncertainty principle
One well-known uncertainty relation is not an obvious consequence of the Robertson-Schrödinger relation: the energy-time uncertainty principle.
Since energy bears the same relation to time as momentum does to space in
special relativity, it was clear to many early founders, Niels Bohr among them, that the following relation holds:
but it was not obvious what Δt is, because the time at which the particle has a given state is not an operator belonging to the particle, it is a parameter describing the evolution of the system. As
Lev Landau once joked "To violate the time-energy uncertainty relation all I have to do is measure the energy very precisely and then look at my watch!"
Nevertheless, Einstein and Bohr understood the heuristic meaning of the principle. A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must accurately be defined, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy.
For example, in
spectroscopy, excited states have a finite lifetime. By the time-energy uncertainty principle, they do not have a definite energy, and each time they decay the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow decaying states have a narrow linewidth.
The broad linewidth of fast decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used microwave cavities to slow down the decay-rate, to get sharper peaks
[14]. The same linewidth effect also makes it difficult to measure the rest mass of fast decaying particles in particle physics. The faster the particle decays, the less certain is its mass.
One false formulation of the energy-time uncertainty principle says that measuring the energy of a quantum system to an accuracy ΔE requires a time interval Δt > h / ΔE. This formulation is similar to the one alluded to in Landau's joke, and was explicitly invalidated by
Y. Aharonov and D. Bohm in 1961. The time Δt in the uncertainty relation is the time during which the system exists unperturbed, not the time during which the experimental equipment is turned on.
In 1936, Dirac offered a precise definition and derivation of the time-energy uncertainty relation, in a relativistic quantum theory of "events". In this formulation, particles followed a trajectory in space time, and each particle's trajectory was parametrized independently by a different proper time. The
many-times formulation of quantum mechanics is mathematically equivalent to the standard formulations, but it was in a form more suited for relativistic generalization. It was the inspiration for Shin-Ichiro Tomonaga's covariant perturbation theory for quantum electrodynamics.
But a better-known, more widely-used formulation of the time-energy uncertainty principle was given only in 1945 by
L. I. Mandelshtam and I. E. Tamm, as follows.[15] For a quantum system in a non-stationary state and an observable B represented by a self-adjoint operator , the following formula holds:
where ΔψE is the standard deviation of the energy operator in the state , ΔψB stands for the standard deviation of the operator and is the expectation value of in that state. Although, the second factor in the left-hand side has dimension of time, it is different from the time parameter that enters
Schrödinger equation. It is a lifetime of the state with respect to the observable B. In other words, this is the time after which the expectation value changes appreciably.

[edit] Uncertainty theorems in harmonic analysis
In the context of
harmonic analysis, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform; to wit, the following inequality holds
Other purely mathematical formulations of uncertainty exist between a function ƒ and its Fourier transform. A variety of such results can be found in (
Havin & Jöricke 1994) or (Folland & Sitaram 1997); for a short survey, see (Sitaram 2001).

[edit] Benedicks's theorem
Benedicks's theorem (
Benedicks 1985) intuitively says that the set of points where ƒ is non-zero and the set of points where is nonzero cannot both be small. Specifically, it is impossible for a function ƒ in L2(R) and its Fourier transform to both be supported on sets of finite Lebesgue measure. In signal processing, this result is well-known: a function cannot be both time limited and band limited.

[edit] Hardy's uncertainty principle
The mathematician
G. H. Hardy (Hardy 1933) formulated the following uncertainty principle: it is not possible for ƒ and to both be "very rapidly decreasing." Specifically, if ƒ is in L2(R), then one has
unless ƒ = 0. Note that this result is sharp, since the Fourier transform of ƒ0(x) = e−πx2 is equal to e−πξ2: if e2π xξ  is replaced in the integral above by ea xξ , for any a <>

Can you hear me?According to the Boshongo people of central Africa, in the beginning, there was only darkness, water, and the great god Bumba. One day Bumba, in pain from a stomach ache, vomited up the sun. The sun dried up some of the water, leaving land. Still in pain, Bumba vomited up the moon, the stars, and then some animals. The leopard, the crocodile, the turtle, and finally, man. This creation myth, like many others, tries to answer the questions we all ask. Why are we here? Where did we come from? The answer generally given was that humans were of comparatively recent origin, because it must have been obvious, even at early times, that the human race was improving in knowledge and technology. So it can't have been around that long, or it would have progressed even more. For example, according to


Bishop Usher, the Book of Genesis placed the creation of the world at 9 in the morning on October the 27th, 4,004 BC. On the other hand, the physical surroundings, like mountains and rivers, change very little in a human lifetime. They were therefore thought to be a constant background, and either to have existed forever as an empty landscape, or to have been created at the same time as the humans. Not everyone, however, was happy with the idea that the universe had a beginning. For example, Aristotle, the most famous of the Greek philosophers, believed the universe had existed forever. Something eternal is more perfect than something created. He suggested the reason we see progress was that floods, or other natural disasters, had repeatedly set civilization back to the beginning. The motivation for believing in an eternal universe was the desire to avoid invoking divine intervention to create the universe and set it going. Conversely, those who believed the universe had a beginning, used it as an argument for the existence of God as the first cause, or prime mover, of the universe. If one believed that the universe had a beginning, the obvious question was what happened before the beginning? What was God doing before He made the world? Was He preparing Hell for people who asked such questions? The problem of whether or not the universe had a beginning was a great concern to the German philosopher, Immanuel Kant. He felt there were logical contradictions, or antimonies, either way. If the universe had a beginning, why did it wait an infinite time before it began? He called that the thesis. On the other hand, if the universe had existed for ever, why did it take an infinite time to reach the present stage? He called that the antithesis. Both the thesis and the antithesis depended on Kant's assumption, along with almost everyone else, that time was Absolute. That is to say, it went from the infinite past to the infinite future, independently of any universe that might or might not exist in this background. This is still the picture in the mind of many scientists today. However in 1915, Einstein introduced his revolutionary General Theory of Relativity. In this, space and time were no longer Absolute, no longer a fixed background to events. Instead, they were dynamical quantities that were shaped by the matter and energy in the universe. They were defined only within the universe, so it made no sense to talk of a time before the universe began. It would be like asking for a point south of the South Pole. It is not defined. If the universe was essentially unchanging in time, as was generally assumed before the 1920s, there would be no reason that time should not be defined arbitrarily far back. Any so-called beginning of the universe would be artificial, in the sense that one could extend the history back to earlier times. Thus it might be that the universe was created last year, but with all the memories and physical evidence, to look like it was much older. This raises deep philosophical questions about the meaning of existence. I shall deal with these by adopting what is called, the positivist approach. In this, the idea is that we interpret the input from our senses in terms of a model we make of the world. One can not ask whether the model represents reality, only whether it works. A model is a good model if first it interprets a wide range of observations, in terms of a simple and elegant model. And second, if the model makes definite predictions that can be tested and possibly falsified by observation. In terms of the positivist approach, one can compare two models of the universe. One in which the universe was created last year and one in which the universe existed much longer. The Model in which the universe existed for longer than a year can explain things like identical twins that have a common cause more than a year ago. On the other hand, the model in which the universe was created last year cannot explain such events. So the first model is better. One can not ask whether the universe really existed before a year ago or just appeared to. In the positivist approach, they are the same. In an unchanging universe, there would be no natural starting point. The situation changed radically however, when Edwin Hubble began to make observations with the hundred inch telescope on Mount Wilson, in the 1920s. Hubble found that stars are not uniformly distributed throughout space, but are gathered together in vast collections called galaxies. By measuring the light from galaxies, Hubble could determine their velocities. He was expecting that as many galaxies would be moving towards us as were moving away. This is what one would have in a universe that was unchanging with time. But to his surprise, Hubble found that nearly all the galaxies were moving away from us. Moreover, the further galaxies were from us, the faster they were moving away. The universe was not unchanging with time as everyone had thought previously. It was expanding. The distance between distant galaxies was increasing with time. The expansion of the universe was one of the most important intellectual discoveries of the 20th century, or of any century. It transformed the debate about whether the universe had a beginning. If galaxies are moving apart now, they must have been closer together in the past. If their speed had been constant, they would all have been on top of one another about 15 billion years ago. Was this the beginning of the universe? Many scientists were still unhappy with the universe having a beginning because it seemed to imply that physics broke down. One would have to invoke an outside agency, which for convenience, one can call God, to determine how the universe began. They therefore advanced theories in which the universe was expanding at the present time, but didn't have a beginning. One was the Steady State theory, proposed by Bondi, Gold, and Hoyle in 1948.In the Steady State theory, as galaxies moved apart, the idea was that new galaxies would form from matter that was supposed to be continually being created throughout space. The universe would have existed for ever and would have looked the same at all times. This last property had the great virtue, from a positivist point of view, of being a definite prediction that could be tested by observation. The Cambridge radio astronomy group, under Martin Ryle, did a survey of weak radio sources in the early 1960s. These were distributed fairly uniformly across the sky, indicating that most of the sources lay outside our galaxy. The weaker sources would be further away, on average. The Steady State theory predicted the shape of the graph of the number of sources against source strength. But the observations showed more faint sources than predicted, indicating that the density sources were higher in the past. This was contrary to the basic assumption of the Steady State theory, that everything was constant in time. For this, and other reasons, the Steady State theory was abandoned. Another attempt to avoid the universe having a beginning was the suggestion that there was a previous contracting phase, but because of rotation and local irregularities, the matter would not all fall to the same point. Instead, different parts of the matter would miss each other, and the universe would expand again with the density remaining finite. Two Russians, Lifshitz and Khalatnikov, actually claimed to have proved, that a general contraction without exact symmetry would always lead to a bounce with the density remaining finite. This result was very convenient for Marxist Leninist dialectical materialism, because it avoided awkward questions about the creation of the universe. It therefore became an article of faith for Soviet scientists. When Lifshitz and Khalatnikov published their claim, I was a 21 year old research student looking for something to complete my PhD thesis. I didn't believe their so-called proof, and set out with Roger Penrose to develop new mathematical techniques to study the question. We showed that the universe couldn't bounce. If Einstein's General Theory of Relativity is correct, there will be a singularity, a point of infinite density and spacetime curvature, where time has a beginning. Observational evidence to confirm the idea that the universe had a very dense beginning came in October 1965, a few months after my first singularity result, with the discovery of a faint background of microwaves throughout space. These microwaves are the same as those in your microwave oven, but very much less powerful. They would heat your pizza only to minus 271 point 3 degrees centigrade, not much good for defrosting the pizza, let alone cooking it. You can actually observe these microwaves yourself. Set your television to an empty channel. A few percent of the snow you see on the screen will be caused by this background of microwaves. The only reasonable interpretation of the background is that it is radiation left over from an early very hot and dense state. As the universe expanded, the radiation would have cooled until it is just the faint remnant we observe today. Although the singularity theorems of Penrose and myself, predicted that the universe had a beginning, they didn't say how it had begun. The equations of General Relativity would break down at the singularity. Thus Einstein's theory cannot predict how the universe will begin, but only how it will evolve once it has begun. There are two attitudes one can take to the results of Penrose and myself. One is to that God chose how the universe began for reasons we could not understand. This was the view of Pope John Paul. At a conference on cosmology in the Vatican, the Pope told the delegates that it was OK to study the universe after it began, but they should not inquire into the beginning itself, because that was the moment of creation, and the work of God. I was glad he didn't realize I had presented a paper at the conference suggesting how the universe began. I didn't fancy the thought of being handed over to the Inquisition, like Galileo. The other interpretation of our results, which is favored by most scientists, is that it indicates that the General Theory of Relativity breaks down in the very strong gravitational fields in the early universe. It has to be replaced by a more complete theory. One would expect this anyway, because General Relativity does not take account of the small scale structure of matter, which is governed by quantum theory. This does not matter normally, because the scale of the universe is enormous compared to the microscopic scales of quantum theory. But when the universe is the Planck size, a billion trillion trillionth of a centimeter, the two scales are the same, and quantum theory has to be taken into account. In order to understand the Origin of the universe, we need to combine the General Theory of Relativity with quantum theory. The best way of doing so seems to be to use Feynman's idea of a sum over histories. Richard Feynman was a colorful character, who played the bongo drums in a strip joint in Pasadena, and was a brilliant physicist at the California Institute of Technology. He proposed that a system got from a state A, to a state B, by every possible path or history. Each path or history has a certain amplitude or intensity, and the probability of the system going from A- to B, is given by adding up the amplitudes for each path. There will be a history in which the moon is made of blue cheese, but the amplitude is low, which is bad news for mice. The probability for a state of the universe at the present time is given by adding up the amplitudes for all the histories that end with that state. But how did the histories start? This is the Origin question in another guise. Does it require a Creator to decree how the universe began? Or is the initial state of the universe, determined by a law of science? In fact, this question would arise even if the histories of the universe went back to the infinite past. But it is more immediate if the universe began only 15 billion years ago. The problem of what happens at the beginning of time is a bit like the question of what happened at the edge of the world, when people thought the world was flat. Is the world a flat plate with the sea pouring over the edge? I have tested this experimentally. I have been round the world, and I have not fallen off. As we all know, the problem of what happens at the edge of the world was solved when people realized that the world was not a flat plate, but a curved surface. Time however, seemed to be different. It appeared to be separate from space, and to be like a model railway track. If it had a beginning, there would have to be someone to set the trains going. Einstein's General Theory of Relativity unified time and space as spacetime, but time was still different from space and was like a corridor, which either had a beginning and end, or went on forever. However, when one combines General Relativity with Quantum Theory, Jim Hartle and I realized that time can behave like another direction in space under extreme conditions. This means one can get rid of the problem of time having a beginning, in a similar way in which we got rid of the edge of the world. Suppose the beginning of the universe was like the South Pole of the earth, with degrees of latitude playing the role of time. The universe would start as a point at the South Pole. As one moves north, the circles of constant latitude, representing the size of the universe, would expand. To ask what happened before the beginning of the universe would become a meaningless question, because there is nothing south of the South Pole. Time, as measured in degrees of latitude, would have a beginning at the South Pole, but the South Pole is much like any other point, at least so I have been told. I have been to Antarctica, but not to the South Pole. The same laws of Nature hold at the South Pole as in other places. This would remove the age-old objection to the universe having a beginning; that it would be a place where the normal laws broke down. The beginning of the universe would be governed by the laws of science. The picture Jim Hartle and I developed of the spontaneous quantum creation of the universe would be a bit like the formation of bubbles of steam in boiling water. The idea is that the most probable histories of the universe would be like the surfaces of the bubbles. Many small bubbles would appear, and then disappear again. These would correspond to mini universes that would expand but would collapse again while still of microscopic size. They are possible alternative universes but they are not of much interest since they do not last long enough to develop galaxies and stars, let alone intelligent life. A few of the little bubbles, however, grow to a certain size at which they are safe from recollapse. They will continue to expand at an ever increasing rate, and will form the bubbles we see. They will correspond to universes that would start off expanding at an ever increasing rate. This is called inflation, like the way prices go up every year. The world record for inflation was in Germany after the First World War. Prices rose by a factor of ten million in a period of 18 months. But that was nothing compared to inflation in the early universe. The universe expanded by a factor of million trillion trillion in a tiny fraction of a second. Unlike inflation in prices, inflation in the early universe was a very good thing. It produced a very large and uniform universe, just as we observe. However, it would not be completely uniform. In the sum over histories, histories that are very slightly irregular will have almost as high probabilities as the completely uniform and regular history. The theory therefore predicts that the early universe is likely to be slightly non-uniform. These irregularities would produce small variations in the intensity of the microwave background from different directions. The microwave background has been observed by the Map satellite, and was found to have exactly the kind of variations predicted. So we know we are on the right lines. The irregularities in the early universe will mean that some regions will have slightly higher density than others. The gravitational attraction of the extra density will slow the expansion of the region, and can eventually cause the region to collapse to form galaxies and stars. So look well at the map of the microwave sky. It is the blue print for all the structure in the universe. We are the product of quantum fluctuations in the very early universe. God really does play dice. We have made tremendous progress in cosmology in the last hundred years. The General Theory of Relativity and the discovery of the expansion of the universe shattered the old picture of an ever existing and ever lasting universe. Instead, general relativity predicted that the universe, and time itself, would begin in the big bang. It also predicted that time would come to an end in black holes. The discovery of the cosmic microwave background and observations of black holes support these conclusions. This is a profound change in our picture of the universe and of reality itself. Although the General Theory of Relativity predicted that the universe must have come from a period of high curvature in the past, it could not predict how the universe would emerge from the big bang. Thus general relativity on its own cannot answer the central question in cosmology: Why is the universe the way it is? However, if general relativity is combined with quantum theory, it may be possible to predict how the universe would start. It would initially expand at an ever increasing rate.During this so called inflationary period, the marriage of the two theories predicted that small fluctuations would develop and lead to the formation of galaxies, stars, and all the other structure in the universe. This is confirmed by observations of small non uniformities in the cosmic microwave background, with exactly the predicted properties. So it seems we are on our way to understanding the origin of the universe, though much more work will be needed. A new window on the very early universe will be opened when we can detect gravitational waves by accurately measuring the distances between space craft. Gravitational waves propagate freely to us from earliest times, unimpeded by any intervening material. By contrast, light is scattered many times by free electrons. The scattering goes on until the electrons freeze out, after 300,000 years. Despite having had some great successes, not everything is solved. We do not yet have a good theoretical understanding of the observations that the expansion of the universe is accelerating again, after a long period of slowing down. Without such an understanding, we cannot be sure of the future of the universe. Will it continue to expand forever? Is inflation a law of Nature? Or will the universe eventually collapse again? New observational results and theoretical advances are coming in rapidly. Cosmology is a very exciting and active subject. We are getting close to answering the age old questions. Why are we here? Where did we come from?

Thank you for listening to me.