Transcript and Presenter's Notes

Title: EEL 4930


1
EEL 4930 6 / 5930 5, Spring '06: Physical Limits
of Computing
http://www.eng.fsu.edu/mpf
  • Slides for a course taught by Michael P. Frank in
    the Department of Electrical & Computer
    Engineering

2
Physical Limits of Computing: Course Outline
Currently I am working on writing up a set of
course notes based on this outline, intended to
someday evolve into a textbook
  • I. Course Introduction
  • Moore's Law vs. Modern Physics
  • II. Foundations
  • Required Background Material in Computing &
    Physics
  • III. Fundamentals
  • The Deep Relationships between Physics and
    Computation
  • IV. Core Principles
  • The Two Revolutionary Paradigms of Physical
    Computation
  • V. Technologies
  • Present and Future Physical Mechanisms for the
    Practical Realization of Information Processing
  • VI. Conclusion

3
Part II. Foundations
  • This first part of the course quickly reviews
    some key background knowledge that you will need
    to be familiar with in order to follow the later
    material.
  • You may have seen some of this material before.
  • Part II is divided into two chapters:
  • Chapter II.A. The Theory of Information and
    Computation
  • Chapter II.B. Required Physics Background

4
Chapter II.B. Required Physics Background
  • This chapter covers "All the Physics You Need to
    Know," for purposes of this course
  • II.B.1. Physical Quantities, Units, and
    Constants
  • II.B.2. Modern Formulations of Mechanics
  • II.B.3. Basics of Relativity Theory
  • II.B.4. Basics of Quantum Mechanics
  • II.B.5. Thermodynamics & Statistical Mechanics
  • II.B.6. Solid-State Physics

5
Section II.B.4: Basics of Quantum Mechanics
  • Systems, Hilbert Spaces, Measurement,
    Observables, Time Evolution, Entanglement,
    Quantum Information

6
Section II.B.4: Basics of Quantum Mechanics
  • We break this down into subsections as follows:
  • (a) Systems, subsystems, states, descriptions
  • (b) State vectors and Hilbert spaces
  • (c) Measurement and Observables
  • (d) Unitary time evolution and wave equation
  • (e) Compound systems and Entanglement
  • (f) The nature of Quantum Information

7
Subsection II.B.4.a: Systems, Subsystems, States,
Descriptions
  • Merge this section into the discussion of forms
    in the Information module?

8
Systems and Subsystems
  • Intuitively speaking, a physical system consists
    of a region of spacetime & all the entities (e.g.
    particles & fields) contained within it.
  • The universe (over all time) is a physical system.
  • Transistors, computers, people are also physical
    systems.
  • One physical system A is a subsystem of another
    system B (write A⊆B) iff A's region is
    completely contained within B's.
  • Later, we will make these definitions a bit more
    general, formal, & precise.

9
Closed vs. Open Systems
  • A subsystem is closed to the extent that no
    particles, information, energy, or entropy (terms
    to be defined) enter or leave the system.
  • The universe is (presumably) a closed system.
  • Subsystems of the universe may be almost closed
  • Often in physics we consider statements about
    closed systems.
  • These statements may often be perfectly true only
    in a perfectly closed system.
  • However, they will often also be approximately
    true in any nearly closed system (in a
    well-defined way)

10
Concrete vs. Abstract Systems
  • Usually, when reasoning about or interacting with
    a system, an entity (e.g. a physicist) has in
    mind a description of the system.
  • A description that contains every property of the
    system is an exact or concrete description.
  • That system (to the entity) is a concrete system.
  • Other descriptions are abstract descriptions.
  • The system (as considered by that entity) is an
    abstract system, to some degree.
  • We nearly always deal with abstract systems!
  • Based on the descriptions that are available to
    us.

11
States & State Spaces
  • A possible state S of an abstract system A
    (described by a description D) is any concrete
    system C that is consistent with D.
  • I.e., the system in question could be fully
    fleshed out by the more detailed description of
    C.
  • The state space of the abstract system A is the
    set of all possible states of A.
  • So far, all the concepts we've discussed can be
    applied to either classical or quantum physics
  • Now, let's get to the uniquely quantum stuff

12
System Descriptions in Classical and Quantum
Physics
  • Classical physics:
  • A concrete instance of a system could be
    completely described by giving a single state S
    out of the set Σ of all possible states.
  • Statistical mechanics:
  • A description can be to give a probability
    distribution function p: Σ → [0,1], stating only
    that the system is in state S with probability
    p(S).
  • Note this is more abstract than giving the exact
    state.
  • Constraint on p: The probabilities sum to 1.
  • Quantum mechanics:
  • A description can be a complex-valued
    wavefunction ψ: Σ → C, where |ψ(S)| ≤ 1, implying
    that the system is in state S with probability
    |ψ(S)|².
  • This is more concrete than a probability
    distribution, but less concrete than an exact
    classical state.
  • Constraint: Squared magnitudes sum to 1.
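As a concrete illustration (not from the slides), here is a minimal numpy sketch contrasting a statistical description (a probability distribution over states) with a quantum description (a complex wavefunction over states); the particular numbers are made up.

```python
import numpy as np

# Hypothetical 4-state system: a statistical description gives
# probabilities p(S) summing to 1; a quantum description gives complex
# amplitudes psi(S) whose squared magnitudes sum to 1.
p = np.array([0.5, 0.25, 0.25, 0.0])          # statistical mixture
assert np.isclose(p.sum(), 1.0)

psi = np.array([1, 1j, 0, 0]) / np.sqrt(2)    # quantum wavefunction
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)

# Probabilities implied by the wavefunction: |psi(S)|^2
print(np.abs(psi) ** 2)                        # [0.5, 0.5, 0.0, 0.0]
```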

[Figures: a classical description picks one state S; a statistical description is a probability distribution p(S) with values in [0,1]; a quantum description is a complex-valued wavefunction ψ(S), with values on or inside the unit circle of the complex plane (1, i, −1, −i marked).]
13
Distinguishability of States
  • Classical & quantum mechanics differ crucially
    regarding the distinguishability of states.
  • In classical mechanics, there is no issue:
  • Any two states s, t are either the same (s = t) or
    different (s ≠ t), and that's all there is to it.
  • In quantum mechanics (i.e. in reality):
  • There are pairs of states s ≠ t that are
    mathematically distinct, but not 100% physically
    distinguishable.
  • They are slightly different but still
    "overlapping"
  • Such states cannot be reliably distinguished by
    any kind of measurement, no matter how precise!
  • But you can know the real state (with high
    probability), if you prepared the system to be in
    a certain state to begin with.
  • We know that these slightly different,
    overlapping states really exist, because we can
    see their different statistical properties given
    many identically-prepared systems (instances of
    the form).

14
Subsection II.B.4.b: State Vectors and Hilbert
Spaces
  • Vector Spaces, Complex Numbers, Hilbert Spaces,
    Ket Notation, Distinguishability

15
State Vectors & Hilbert Space
  • Let S be any maximal set of distinguishable
    possible states s, t, … of an abstract system A.
  • Maximal in the sense that no possible state that
    is not in S is perfectly distinguishable from all
    members of S.
  • Now, identify the elements of S with unit-length,
    mutually-orthogonal (basis) vectors in an
    abstract complex vector space H.
  • This is called the system's Hilbert space.
  • Postulate 1: Any possible state ψ of system A can
    be identified with a unit-length vector in the
    Hilbert space H.

16
(Abstract) Vector Spaces
  • A concept from abstract linear algebra.
  • Other concepts in abstract algebra: groups,
    rings, fields, algebras
  • A vector space, in the abstract, is any set of
    objects that can be combined like vectors, i.e.:
  • You can add them
  • Addition is associative & commutative
  • Identity law holds for addition, with a unique null
    vector 0
  • You can multiply them by scalars (including −1)
  • Associative, commutative, and distributive laws
    hold
  • Note: A vector space has no inherent basis (set
    of axes)!
  • The vectors themselves are considered to be
    fundamental, geometric objects, rather than being
    just lists of coordinates
  • E.g., in the below example, vector v in a
    2-dimensional real vector space can be expressed
    as a linear combination of any given pair of
    basis vectors i, j.

17
Hilbert spaces
  • A Hilbert space H is a vector space in which the
    scalars are complex numbers, with an inner
    product ("dot product") operation · : H×H → C
  • See Hirvensalo p. 107 for defn. of inner product
  • x·y = (y·x)*   (* = complex conjugate)
  • x·x ≥ 0
  • x·x = 0 if and only if x = 0
  • x·y is linear, under scalar multiplication
    and vector addition within either x or y

[Figure: component picture of the inner product; another notation often used is the bracket ⟨x|y⟩.]
18
Review: The Complex Number System
  • It is the extension of the real number system via
    closure under exponentiation.
  • (Complex) conjugate:
  • c* = (a + bi)* ≡ (a − bi)
  • Magnitude or absolute value:
  • |c|² = cc* = a² + b²

[Figure: the complex plane, with real axis a, imaginary axis b, the imaginary unit i, the points 1, −1, −i, and a point c = a + bi.]
19
Review: Complex Exponentiation
  • Raising any real number x > 0 to the ith power
    yields a unit-magnitude complex number, by
    Euler's relation: xⁱ = e^(iθ) = cos θ + i sin θ,
    where θ = ln x.
  • Note:
  • e^(πi/2) = i
  • e^(πi) = −1
  • e^(3πi/2) = −i
  • e^(2πi) = e⁰ = 1
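A quick numerical check of Euler's relation and the special values above; this is a sketch of my own in Python/numpy, with x = 2 chosen arbitrarily.

```python
import numpy as np

x = 2.0
theta = np.log(x)                          # theta = ln x
# Euler's relation: x**i = e^(i*theta) = cos(theta) + i*sin(theta)
lhs = x ** 1j
rhs = np.cos(theta) + 1j * np.sin(theta)
print(np.isclose(lhs, rhs), abs(lhs))      # True, 1.0 (unit magnitude)

# The special values noted on the slide:
for k in (1, 2, 3, 4):
    print(np.exp(1j * k * np.pi / 2))      # ~i, -1, -i, 1
```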

[Figure: the unit circle in the complex plane, showing e^(iθ) at angle θ, with 1, i, −1, −i marked.]
If we want, we can say that angles are
logarithmic quantities, identify the angle 1 rad
with the logarithmic unit Log e, and write xⁱ =
Exp[iL] = Cos[L] + i Sin[L], where L = Log x, and
the Cos and Sin functions now take
dimensioned arguments (indefinite log quantities).
20
Vector Representation of States
  • Let S = {s0, s1, …} be any maximal set of
    mutually distinguishable states, indexed by i.
  • A basis vector vi identified with the ith such
    state can be represented as a list of numbers:
  •        s0  s1  s2  …  si−1  si  si+1 …
  • vi = (  0,  0,  0,  …,  0,   1,   0, …)
  • Arbitrary vectors v in the Hilbert space H can
    then be defined by linear combinations of the vi:
    v = Σi ci vi
  • And the inner product is given by:
    x·y = Σi xi* yi
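A minimal numpy sketch of these definitions (my own illustration, with made-up amplitudes): basis vectors as coordinate lists, an arbitrary vector as a linear combination, and the conjugating inner product.

```python
import numpy as np

# Basis vectors v_i for a 4-state system: a 1 in slot i, 0 elsewhere.
v = np.eye(4, dtype=complex)

# An arbitrary vector is a linear combination of the v_i:
c = np.array([0.5, 0.5j, -0.5, 0.5j])
x = sum(c[i] * v[i] for i in range(4))

# Inner product: sum of conjugated components of x times components of y.
y = v[1]
inner = np.vdot(x, y)        # np.vdot conjugates its first argument
print(inner)                 # -0.5j  (the conjugate of c[1])
```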

21
Dirac's "Ket" Notation
  • Notice that in the expression for the inner
    product x·y = Σi xi* yi:
  • It's the same as the matrix product of x, as a
    conjugated row vector, times y, as a normal
    column vector.
  • This leads to the definition, for state s, of:
  • The "bra" ⟨s| means the row vector (c0*, c1*, …)
  • The "ket" |s⟩ means the column vector (c0, c1, …)ᵀ
  • The adjoint operator † takes any matrix M to its
    conjugate transpose M† ≡ (M*)ᵀ, so ⟨s| can be
    defined as |s⟩†, and we have x·y = ⟨x|y⟩.
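A short illustrative sketch (not from the slides) of the bra as the adjoint of the ket, using column vectors in numpy; the amplitudes are arbitrary.

```python
import numpy as np

ket_s = np.array([[1], [1j]]) / np.sqrt(2)    # |s> as a column vector
bra_s = ket_s.conj().T                        # <s| = adjoint (conjugate transpose)

ket_t = np.array([[1], [0]], dtype=complex)
print(bra_s @ ket_t)            # <s|t> as a 1x1 matrix product: [[0.707...]]
print(np.vdot(ket_s, ket_t))    # the same amplitude via numpy's conjugating dot
```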

[Figure: the "bracket" ⟨x|y⟩ (inner product) is the "bra" ⟨x| times the "ket" |y⟩.]
22
Distinguishability of States, More Formally
  • State vectors s and t are (perfectly) mutually
    distinguishable, or orthogonal (write s⊥t), iff
    ⟨s|t⟩ = 0.
  • Their inner product has zero magnitude.
  • State vectors s and t are perfectly
    indistinguishable, or equivalent (write s≡t), iff
    |⟨s|t⟩| = 1.
  • Their inner product has unit magnitude.
  • Otherwise, s and t are non-orthogonal to each
    other, and also inequivalent.
  • We say they are not perfectly distinguishable.
  • In such a case we say, the amplitude of state s,
    given state t, is ⟨s|t⟩.
  • Note: Amplitudes are complex numbers!

23
Subsection II.B.4.c: Measurement and Observables
  • Measurement Postulate, Schrödinger's Cat,
    Operators, Eigenvalues, Eigenvectors, Observables

24
Probability and Measurement
  • A "yes/no" measurement is an interaction
    designed to determine whether a given system
    is in a certain state s.
  • The amplitude of state s, given the actual state
    t of the system, determines the probability
    of getting a "yes" from the measurement.
  • Postulate 2: For a system prepared in state t,
    any measurement that asks "is it in state s?"
    will say "yes" with probability P(s|t) = |⟨s|t⟩|²
  • After the measurement, the state is changed, in a
    way we will discuss later.
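A tiny numerical sketch of Postulate 2 (my own example states, a 2-state system in numpy): the yes-probability is the squared magnitude of the amplitude ⟨s|t⟩.

```python
import numpy as np

# Prepared state t and question state s (hypothetical 2-state example).
t = np.array([1, 1j]) / np.sqrt(2)
s = np.array([1, 0], dtype=complex)

amplitude = np.vdot(s, t)            # <s|t>
prob_yes = abs(amplitude) ** 2       # Postulate 2: P(s|t) = |<s|t>|^2
print(prob_yes)                      # 0.5
```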

25
A Simple Example
  • Suppose abstract system S has a set of only 4
    distinguishable possible states,
  • we'll call them s0, s1, s2, and s3, with
    corresponding ket vectors |s0⟩, |s1⟩, |s2⟩, and
    |s3⟩.
  • Another possible state is then the unit vector
  • Which is equal to the column matrix
  • If measured to see if it is in state s0, we
    have a 50% chance of getting a "yes"!

26
Schrödinger's Cat
  • A thought experiment that illustrates the
    unintuitive nature of quantum states.
  • An apparatus is set up to kill a cat if an atom
    decays in a certain time (50% prob.).
  • The system enters the quantum superposition
    state |live cat⟩ + |dead cat⟩.
  • We can't say that the cat is "really" either
    alive or dead until we open the box and observe
    it.
  • Even then, the true state can validly be
    considered to be |we see live cat⟩ + |we see
    dead cat⟩.
  • Outwardly-spreading entanglement → "Many-worlds"
    picture

27
Linear Operators
  • Given V, W: vector spaces,
  • Definition: A linear operator A from V to W is a
    linear function A: V→W.
  • An operator on V is an operator from V to itself,
    A: V→V.
  • Given bases for V and W, we can represent linear
    operators as matrices.
  • An Hermitian operator H on V is a linear operator
    that is self-adjoint (H = H†).
  • Its diagonal matrix elements are all real.

28
Eigenvalues & Eigenvectors
  • Vector v is called an eigenvector of linear
    operator A iff A just multiplies v by a scalar a,
    i.e. Av = av
  • "eigen" (German) means "characteristic"
  • The eigenvalue a corresponding to eigenvector v
    is just the scalar that A multiplies v by
  • The eigenvalue a is called degenerate if it is
    shared by at least two independent eigenvectors
    (ones that aren't just scalar multiples of each
    other).
  • The multiplicity of a is the number of
    independent eigenvectors that share it.
  • The eigenvalues of any Hermitian operator H are
    all real, and its eigenvectors (for distinct
    eigenvalues) are mutually orthogonal.
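A small numpy sketch (my own example matrix) checking these facts for a Hermitian operator: real eigenvalues, orthonormal eigenvectors, and Av = av.

```python
import numpy as np

# A small Hermitian matrix (equal to its own conjugate transpose).
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)

vals, vecs = np.linalg.eigh(H)       # eigh is specialized for Hermitian operators
print(vals)                          # real eigenvalues: [1. 4.]
# Columns of vecs are mutually orthogonal (here, orthonormal) eigenvectors:
print(np.allclose(vecs.conj().T @ vecs, np.eye(2)))       # True
# Check Av = a v for the first eigenpair:
print(np.allclose(H @ vecs[:, 0], vals[0] * vecs[:, 0]))  # True
```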

29
Observables
  • A Hermitian operator H on the vector space V is
    called an observable if there is an orthonormal
    (all unit-length, and mutually orthogonal) subset
    of its eigenvectors that forms a basis for V.
  • Postulate 3: Every measurable physical property
    of a system can be described by a corresponding
    observable H. The different possible outcomes of
    the measurement correspond to different
    eigenvalues of H.
  • The measurement can also be thought of as a set
    of yes-no tests that compare the state with each
    of the observable's normalized eigenvectors.

30
Subsection II.B.4.d: Time Evolution and the
Schrödinger Wave Equation
  • Wavefunctions, Unitary Transformations, Time
    Evolution Operator, Schrödinger Equation

31
Wavefunctions
  • Given any set S ⊆ H of system states,
  • whether all mutually distinguishable, or not,
  • Any quantum state vector v in the system's
    Hilbert space can be translated to a
    corresponding wavefunction ψ: S → C,
  • This gives, for each state s ∈ S, the amplitude
    ψ(s) of that state, given that the actual
    system state is v.
  • If s corresponds to state vector |s⟩, then ψ(s) =
    ⟨s|v⟩.
  • If S includes a basis set, ψ also uniquely
    determines v.
  • The function ψ is called a "wavefunction"
    because:
  • As we'll see, its dynamics takes the form of a
    wave equation when S ranges over a space of
    positional states.

32
Quantum Dynamics
  • A dynamics is a law that determines how states
    change over time.
  • E.g., the Euler-Lagrange equations or Hamilton's
    equations, given a suitable Lagrangian or
    Hamiltonian.
  • We have seen that quantum states are unit vectors
    in a Hilbert space.
  • How then should such states evolve?
  • Let us begin by supposing that the dynamical law
    for quantum time-evolution should have the
    following properties:
  • Linear: It should involve a linear
    transformation of the vector space.
  • This is the simplest type of dynamics, and it
    appears sufficient
  • Norm-conserving: The sum of squared component
    magnitudes should be constant
  • Since squared magnitude = probability, and total
    probability must always be 1
  • Invertible: The dynamics should be one-to-one
  • Reflects the apparent reversibility of physics
  • As evidenced by the success of Hamiltonian
    dynamics and the 2nd law of thermodynamics.
  • Continuous: The state should only change
    infinitesimally in infinitesimal times.
  • This is another apparent property of the world
  • Time-independent: The law should not change over
    time
  • Apparently true; also, it's kind of what we mean
    by a "law" to begin with

33
Unitary Transformations
  • A matrix (or linear operator) U is called unitary
    iff its inverse equals its adjoint, that is, U⁻¹
    = U†
  • Some nice properties of unitary transformations:
  • They are invertible and bijective (one-to-one and
    onto).
  • The set of row vectors comprises an orthonormal
    basis.
  • Ditto for the set of column vectors.
  • Preserves vector length: |Uψ| = |ψ|
  • Therefore, also preserves total probability over
    all states
  • Implements a change of basis,
  • from one orthonormal basis to another.
  • Can be thought of as a kind of generalized
    rotation of ψ in Hilbert space.
  • Mathematical fact: Any norm-preserving,
    invertible linear transformation of a complex
    vector space is unitary!
  • Thus our dynamics taking us from t1 to t2 must be
    a unitary operator
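A brief numpy check of these properties (my own example, a simple rotation treated as a unitary on a 2-dimensional complex space): U†U = I and vector length is preserved.

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

print(np.allclose(U.conj().T @ U, np.eye(2)))          # U^dagger = U^-1
psi = np.array([0.6, 0.8j])
print(np.linalg.norm(psi), np.linalg.norm(U @ psi))    # both 1.0: length preserved
```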

34
The Time Evolution Operator
  • Since the dynamics is supposed to be
    time-independent, U(t1→t2) can't depend on the
    absolute value of t1 or t2, but can only depend
    on Δt = t2 − t1.
  • Thus we can write U(t1→t2) = U(Δt).
  • Also, note that U(2Δt) = U(Δt)U(Δt) = U(Δt)²,
  • And more generally U(nΔt) = U(Δt)ⁿ.
  • Thus, all Δt values can likewise be viewed as
    multiples of some infinitesimal time increment
    dt.
  • Therefore, U(Δt) = Ud^Δt for some infinitesimal
    (near-identity) base unitary Ud, and with Δt
    expressed in dt units.
  • We'll see it's convenient to express the base
    unitary Ud itself as the exponential of a matrix,
    Ud = e^M
  • This will let us write U(Δt) = e^(MΔt).
  • Note that Δt can even remain dimensioned here, so
    long as we arrange for M to have dimensions of
    inverse time.

35
The Exponential of a Matrix
  • Recall the Taylor series for the exponential
    function, e^x, for x ∈ R:
    e^x = Σn≥0 xⁿ/n!
  • We can use this equation to also define the
    exponential of a matrix M in terms of powers of
    M: e^M = Σn≥0 Mⁿ/n!
  • A useful theorem: Any eigenvector v of M having
    eigenvalue a is also an eigenvector of e^M, with
    the eigenvalue e^a
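A numerical sketch (my own, using scipy's matrix exponential) comparing the truncated Taylor series with expm, and checking the eigenvector theorem above; the matrix M is arbitrary.

```python
import math
import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Taylor-series definition e^M = sum_n M^n / n!, truncated here at n = 30.
approx = sum(np.linalg.matrix_power(M, n) / math.factorial(n) for n in range(30))
print(np.allclose(approx, expm(M)))       # True

# Theorem: an eigenvector v of M with eigenvalue a is also an eigenvector
# of e^M, with eigenvalue e^a.
vals, vecs = np.linalg.eig(M)
v, a = vecs[:, 0], vals[0]
print(np.allclose(expm(M) @ v, np.exp(a) * v))    # True
```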

36
Time-Evolution Unitaries and Hermitian
Hamiltonian Operators
  • Let v be an eigenvector of the matrix M we just
    discussed, where Ud = e^M.
  • Thus, v is also an eigenvector of Ud.
  • We can ask, what is v's eigenvalue u under Ud?
  • Since Ud is length-preserving, u must be some
    complex unit, u = e^(iφ), so that we will have |u|
    = 1.
  • Otherwise, we wouldn't have |Ud v| = |u||v| = |v|.
  • Thus, v's eigenvalue under M must be iφ for some
    real number φ.
  • By the theorem on the previous slide, this is the
    only way that its eigenvalue under Ud can have
    the form e^(iφ)!
  • In other words, all of M's eigenvalues are
    imaginary.
  • Thus, we can write M = iH, where H is a matrix
    with all real eigenvalues
  • i.e., H is an Hermitian matrix.
  • So, we can write U(Δt) = e^(iHΔt).
  • Note that H is an observable whose value is
    time-independent, since its eigenvectors remain
    unchanged under U(Δt).
  • We thus identify H's eigenvalue (after converting
    its frequency units to energy units by
    multiplying by ℏ) as measuring the system's total
    energy.
  • Energy is the conserved quantity associated with
    time-independence
  • Thus, H is the operator equivalent of the
    Hamiltonian energy function!
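A short sketch (my own example Hamiltonian and time step, following the slide's sign convention U(Δt) = e^(iHΔt)): exponentiating i times a Hermitian matrix gives a unitary whose action on an energy eigenvector is just a phase rotation.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5j],
              [-0.5j, 2.0]])          # Hermitian "Hamiltonian"
dt = 0.1
U = expm(1j * H * dt)                  # time-evolution unitary, slide convention

print(np.allclose(U.conj().T @ U, np.eye(2)))             # unitary, as required
# H's eigenvectors are unchanged by U (only their phases rotate),
# so the energy observable is conserved under time evolution.
E, vecs = np.linalg.eigh(H)
v = vecs[:, 0]
print(np.allclose(U @ v, np.exp(1j * E[0] * dt) * v))     # True
```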

37
Time Evolution
  • Postulate 4: (Closed) systems evolve (change
    state) over time Δt via the unitary
    transformation U(Δt) given by Exp[iHΔt], where H
    is the Hamiltonian energy observable.
  • Note that since U is linear, a small-factor
    change in the amplitude of any particular state
    at t1 necessarily leads to only a correspondingly
    small change in the amplitude of any state at
    t2!
  • Chaotic sensitivity to initial conditions
    requires an ensemble of initial states that are
    different enough to be distinguishable (in the
    sense we defined)
  • Indistinguishable initial states never beget
    distinguishable outcomes
  • → True chaotic/analog computing is physically
    impossible!

(U⁻¹ = U†)
38
Deriving the General Form of the Schrödinger
Equation
  • Let Δt = t, and let's differentiate U(t) = e^(iHt)
    with respect to time:
    dU(t)/dt = d(e^(iHt))/dt = iH e^(iHt) = iH U(t)
  • Now, apply the first and last formulas in the
    above equation to an initial state ψ(0):
    (dU(t)/dt) ψ(0) = iH U(t) ψ(0)
  • But now, U(t)ψ(0) = ψ(t), so we have:
    dψ(t)/dt = iH ψ(t)
  • This key differential equation (the Schrödinger
    equation) directly tells us how any given
    instantaneous wavefunction ψ(t) evolves over time
    t.
  • We can also write it more concisely in operator
    form: ∂t ψ = iHψ, or ∂t ψ = −iHψ
    (sign is arbitrary)
39
Schrödinger's Wave Equation
  • How do we turn this generic operator equation
    into something that tells us about particles?
  • Start w. classical Hamiltonian energy
    equation: H = E_K + E_P  (K = kinetic, P =
    potential)
  • Express (nonrelativistic) E_K in terms of momentum
    p: E_K = ½mv² = p²/2m
  • Substitute H = iℏ∂t and p = −iℏ∂x
  • Apply to wavefunction ψ over position states x:
    iℏ ∂t ψ(x,t) = −(ℏ²/2m) ∂x² ψ(x,t) + E_P(x) ψ(x,t)

(Where ∂a ≡ ∂/∂a)
40
Consistency with Hamilton's Equations
  • Recall the generic Hamilton's equations:
    dx/dt = ∂H/∂p,  dp/dt = −∂H/∂x
  • Are our quantum definitions of the Hamiltonian
    energy and momentum operators consistent with
    them?
  • Almost, it seems. But the below derivation is
    most likely illegal anyway

Oops, the sign is wrong!
41
Multidimensional Form
  • For a system with states given by (x,t), where t
    is a global time coordinate, and x describes N/3
    particles (p0, …, p_{N/3−1}) with masses (m0, …,
    m_{N/3−1}) in a 3-D Euclidean space, where each
    p_i is located at coordinates (x_{3i}, x_{3i+1},
    x_{3i+2}), and where particles interact with
    potential energy function E_P(x,t), the
    wavefunction ψ(x,t) obeys the following
    (2nd-order, linear, partial) differential
    equation:
    iℏ ∂t ψ(x,t) = −Σ_{i=0}^{N−1} (ℏ²/2m_{⌊i/3⌋}) ∂²ψ(x,t)/∂x_i² + E_P(x,t) ψ(x,t)

42
Features of the wave equation
  • Particles' momentum state p is encoded by their
    wavelength λ, as per p = h/λ
  • The energy of a state is given by the frequency
    f of rotation of the wavefunction in
    the complex plane: E = hf.
  • By simulating this simple equation, one can
    observe basic quantum phenomena, such as
  • Interference fringes
  • Tunneling of wave packets through potential
    energy barriers
  • Demo of SCH simulator

43
Gaussian wave packet moving to the right;
array of small sharp potential-energy barriers
44
Initial reflection/refraction of wave packet
45
A little later
46
Aimed a little higher
47
A faster-moving particle
48
Relativistic Wave Equations
  • Unfortunately, despite its many practical
    successes, the literal Schrödinger's equation is
    not relativistically invariant.
  • That is, it does not retain the same form in a
    boosted frame.
  • However, solutions to the free Schrödinger's
    equation (where V = 0) can be given a
    self-consistent relativistic interpretation.
  • Let p = −iℏ∂x be relativistic momentum,
  • Let m = iℏ∂τ be rest mass in the particle's frame
    of ref.
  • Taking the derivative along an isospatial, i.e.,
    the proper time τ axis
  • Let E = iℏ∂t be relativistic energy of the
    particle
  • Then, E² = p² + m² is easily shown to be true for
    plane-wave solutions
  • Lines of constant phase angle are the isochrones
    of the moving particle.
  • And everything transforms properly to a new
    reference frame.
  • In fact, the solutions to the free Schrödinger's
    equation closely correspond to solutions to the
    relativistic Klein-Gordon equation (∂µ∂^µ +
    m²) f(x^µ) = 0.
  • This describes a free, massive scalar particle.

49
Relativistic Spacetime Wavefunction Examples
  • Here is an electron wavefunction (m0 = 9.1×10⁻²⁸
    g) over spacetime for various velocity
    eigenstates, approaching the speed of light
  • Scale of these images:
  • The width of the physical space is 16 pm (< 1/5 H
    atom)
  • The vertical time interval shown is 54 zs (very
    short!)
  • The arrow shows the world-line of a point moving
    at the given velocity
  • The lines of constant color (phase) are the
    isochrones of the electron's rest frame
  • Notice that the angled arrows cross fewer
    isochrones than the straight one!
  • This is time dilation, seen directly in the
    electron wavefunction over spacetime!

[Figure panels, time t vertical and position x horizontal: β = 0, 0.50, 0.90, 0.98, with associated factors 1, 0.87, 0.44, 0.20.]
50
Normal Frame View vs. Mixed Frame View
Rightward-moving electron wavefunction as
seen in the fixed standard frame (x, t)
[Figure panels for β = 0, 0.60, 0.90, 0.98, with the (t', x' = 0) world-lines marked; axes t (vertical) and x (horizontal).]
Fixed electron wavefunction from a
leftward-moving mixed frame (x, t')
Something is still not quite right in the below
[Figure panels with the (t, x' = 0) world-lines marked; axes t' and x.]
Electron moving through its internal time (t')
Electron moving mostly thru our space (x)
51
Subsection II.B.4.e: Compound Quantum Systems and
Entanglement
  • A Few Basic Concepts

52
Compound Quantum Systems
  • Let C = AB be a system composed of two separate
    subsystems A, B with vector spaces A, B with bases
    {|ai⟩}, {|bj⟩}.
  • The state space of C is a vector space C = A⊗B
    given by the tensor product of spaces A and B,
    with basis states labeled as |ai bj⟩ ≡ |ai⟩⊗|bj⟩.
  • We'll formally define tensor products later on
  • E.g., if A has state ψa = ca0|a0⟩ + ca1|a1⟩,
    while B has state ψb = cb0|b0⟩ + cb1|b1⟩,
    then C has state ψc = ψa⊗ψb = ca0cb0|a0b0⟩ +
    ca0cb1|a0b1⟩ + ca1cb0|a1b0⟩ + ca1cb1|a1b1⟩

(Use distributive law)
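A minimal numpy sketch of this construction (my own amplitudes): np.kron combines subsystem state vectors into the compound state, listing amplitudes in the |a0b0⟩, |a0b1⟩, |a1b0⟩, |a1b1⟩ order.

```python
import numpy as np

# Subsystem states in the bases {|a0>,|a1>} and {|b0>,|b1>}.
psi_a = np.array([0.6, 0.8], dtype=complex)            # c_a0|a0> + c_a1|a1>
psi_b = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# The compound state lives in the tensor-product space.
psi_c = np.kron(psi_a, psi_b)
print(psi_c)
print(np.isclose(np.linalg.norm(psi_c), 1.0))          # still a unit vector
```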
53
Entanglement
  • If the state of a compound system C can be
    expressed as a tensor product of states of two
    independent subsystems A and B, ψc = ψa⊗ψb,
  • then we say that A and B are not entangled, and
    they have definite individual states.
  • E.g. |00⟩+|01⟩+|10⟩+|11⟩ = (|0⟩+|1⟩)⊗(|0⟩+|1⟩)
  • Otherwise, A and B are entangled (quantumly
    correlated); their states are not independent.
  • E.g. |00⟩+|11⟩

(State has entropy 0 but mutual information 2
bits!)
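One way to test for entanglement of a two-qubit pure state (a sketch of my own, not from the slides) is to reshape its amplitudes into a 2×2 matrix and check the rank: rank 1 means the state factors into a tensor product, higher rank means the subsystems are entangled.

```python
import numpy as np

def schmidt_rank(state_2qubit):
    """Rank of the 2x2 amplitude matrix: 1 = product state, >1 = entangled."""
    amps = state_2qubit.reshape(2, 2)      # rows: A basis states, cols: B
    return np.linalg.matrix_rank(amps)

product = np.array([1, 1, 1, 1], dtype=complex) / 2          # (|0>+|1>)(|0>+|1>)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # |00> + |11>

print(schmidt_rank(product))   # 1 -> not entangled
print(schmidt_rank(bell))      # 2 -> entangled
```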
54
Size of Compound State Spaces
  • Note that a system composed of many separate
    subsystems has a very large state space.
  • Say it is composed of N subsystems, each with k
    basis states:
  • The compound system has k^N basis states!
  • Many possible states of the compound system will
    have nonzero amplitude in all these k^N basis
    states!
  • In such states, all the distinguishable basis
    states are (simultaneously) possible outcomes
  • each with some corresponding probability
  • This illustrates the "many worlds" nature of
    quantum mechanics.
  • And the enormous number of possible worlds
    involved.

55
After a Measurement?
  • After a system or subsystem is measured from
    outside, its state appears to collapse to exactly
    match the measured outcome:
  • the amplitudes of all states perfectly
    distinguishable from states consistent w. that
    outcome drop to zero
  • states consistent with the measured outcome can be
    considered to be renormalized so that their
    probs. sum to 1
  • This "collapse" appears nonunitary (& nonlocal)
  • However, this behavior is now explicable as the
    expected consensus phenomenon that would be
    experienced even by entities within a closed,
    perfectly unitarily-evolving world (Everett,
    Zurek).

56
Pointer States
  • For a given system interacting with a given
    environment,
  • The system-environment interactions can be
    considered measurements of a certain observable
    of the system by the environment, and vice-versa.
  • For each observable, there are certain basis
    states that are characteristic of that
    observable.
  • These are just the eigenstates of the observable.
  • A pointer state of a system is an eigenstate of
    the system-environment interaction observable.
  • The pointer states are the inherently stable
    states.

57
Key Points to Remember About Quantum Mechanics
  • An abstractly-specified system may have many
    possible states; not all pairs are
    distinguishable.
  • A quantum state/vector/wavefunction ψ assigns a
    complex-valued amplitude ψ(s) to each state s.
  • The probability of state s is |ψ(s)|², the square
    of ψ(s)'s length in the complex plane.
  • Quantum states evolve over time via unitary
    (invertible, length-preserving) transformations.

58
Subsection II.B.4.f: The Nature of Quantum
Information
  • Generalizing classical information theory
    concepts to fit quantum reality

59
Density Operators
  • For any given state |ψ⟩, the probabilities of all
    the basis states si are determined by an
    Hermitian operator or matrix ρ (called the
    density matrix): ρ = |ψ⟩⟨ψ|
  • Note that the diagonal elements ρi,i are just the
    probabilities of the basis states i.
  • The off-diagonal elements are called
    "coherences."
  • They describe the quantum entanglements that
    exist between basis states.
  • The density matrix describes the state |ψ⟩
    exactly!
  • It (redundantly) expresses all of the quantum
    info. in |ψ⟩.
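A tiny sketch (my own example state) of building the density matrix of a pure state in numpy and reading off its diagonal probabilities and off-diagonal coherences.

```python
import numpy as np

psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # |psi>
rho = np.outer(psi, psi.conj())                        # rho = |psi><psi|

print(np.allclose(rho, rho.conj().T))    # Hermitian
print(np.real(np.diag(rho)))             # diagonal = basis-state probabilities [0.5 0.5]
print(rho[0, 1])                         # an off-diagonal "coherence": -0.5j
```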

60
Mixed States
  • Suppose the only thing one knows about the true
    state of a system is that it is chosen from a
    statistical ensemble or mixture of state vectors
    vi (called pure states), each with a derived
    density matrix ρi, and a probability Pi.
  • In such a situation, in which one's knowledge
    about the true state is expressed as a probability
    distribution over pure states, we say the system
    is in a mixed state.
  • Such a situation turns out to be completely
    described, for all physical purposes, by simply
    the expectation value (weighted average) of the
    vi's density matrices: ρ = Σi Pi ρi
  • Note: Even if there were uncountably many vi
    going into the calculation, the situation remains
    fully described by O(n²) complex numbers, where n
    is the number of basis states!
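A short illustration (my own ensemble, chosen arbitrarily) of forming a mixed-state density matrix as the probability-weighted average of pure-state density matrices.

```python
import numpy as np

# Ensemble: |0> with probability 0.5 and (|0>+|1>)/sqrt(2) with probability 0.5.
v1 = np.array([1, 0], dtype=complex)
v2 = np.array([1, 1], dtype=complex) / np.sqrt(2)
P = [0.5, 0.5]

# Mixed-state density matrix: expectation value of the pure-state matrices.
rho = sum(p * np.outer(v, v.conj()) for p, v in zip(P, [v1, v2]))
print(rho)                                    # only n^2 = 4 numbers are needed
print(np.isclose(np.trace(rho).real, 1.0))    # probabilities still sum to 1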

61
Von Neumann Entropy
  • Suppose our probability distribution over states
    comes from the diagonal elements of some density
    matrix ?.
  • But, we will generally also have additional
    information about the state hidden in the
    coherences.
  • The off-diagonal elements of the density matrix.
  • The Shannon entropy of the distribution along the
    diagonal will generally depend on the basis used
    to index the matrix.
  • However, any density matrix can be (unitarily)
    rotated into another basis in which it is
    perfectly diagonal!
  • This means, all its off-diagonal elements are
    zero.
  • The Shannon entropy of the diagonal probability
    distribution is always minimized in the diagonal
    basis, and so this minimum is selected as being
    the true (basis-independent) entropy of the mixed
    quantum state ρ.
  • It is called the von Neumann entropy.

62
V.N. entropy, more formally
  • The von Neumann entropy is S(ρ) = −Tr(ρ ln ρ)
  • The trace Tr M just means the sum of M's diagonal
    elements.
  • The ln of a matrix M just denotes the inverse
    function to e^M. See the logm function in
    Matlab
  • The exponential e^M of a matrix M is defined via
    the Taylor-series expansion Σi≥0 Mⁱ/i!
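A small numpy sketch of the von Neumann entropy (my own helper, computed in bits by using log₂ of ρ's eigenvalues rather than the matrix ln directly): a pure state has zero entropy, a maximally mixed qubit has 1 bit.

```python
import numpy as np

def von_neumann_entropy_bits(rho):
    """S(rho) = -Tr(rho log2 rho), computed from rho's eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # 0 log 0 -> 0
    return float(-np.sum(evals * np.log2(evals)))

pure = np.outer([1, 1], [1, 1]) / 2          # (|0>+|1>)/sqrt(2) as a density matrix
mixed = np.eye(2) / 2                        # maximally mixed qubit
print(von_neumann_entropy_bits(pure))        # ~0.0
print(von_neumann_entropy_bits(mixed))       # 1.0
```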

(With log₂ this is a Shannon entropy in bits; with k ln it is a Boltzmann physical entropy.)
63
Quantum Information & Subsystems
  • A density matrix for a particular subsystem may
    be obtained by "tracing out" the other
    subsystems.
  • Means, summing over state indices for all systems
    not selected.
  • This process discards information about any
    quantum correlations that may be present between
    the subsystems!
  • Entropies of the density matrices so obtained
    will generally sum to > that of the original
    system. (Even if the original state was pure!)
  • Keeping this in mind, we may make these
    definitions:
  • The unconditioned, reduced, or marginal quantum
    entropy S(A) of subsystem A is the entropy of
    the reduced density matrix ρA.
  • The conditioned quantum entropy S(A|B) =
    S(AB) − S(B).
  • Note this may be negative! (In contrast to the
    classical case.)
  • The quantum mutual information I(A:B) =
    S(A) + S(B) − S(AB).
  • As in the classical case, this measures the
    amount of quantum information that is shared
    between the subsystems
  • Each subsystem "knows" this much information
    about the other.

64
Tensors and Index Notation
  • For our purposes, a tensor is just a generalized
    matrix that may have more than one row and/or
    column index.
  • We can also define a tensor recursively, as a
    number or a matrix of tensors.
  • Tensor signature: An (r,c) tensor has r row
    indices and c column indices.
  • Convention: Row indices are shown as subscripts,
    and column indices as superscripts.
  • Tensor product: An (l,k) tensor T times an (n,m)
    tensor U is an (l+n, k+m) tensor V formed from all
    products of an element of T times an element of
    U
  • Tensor trace: The trace of an (r,c) tensor T with
    respect to index k (where 1 ≤ k ≤ r,c) is given
    by contracting (summing over) the kth row index
    together with the kth column index

Example: a (2,2) tensor T in which all 4 indices
take on values from the set {0,1}
(I is the set of legal values of indices r_k and
c_k)
65
Quantum Information Example
  • Consider the state vAB = |00⟩+|11⟩ of compound
    system AB.
  • Let ρAB = |v⟩⟨v|.
  • Note that the reduced density matrices ρA & ρB are
    fully classical
  • Let's look at the quantum entropies:
  • The joint entropy S(AB) = S(ρAB) = 0 bits.
  • Because vAB is a pure state.
  • The unconditioned entropy of subsystem A is S(A)
    = S(ρA) = 1 bit.
  • The entropy of A conditioned on B is S(A|B) =
    S(AB) − S(B) = −1 bit!
  • The mutual information I(A:B) = S(A) + S(B) −
    S(AB) = 2 bits!
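A numpy sketch (my own, not from the slides) reproducing these numbers: build ρAB for the normalized state (|00⟩+|11⟩)/√2, take partial traces via index contraction, and compute the entropies in bits.

```python
import numpy as np

def entropy_bits(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

v = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |00> + |11>, normalized
rho_AB = np.outer(v, v.conj())

# Reduced density matrices via partial trace (indices ordered a, b, a', b').
r = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', r)   # trace out B
rho_B = np.einsum('abad->bd', r)   # trace out A

S_AB, S_A, S_B = entropy_bits(rho_AB), entropy_bits(rho_A), entropy_bits(rho_B)
print(S_AB)                # 0.0 bits (pure joint state)
print(S_A, S_B)            # 1.0 1.0 bits
print(S_AB - S_B)          # S(A|B) = -1 bit
print(S_A + S_B - S_AB)    # I(A:B) = 2 bits
```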

[Matrix shown over the basis |00⟩, |01⟩, |10⟩, |11⟩]
66
Quantum vs. Classical Mutual Info.
  • 2 classical bit-systems have a mutual information
    of at most one bit,
  • Occurs if they are perfectly correlated, e.g.,
    {00, 11}
  • Each bit considered by itself appears to have 1
    bit of entropy.
  • But taken together, there is really only 1 bit
    of entropy shared between them
  • A measurement of either extracts that one bit of
    entropy,
  • Leaves it in the form of 1 bit of incompressible
    information (to the measurer).
  • The real joint entropy is 1 bit less than the
    apparent total entropy.
  • Thus, the mutual information is 1 bit.
  • But, 2 quantum bit-systems (qubits) can have a
    mutual info. of two bits!
  • Occurs in maximally entangled states, such as
    |00⟩+|11⟩.
  • Again, each qubit considered by itself appears to
    have 1 bit of entropy.
  • But taken together, there is no entropy in this
    pure state.
  • A measurement of either qubit leaves us with no
    entropy, rather than 1 bit!
  • If done carefully; see next slide.
  • The real joint entropy is thus 2 bits less than
    the apparent total entropy.
  • Thus the mutual information is (by definition) 2
    bits.
  • Both of the apparent bits of entropy vanish if
    either qubit is measured.
  • Used in a communication tech. called quantum
    "superdense coding."
  • 1 qubit's worth of prior entanglement between two
    parties can be used to pass 2 bits of classical
    information between them using only 1 qubit!

67
Why the Difference?
  • Scenario: Entity A hasn't yet measured B and C,
    which (A knows) are initially correlated with
    each other, quantumly or classically
  • A has measured B and is now correlated with both
    B and C
  • A can use his new knowledge to uncompute
    (compress away) the bits from both B and C,
    restoring them to a standard state

[Figure: order of operations on A, B, C, shown for the classical and quantum cases.]
Knowing he is in state |0⟩+|1⟩, A can unitarily
rotate himself back to state |0⟩. Look ma, no
entropy!
A, being in a mixed state, still holds a bit of
information that is either unknown (external
view) or incompressible (A's internal view), and
thus is entropy, and can never go away (by the
2nd law of thermo.).
68
Simulating the Schroedinger Wave Equation
  • A Perfectly Reversible Discrete Numerical
    Simulation Technique

69
Simulating Wave Mechanics
  • The basic problem situation:
  • Given:
  • A (possibly complex) initial wavefunction
    ψ(x, t₀) in an N-dimensional position basis,
    and
  • a (possibly complex and time-varying) potential
    energy function V(x, t),
  • a time t after (or before) t₀,
  • Compute:
  • the wavefunction ψ(x, t) at that time
  • Many practical physics applications...

70
The Problem with the Problem
  • An efficient technique (when possible):
  • Convert V to the corresponding Hamiltonian H.
  • Find the energy eigenstates of H.
  • Project ψ onto the eigenstate basis.
  • Multiply each component by its complex phase
    factor e^(iEt) for the corresponding energy
    eigenvalue E.
  • Project back onto the position basis.
  • Problem:
  • It may be intractable to find the eigenstates!
  • We resort to numerical methods...

71
History of Reversible Schrödinger Sim.
See http://www.cise.ufl.edu/mpf/sch
  • Technique discovered by Ed Fredkin and student
    William Barton at MIT in 1975.
  • Subsequently proved by Feynman to exactly
    conserve a certain probability measure:
  • Pt = Rt² + It−1 It+1
  • 1-D simulations in C/Xlib written by Frank at MIT
    in 1996. Good behavior observed.
  • 1- & 2-D simulations in Java, and proof of
    stability by Motter at UF in 2000.
  • User-friendly Java GUI by Holz at UF, 2002.

(R = real, I = imag., t = time step index)
72
Difference Equations
  • Consider any system with state x that evolves
    according to a diff. eq. that is 1st-order in
    time: ẋ = f(x)
  • Discretize time to finite scale Δt, and use a
    difference equation instead: x(t + Δt) = x(t) +
    Δt·f(x(t))
  • Problem: Behavior not always numerically stable.
  • Errors can accumulate and grow exponentially.

73
Centered Difference Equations
  • Discretize derivatives in a symmetric fashion:
    ẋ(t) ≈ [x(t + Δt) − x(t − Δt)] / 2Δt
  • Leads to update rules like: x(t + Δt) = x(t −
    Δt) + 2Δt·f(x(t))
  • Problem: States at odd- vs. even-numbered time
    steps not constrained to stay close to each other!

[Figure: a chain of states x1, x2, x3, x4, in which each update g = 2Δt·f couples a state only to the state two steps earlier.]
74
Centered Schrödinger Equation
  • Schrödinger's equation for 1 particle in 1-D:
    iℏ ∂t ψ(x,t) = −(ℏ²/2m) ∂x² ψ(x,t) + V(x) ψ(x,t)
  • Replace time (& also space) derivatives with
    centered differences.
  • Centered difference equation has real part at odd
    times that depends only on imaginary part at even
    times, & vice-versa.
  • Drift not an issue - real & imaginary parts
    represent different state components!
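Below is a minimal sketch of this idea of my own devising (not the course's code): it alternates updates of the real and imaginary parts of ψ on a 1-D grid, in units with ℏ = 1, with periodic boundaries via np.roll and made-up grid, mass, and time-step values. It follows the spirit of the centered scheme in a simple staggered form rather than reproducing the original algorithm exactly.

```python
import numpy as np

# Grid and (arbitrary, illustrative) parameters; V = 0 for a free particle.
N, dx, dt, m = 200, 0.1, 0.0005, 1.0
x = np.arange(N) * dx
V = np.zeros(N)

def apply_H(f):
    # H f = -(1/2m) d^2 f/dx^2 + V f, with a centered second difference.
    lap = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
    return -lap / (2 * m) + V * f

# Gaussian wave packet with some momentum, split into real/imaginary parts.
psi0 = np.exp(-(x - 10) ** 2) * np.exp(1j * 5 * x)
psi0 /= np.linalg.norm(psi0)
R, I = psi0.real.copy(), psi0.imag.copy()

# i dpsi/dt = H psi  =>  dR/dt = H I  and  dI/dt = -H R, updated alternately.
for step in range(1000):
    R = R + dt * apply_H(I)       # real part from the imaginary part
    I = I - dt * apply_H(R)       # then imaginary part from the real part

print(np.sum(R**2 + I**2))        # total probability stays near 1
```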

[Figure: alternating updates in which the real parts R1, R3, … at odd steps are coupled to the imaginary parts I2, I4, … at even steps.]
75
Proof of Stability
  • Technique is proved perfectly numerically stable
    & convergent assuming V is 0 and Δx²/Δt > ℏ/m
    (an angular velocity)
  • Elements of proof:
  • Lax-Richtmyer equivalence: convergence ⇔ stability.
  • Analyze amplitudes of Fourier-transformed basis
  • This is sufficient due to Parseval's relation
  • Use theorem (cf. Strikwerda) equating stability
    to certain conditions on the roots of an
    amplification polynomial Φ(g,θ), which are
    satisfied by our rule.
  • Empirically, technique looks perfectly stable
    even for more complex potential energy funcs.

76
Phenomena Observed in Model
  • Perfect reversibility
  • Wave packet momentum
  • Conservation of probability mass
  • Harmonic oscillator
  • Tunneling/reflection at potential energy barriers
  • Interference fringes
  • Diffraction

77
Interesting Features of this Model
  • Can be implemented perfectly reversibly, with
    zero asymptotic spacetime overhead
  • Every last bit is accounted for!
  • As a result, algorithm can run adiabatically,
    with power dissipation approaching zero
  • Modulo leakage & frictional losses
  • Can map it to a unitary quantum algorithm
  • Direct mapping:
  • Classical reversible ops only, no quantum speedup
  • Indirect (implicit) mapping:
  • Simulate p particles on k^d lattice sites using pd
    lg k qubits
  • Time per update step is order pd lg k instead of
    k^(pd)