Non-orthogonal regressors: concepts and consequences

Transcript and Presenter's Notes

1
Non-orthogonal regressors: concepts and consequences
2
overview
  • Problem of non-orthogonal regressors
  • Concepts: orthogonality and uncorrelatedness
  • SPM (1st level)
  • covariance matrix
  • detrending
  • how to deal with correlated regressors
  • Example

3
design matrix
[Figure: the design matrix; columns = regressors, rows = scan number]
  • Each column in your design matrix represents 1) events of interest or 2) a measure that may confound your results. Column = regressor
  • The optimal linear combination of all these
    columns attempts to explain as much variance in
    your dependent variable (the BOLD signal) as
    possible

4
[Figure: GLM for a single voxel's time series, plotted against time: BOLD signal = β1·x1 + β2·x2 + e, where x1 and x2 are regressors and e is the residual error]
Source: SPM course 2010, Stephan. http://www.fil.ion.ucl.ac.uk/spm/course/slides10-zurich/
5
  • The betas are estimated on a voxel-by-voxel basis
  • A high beta means the regressor explains much of the BOLD signal's variance (i.e. it strongly covaries with the signal)
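
To make the estimation step concrete, here is a minimal numpy sketch (hypothetical data, not SPM code) of ordinary least squares at a single voxel:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100

# Hypothetical design matrix: two regressors of interest plus a constant
x1 = rng.standard_normal(n_scans)
x2 = rng.standard_normal(n_scans)
X = np.column_stack([x1, x2, np.ones(n_scans)])

# Simulated BOLD time series at one voxel: true betas [2.0, 0.5, 10.0]
y = X @ np.array([2.0, 0.5, 10.0]) + rng.standard_normal(n_scans)

# Ordinary least squares: beta = pinv(X) @ y
beta = np.linalg.pinv(X) @ y
print(beta)  # close to [2.0, 0.5, 10.0]
```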

6
Problem of non-orthogonal regressors
[Venn diagram: circle Y = total variance in the BOLD signal]
7
Orthogonal regressors
Every regressor explains a unique part of the variance in the BOLD signal
8
Orthogonal regressors
There is only one optimal linear combination of both regressors that explains as much variance as possible. The assigned betas will be as large as possible, and statistics using these betas will have optimal power.
9
non-orthogonal regressors
[Venn diagram: Y overlapping with regressors X1 and X2, which also overlap each other]

Regressors 1 and 2 are not orthogonal. Part of the explained variance can be accounted for by both regressors and is assigned to neither. Therefore, the betas for both regressors will be suboptimal.
10
Entirely non-orthogonal
[Venn diagram: X1 and X2 fully overlap within Y]

Betas can't be estimated. Variance cannot be assigned to one regressor or the other.
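
A tiny numpy illustration (made-up numbers) of why the betas are unidentifiable here: with one regressor a scaled copy of the other, the design matrix loses rank.

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = 2 * x1                        # entirely collinear with x1
X = np.column_stack([x1, x2])

print(np.linalg.matrix_rank(X))    # 1, not 2: only one independent column
print(np.linalg.det(X.T @ X))      # ~0: X'X is singular, no unique betas
```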
11
"It is always simpler to have orthogonal regressors and therefore designs." (SPM course 2010)
12
orthogonality
Regressors can be seen as vectors in n-dimensional space, where n = number of scans. Suppose now n = 2:
r1 = [1, 2]
r2 = [2, 1]
13
orthogonality
  • Two vectors are orthogonal if the raw vectors have
  • inner product = 0
  • angle between vectors = 90°
  • cosine of angle = 0
  • Inner product:
  • r1 · r2 = (1 × 2) + (2 × 1) = 4
  • θ = acos(4 / (|r1| |r2|)) = acos(4/5) ≈ 37°
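
The same arithmetic in numpy (r1 and r2 as on this slide):

```python
import numpy as np

r1 = np.array([1.0, 2.0])
r2 = np.array([2.0, 1.0])

dot = r1 @ r2                                        # inner product = 4.0
cos_theta = dot / (np.linalg.norm(r1) * np.linalg.norm(r2))
print(np.degrees(np.arccos(cos_theta)))              # ~36.9 degrees
```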
14
orthogonality
  • Orthogonalizing one vector wrt another: it matters which vector you choose! (Gram-Schmidt orthogonalization)
  • Orthogonalize r1 wrt r2:
  • u1 = r1 - proj_r2(r1)
  • u1 = [1, 2] - ((r1 · r2) / (r2 · r2)) · r2
  • u1 = [-0.6, 1.2]
  • Inner product:
  • u1 · r2 = (-0.6 × 2) + (1.2 × 1) = 0
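
A numpy sketch of this Gram-Schmidt step:

```python
import numpy as np

r1 = np.array([1.0, 2.0])
r2 = np.array([2.0, 1.0])

# Subtract from r1 its projection onto r2; r2 itself stays unchanged
u1 = r1 - (r1 @ r2) / (r2 @ r2) * r2
print(u1)        # [-0.6  1.2]
print(u1 @ r2)   # 0.0: u1 is orthogonal to r2
```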

15
orthogonality & uncorrelatedness
An aside on these two concepts
  • Orthogonal is defined as: X · Y = 0 (inner product of two raw vectors = 0)
  • Uncorrelated is defined as: (X - mean(X)) · (Y - mean(Y)) = 0 (inner product of two detrended vectors = 0)
  • Vectors can be orthogonal while being correlated, and vice versa!

16
  • Please read Rodgers et al. (1984) Linearly independent, orthogonal and uncorrelated variables. The American Statistician, 38:133-134. Will be in the FAM folder as well.

X = [1, -5, 3, -1], Y = [5, 1, 1, 3]
Orthogonal, because inner product = (1 × 5) + (-5 × 1) + (3 × 1) + (-1 × 3) = 0
17
  • Please read Rodgers et al. (1984) Linearly independent, orthogonal and uncorrelated variables. The American Statistician, 38:133-134. Will be in the FAM folder as well.

Detrend: mean(X) = -0.5, mean(Y) = 2.5
X_det = [1.5, -4.5, 3.5, -0.5], Y_det = [2.5, -1.5, -1.5, 0.5]
mean(X_det) = 0, mean(Y_det) = 0
Inner product = 3.75 + 6.75 - 5.25 - 0.25 = 5
Orthogonal, but correlated!
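
The same example in numpy, showing orthogonal raw vectors that become non-orthogonal (i.e. correlated) once demeaned:

```python
import numpy as np

x = np.array([1.0, -5.0, 3.0, -1.0])
y = np.array([5.0,  1.0, 1.0,  3.0])

print(x @ y)                                  # 0.0: raw vectors orthogonal
x_det, y_det = x - x.mean(), y - y.mean()     # demean, cf. spm_detrend
print(x_det @ y_det)                          # 5.0: detrended inner product
print(np.corrcoef(x, y)[0, 1])                # ~0.25: orthogonal but correlated
```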
18
[Figure: raw vectors r1 = [-0.6, 1.2] and r2 = [2, 1] are orthogonal; after detrending, r1_det = [-0.9, 0.9] and r2_det = [0.5, -0.5] point in opposite directions, i.e. they are perfectly (negatively) correlated]
19
orthogonality & uncorrelatedness
  • Q: So should my regressors be uncorrelated or orthogonal?
  • A: When building your SPM.mat (i.e. running your jobfile) all regressors are detrended (except the grand mean scaling regressor). This is why "orthogonal" and "uncorrelated" are both used when talking about regressors.
  • Update: it is unclear whether all regressors are detrended when building an SPM.mat. This seemed to be the case, but recent SPM mailing list activity suggests detrending might not take place in versions newer than SPM99.
  • Donders batch?

"Effectively there has been a change between SPM99 and SPM2 such that regressors were mean-centered in SPM99 but they are not any more (this is regressed out by the constant term anyway)." Link
20
Your regressors correlate
  • Despite scrupulous design, your regressors will likely still correlate to some extent
  • This causes beta estimates to be lower than they could be
  • You can see the correlations via Review → SPM.mat → Design → Design orthogonality (a rough equivalent is sketched below)
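
Below is a rough numpy approximation (not SPM's actual code) of what that display computes: the cosine of the angle between each pair of design-matrix columns after demeaning.

```python
import numpy as np

def design_orthogonality(X):
    """Cosine of the angle between each pair of demeaned columns of X.

    For demeaned columns this cosine equals the correlation r."""
    Xc = X - X.mean(axis=0)
    norms = np.linalg.norm(Xc, axis=0)
    return (Xc.T @ Xc) / np.outer(norms, norms)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))       # hypothetical 3-column design
X[:, 1] += 0.5 * X[:, 0]                # make columns 0 and 1 correlate
print(np.round(design_orthogonality(X), 2))
```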

21
(No Transcript)
22
For detrended data, the cosine of the angle (black = 1, white = 0) between two regressors is the same as the correlation r!
orthogonal vectors: cos(90°) = 0, r = 0, r² = 0
correlated vectors: cos(81°) ≈ 0.16, r = 0.16, r² ≈ 0.0256
r² indicates how much variance is common between the two vectors (2.56% in this example). Note: -1 ≤ r ≤ 1 and 0 ≤ r² ≤ 1
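
A quick numpy check of the claim that, for detrended regressors, the cosine of the angle equals the correlation r:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.standard_normal(50), rng.standard_normal(50)

a_det, b_det = a - a.mean(), b - b.mean()
cos_angle = (a_det @ b_det) / (np.linalg.norm(a_det) * np.linalg.norm(b_det))
print(np.isclose(cos_angle, np.corrcoef(a, b)[0, 1]))   # True
```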
23
  • Correlated regressors: variance shifts from single regressors to shared

24
  • Correlated regressors: variance shifts from single regressors to shared
  • A t-test uses the beta, which is determined by the amount of variance explained by that single regressor.

25
  • Correlated regressors: variance shifts from single regressors to shared
  • A t-test uses the beta, which is determined by the amount of variance explained by that single regressor.
  • Large shared variance → low statistical power

26
  • Correlated regressors: variance shifts from single regressors to shared
  • A t-test uses the beta, which is determined by the amount of variance explained by that single regressor.
  • Large shared variance → low statistical power
  • Not necessarily a problem if you do not intend to test these two regressors!

[Figure: two highly correlated nuisance columns, movement regressor 1 and movement regressor 2]
27
How to deal with correlated regressors?
  • Strong correlations between regressors are not necessarily a problem. What matters is the correlation between your contrasts of interest and the rest of the design matrix
  • Example: lights on vs lights off. If movement regressors correlate with these conditions (contrast of interest not orthogonal to the rest of the design matrix), there is a problem.
  • If nuisance regressors only correlate with each other: no problem!
  • Grand mean scaling is not centered around 0 (i.e. not detrended), so its correlations are not informative

28
(No Transcript)
29
How to deal with correlations between contrast
and rest of design matrix?
  • Orthogonalize regressor A wrt regressor B: all shared variance will now be assigned to B (a sketch of the effect follows below).
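
A numpy sketch (hypothetical regressors a and b) of what this does to the betas: the orthogonalized regressor keeps its original beta, while the untouched regressor's beta absorbs the shared variance.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
b = rng.standard_normal(n)
a = 0.7 * b + rng.standard_normal(n)    # a shares variance with b
y = a + b + rng.standard_normal(n)      # signal driven equally by both

def betas(r1, r2):
    X = np.column_stack([r1, r2, np.ones(n)])
    return np.linalg.pinv(X) @ y

a_orth = a - (a @ b) / (b @ b) * b      # orthogonalize a wrt b
print(betas(a, b)[:2])                  # roughly [1.0, 1.0]
print(betas(a_orth, b)[:2])             # roughly [1.0, 1.7]: b has absorbed
                                        # the variance it shares with a
```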

30
orthogonality
31
orthogonality
[Figure: the 2D vectors r1 and r2 from the earlier orthogonality slides]
32
How to deal with correlations between contrast
and rest of design matrix?
  • Orthogonalize regressor A wrt regressor B: all shared variance will now be assigned to B.
  • Only permissible given an a priori reason to do this: hardly ever the case

33
How to deal with correlations between contrast
and rest of design matrix?
  • Do an F-test to test the overall significance of your model, for example to see whether adding a regressor significantly improves the model. Shared variance is then taken along in determining significance (see the sketch after this list).
  • In the case where a number of regressors represent the same manipulation (e.g. switch activity convolved with different HRFs) you can serially orthogonalize the regressors before estimating betas.
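
A sketch of the F-test idea with numpy and scipy (hypothetical data): compare the residual sum of squares with and without the extra regressor, so that shared variance counts toward significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 120
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + rng.standard_normal(n)   # correlated with x1
y = x1 + 0.5 * x2 + rng.standard_normal(n)

def rss(X):
    """Residual sum of squares of the least-squares fit of y on X."""
    resid = y - X @ (np.linalg.pinv(X) @ y)
    return resid @ resid

X_red = np.column_stack([x1, np.ones(n)])        # model without x2
X_full = np.column_stack([x1, x2, np.ones(n)])   # model with x2 added

df1, df2 = 1, n - X_full.shape[1]
F = (rss(X_red) - rss(X_full)) / df1 / (rss(X_full) / df2)
print(F, stats.f.sf(F, df1, df2))                # F statistic and p-value
```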

34
Example: how not to do it
  • 2 types of trials: gain and loss

Voon et al. (2010) Mechanisms underlying
dopamine-mediated reward bias in compulsive
behaviors. Neuron
35
Example: how not to do it
  • 4 regressors:
  • Gain predicted outcome
  • Positive prediction error (gain trials)
  • Loss predicted outcome
  • Negative prediction error (loss trials)

[Figure annotation: two pairs of these regressors are marked "Highly correlated!"]
Voon et al. (2010) Mechanisms underlying
dopamine-mediated reward bias in compulsive
behaviors. Neuron
36
Example: how not to do it
  • Performed 6 separate analyses (GLMs)
  • Shared variance is attributed to a single regressor in each GLM
  • Amazing! Similar patterns of activation!

Voon et al. (2010) Mechanisms underlying
dopamine-mediated reward bias in compulsive
behaviors. Neuron
37
Take home messages
  • If regressors correlate, explained variance in your BOLD signal will be assigned to neither, which reduces power on t-tests
  • If you orthogonalize regressor A with respect to regressor B, the values of A will change but A will keep the same uniquely explained variance. B, the unchanged variable, will come to explain all variance shared by A and B. However, don't do this unless you have a valid reason.
  • Orthogonality and uncorrelatedness are only the same thing if your data are centered around 0 (detrended, spm_detrend)
  • SPM does (NOT?) detrend your regressors the moment you go from job.mat to SPM.mat

38
Interesting reads
  • http://imaging.mrc-cbu.cam.ac.uk/imaging/DesignEfficiency#head-525685650466f8a27531975efb2196bdc90fc419
  • Combines the SPM book and Rik Henson's own attempt at explaining design efficiency and the issue of correlated regressors.
  • Rodgers et al. (1984) Linearly independent, orthogonal and uncorrelated variables. The American Statistician, 38:133-134
  • 15-minute read that describes three basic concepts in statistics/algebra
39
regressors
40
Raw vectors: x = [3, 6, 9], y = [6, -3, 6]
Inner product = 54 → non-orthogonal

Same vectors, but detrended: x_det = [-3, 0, 3], y_det = [3, -6, 3]
Inner product = 0 → uncorrelated

But! (the reverse case: uncorrelated yet non-orthogonal)