Chapter 7: Comparing two means (PowerPoint presentation transcript)
1
Chapter 7: Comparing two means
  • So far we have learned how we can analyze
    observed variables and find relations between
    them, as in correlation and regression.
  • Regression allows us to draw causal inferences in that we look at how well predictor variables, which are supposed to be logically and temporally prior to an outcome variable, can predict that outcome variable.
  • However, in regression, we have just measured the
    predictor variables without having systematic
    control over them.

2
Manipulating variables
  • Another way to establish a causal relation between independent and dependent variables is to systematically manipulate the independent (predictor) variables and assess the impact of this manipulation on the dependent (outcome) variable.
  • The advantage of this practice is that you as an experimenter know what you have done, since you have deliberately created the manipulation. This gives you control over the independent variables. Since the manipulation happens before its effects are tested (by assessing the dependent variable), this fits a causal scenario.
  • For this purpose, a common practice is to
    compare the means of two groups which have
    received a different experimental treatment.

3
Two ways of having two means
  • Between groups/subjects (independent design)
  • Each of two independent groups receives a different treatment
  • → independent t-test
  • Examples
  • Within groups/subjects (repeated measures)
  • Within a single group, two different treatments are administered in succession
  • → related (paired) t-test
  • Examples

4
A relevant example (!): Effect of reinforcement on Stats students
  • Group 1: positive reinforcement
  • making compliments and rewarding students
  • Group 2: negative reinforcement
  • offending students and making their lives miserable

5
Two kinds of variables
  • Dependent variable, DV
  • The outcome variable which I measure
  • Here: success in the Stats course (exam scores)
  • Independent variable, IV
  • The variable I manipulate deliberately
  • Here: reinforcement
  • Here the variable has 2 levels: positive vs. negative reinforcement

Interpretation of the results → the IV has a causal effect on the DV
6
Two kinds of variation
  • Unsystematic variation
  • Error variance which is present in every measurement, even when we measure the same variable twice
  • Systematic variation
  • Variance due to the experimental manipulation, between or within the groups:
  • positive reinforcement tends to create high exam scores
  • negative reinforcement tends to create low scores

Statistics assesses the amount of variation in
the data and separates the systematic from the
unsystematic variance.
7
Which kind of variation arises in which kind of
design?
  • Within subjects
  • Systematic variation: due to the manipulation of the experimenter
  • Unsystematic variation: due to any factor that leads to different performance of the same subject from time 1 to time 2
  • Between subjects
  • Systematic variation: due to the manipulation of the experimenter
  • Unsystematic variation: due to the differences between the subjects in group 1 and group 2

The unsystematic variation in a between-subjects design is bigger than in a within-subjects design, since we compare different subjects across conditions. Therefore, it is easier to detect a true difference between conditions in a within-subjects design.
8
The WHAT and WHY of randomization
  • WHAT: Randomization is the random allocation of subjects to experimental conditions
  • In a between-subjects design: random allocation to conditions
  • In a within-subjects design: random order of conditions, or counter-balancing of conditions (see the sketch below)
  • WHY: By randomization, we can exclude the effect of a lot of other variation:
  • systematic but non-experimental variation, e.g., NOT always running condition 1 in the morning and condition 2 in the evening
  • random, unsystematic variation, e.g., how long subjects had slept the night before, IQ, individual attention span, etc.
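
As an illustration (not part of the original slides), here is a minimal Python sketch of both kinds of randomization; the subject IDs, group sizes, and condition names are made up for the example:

```python
# Minimal sketch: random allocation (between subjects) and random
# condition order (within subjects). Subject IDs are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)
subjects = np.arange(1, 25)                  # 24 hypothetical subjects

# Between-subjects design: shuffle, then split into two equal groups.
shuffled = rng.permutation(subjects)
group1, group2 = shuffled[:12], shuffled[12:]

# Within-subjects design: randomize the order of conditions per subject
# (a simple form of counter-balancing).
orders = {s: list(rng.permutation(["condition 1", "condition 2"]))
          for s in subjects}
```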

9
Randomization as remedy
  • Within subjects
  • Practice effects: the subjects have gained familiarity or experience
  • Boredom effects: subjects may already be bored by the first condition...
  • Remedy: counter-balancing the conditions
  • Between subjects
  • Confounding factors, such as IQ, attention span, etc., may affect subjects' performance
  • Remedy: randomly assign subjects to groups

10
Confidence intervals and differences between groups
The confidence intervals (error bars around a mean) of two samples can tell us whether these two groups are taken from the same population or not (a sketch of the CI computation follows below).
  • CIs/error bars do not overlap
  • If the CIs of two samples do not overlap, this is evidence that they are taken from different populations; hence, there is a significant difference between them, induced by our experimental manipulation
  • CIs/error bars overlap
  • If the CIs of two samples overlap, this is evidence that they are taken from the same population, i.e., there is no difference between them
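
A minimal sketch of how such a t-based 95% CI can be computed (assuming normality; the data are the picture-group scores from spiderBG.sav, shown on a later slide):

```python
# Sketch: 95% confidence interval around a sample mean (t-based).
import numpy as np
from scipy import stats

picture = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
m, se = picture.mean(), stats.sem(picture)          # mean and standard error
lo, hi = stats.t.interval(0.95, len(picture) - 1, loc=m, scale=se)
print(f"mean = {m:.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")  # roughly [34, 46]
```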

11
Example: 'between-subjects' design (using spiderBG.sav; BG = between groups)
  • Group 1: real spiders
  • Subjects have to play with a real spider
  • Group 2: pictures
  • Subjects look at pictures of spiders

Research question: Do 'arachnophobes' display more anxiety when they are confronted with real spiders vs. pictures of spiders?
Dependent variable: anxiety measure
12
Data file (spiderBG.sav)
Dummy variable 'group'
  • The dependent variable is listed in one column
  • The two groups are coded numerically (0 = picture, 1 = real spider)
  • Graphs → Error bars, and specify the variables

group   anxiety
0,00    30,00
0,00    35,00
0,00    45,00
0,00    40,00
0,00    50,00
0,00    35,00
0,00    55,00
0,00    25,00
0,00    30,00
0,00    45,00
0,00    40,00
0,00    50,00
1,00    40,00
1,00    35,00
1,00    50,00
1,00    55,00
1,00    65,00
1,00    55,00
1,00    50,00
1,00    35,00
1,00    30,00
1,00    50,00
1,00    60,00
1,00    39,00

(anxiety = dependent variable; group = independent variable)
Tick 'summaries for groups of cases'
13
Error bar graph for spiderBG.sav
  • Visual inspection: the error bars of both groups overlap considerably
  • → both groups are taken from the same population
  • → n.s. differences between the two groups
  • → the experimental manipulation was probably unsuccessful

Real spider: mean = 47, CI = 40-54; picture: mean = 40, CI = 34-46
14
Example: 'repeated measures' design (using spiderRM.sav; RM = repeated measures)
  • Condition 2: pictures
  • The same subjects see pictures of spiders
  • Condition 1: real spiders
  • Subjects play with real spiders
  • Research question: Do 'arachnophobes' display more anxiety when they are confronted with real spiders vs. pictures of spiders?
  • Dependent variable: anxiety measure

15
Data file (spiderRM.sav)
  • The dependent variable is listed in two columns
  • The first column contains the anxiety scores for the 'picture' condition
  • The second column contains the scores for the 'real spider' condition
  • Graphs → Error bar..., choose 'summaries of separate variables' and define the pair of Anx(pict) and Anx(real)

Anx(pict)  Anx(real)
30,00      40,00
35,00      35,00
45,00      50,00
40,00      55,00
50,00      65,00
35,00      55,00
55,00      50,00
25,00      35,00
30,00      30,00
45,00      50,00
40,00      60,00
50,00      39,00

The error bars still overlap although the error variance has been reduced... something is still wrong with the data...
16
Solution: eliminating between-subject variability in a within-subjects design
1. Compute the individual means
  • Attention: normalize participants' means to eliminate the between-subject variability; Transform → Compute (mean)

Anx(pict)  Anx(real)  mean
30,00      40,00      35,00
35,00      35,00      35,00
45,00      50,00      47,50
40,00      55,00      47,50
50,00      65,00      57,50
35,00      55,00      45,00
55,00      50,00      52,50
25,00      35,00      30,00
30,00      30,00      30,00
45,00      50,00      47,50
40,00      60,00      50,00
50,00      39,00      44,50
17
2. Calculate the grand mean
  • Compute the grand mean (the overall mean for the whole sample, across both conditions)
  • Analyze → Descriptive Statistics → Descriptives; select the variable 'mean', and the mean statistic from 'Options'

Grand mean = 43.50
18
Inter- and intra-individual variance
  • Subjects differ in their individual means across conditions. These differences express the individual differences between them, as shown in the column 'mean'. We want to get rid of exactly this inter-individual variance and retain only the intra-individual variance within each subject, across the two conditions.

Anx(pict)  Anx(real)  mean
30,00      40,00      35,00
35,00      35,00      35,00
45,00      50,00      47,50
40,00      55,00      47,50
50,00      65,00      57,50
35,00      55,00      45,00
55,00      50,00      52,50
25,00      35,00      30,00
30,00      30,00      30,00
45,00      50,00      47,50
40,00      60,00      50,00
50,00      39,00      44,50

After the adjustment, this mean should be the same for everybody.
19
3. Calculate the adjustment factor
  • The means between participants have to be equalized.
  • Therefore, we calculate an adjustment factor by subtracting each subject's mean score from the grand mean (Compute: 43.5 - mean)

Anx(pict)  Anx(real)  mean    adjust
30,00      40,00      35,00    8,50
35,00      35,00      35,00    8,50
45,00      50,00      47,50   -4,00
40,00      55,00      47,50   -4,00
50,00      65,00      57,50  -14,00
35,00      55,00      45,00   -1,50
55,00      50,00      52,50   -9,00
25,00      35,00      30,00   13,50
30,00      30,00      30,00   13,50
45,00      50,00      47,50   -4,00
40,00      60,00      50,00   -6,50
50,00      39,00      44,50   -1,00
grand mean = 43,50

The values in 'adjust' capture the difference between each subject's individual mean of anxiety and the grand mean of anxiety.
20
4. Calculate adjusted values for each variable (picture, real)
  • For the picture condition: we add each subject's adjustment score to his or her score in the picture condition → new variable (picture2)
  • For the real condition: do the same → new variable (real2)

Anx(pict)  Anx(real)  mean    adjust  picture2  real2
30,00      40,00      35,00    8,50    38,50    48,50
35,00      35,00      35,00    8,50    43,50    43,50
45,00      50,00      47,50   -4,00    41,00    46,00
40,00      55,00      47,50   -4,00    36,00    51,00
50,00      65,00      57,50  -14,00    36,00    51,00
35,00      55,00      45,00   -1,50    33,50    53,50
55,00      50,00      52,50   -9,00    46,00    41,00
25,00      35,00      30,00   13,50    38,50    48,50
30,00      30,00      30,00   13,50    43,50    43,50
45,00      50,00      47,50   -4,00    41,00    46,00
40,00      60,00      50,00   -6,50    33,50    53,50
50,00      39,00      44,50   -1,00    49,00    38,00

The 2 new variables 'picture2' and 'real2' represent the anxiety experienced in each condition, adjusted by eliminating any between-subject differences.
21
Checking: computing mean2
  • If we compute mean2, the mean of picture2 and real2, it is identical for every subject → we have shown that we have cancelled out the individual differences in anxiety (see the sketch below)
  • Note: mean2 = the grand mean = 43.50
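
The same adjustment, as a small Python sketch (hypothetical code, using the spiderRM.sav values transcribed above):

```python
# Sketch: eliminating between-subject variability, steps 1-4 from above.
import numpy as np

pict = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
real = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], float)

mean = (pict + real) / 2            # 1. each subject's individual mean
grand_mean = mean.mean()            # 2. grand mean = 43.5
adjust = grand_mean - mean          # 3. adjustment factor per subject
picture2, real2 = pict + adjust, real + adjust   # 4. adjusted scores

# Check (slide 21): mean2 equals the grand mean for every subject.
assert np.allclose((picture2 + real2) / 2, grand_mean)
```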

22
New error bars for spiderRM - adjusted values
  • Now the error bars do not overlap
  • → the 2 conditions are really different
  • → our experimental manipulation was successful

23
Between-subjects (BG) and within-subjects (RM) design for the same data
  • RM design
  • Non-overlapping error bars → difference between conditions
  • BG design
  • Overlapping error bars → no difference between groups

The same data may show n.s. or significant differences between conditions, depending on whether you have conducted a between-subjects study or a within-subjects study.
24
The t-test: testing differences between 2 means
  • Independent variable
  • Only a single IV, with two values
  • Dependent variable
  • Only a single DV
  • Independent-means t-test
  • Two groups, e.g.,
  • experimental vs. control group
  • women vs. men
  • high vs. low WM span
  • Dependent-means t-test / matched-pairs / paired-samples t-test
  • Two conditions within the same group, e.g.,
  • pretest vs. posttest
  • morning vs. evening

25
Limitations of the t-test
  • A t-test should not be applied when your overall study has more than 2 groups or conditions
  • Then you have to apply an analysis of variance (ANOVA)
  • Within an ANOVA you may conduct specific t-tests (contrasts, post-hoc tests)
  • A t-test should be applied whenever you test no more than 1 difference:
  • between 2 groups that have received a different treatment, as in the independent t-test
  • within 1 group, between two conditions, as in the related t-test

Note: a t-test is a special case of a (univariate) analysis of variance (with just one factor that has 2 levels)
26
Rationale of the t-test: Do 2 means differ?
  • No effect (n.s.)
  • If the two samples are drawn from the same population
  • e.g., when the experimental manipulation has no effect
  • When chance alone (SE) accounts for the differences
  • → Accept the null hypothesis: 'no differences'
  • Effect (significant)
  • If the two samples are drawn from two different populations
  • e.g., when the experimental manipulation was effective
  • When chance alone does not account for the differences
  • → Reject the null hypothesis / accept the alternative hypothesis

27
The equation in the t-test
  • t = (observed difference between the sample means − expected difference between the population means if the null hypothesis is true) / (estimate of the SE of the difference between the two sample means)
  • Note: not only the observed difference between the sample means matters, but also the expected difference between the respective population means, given that H0 is true.
  • However: since under H0 the two population means are the same, this term cancels out...

28
Assumptions of the t-test
  • Both kinds of t-tests are parametric tests that assume:
  • Normal distribution of the data in the population
  • The DV has been measured at the interval scale level
  • The independent t-test further assumes that:
  • The variance in both groups is roughly equal (homogeneity of variance)
  • Scores are independent (they come from different people)
  • → These assumptions have to be checked!

29
The dependent t-test
  • t = (D̄ − μ_D) / (s_D / √N)
  • D̄ = mean difference between the two conditions
  • μ_D = difference we would expect to find in the population
  • s_D / √N = standard error of the differences, SE
  • Null hypothesis: no difference between the population means, i.e., μ_D = 0 (a worked sketch follows below)
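
To make the formula concrete, here is a short sketch that computes t by hand from the spiderRM.sav data and cross-checks it against scipy's paired t-test:

```python
# Sketch: dependent t-test, t = D-bar / (s_D / sqrt(N)), with mu_D = 0.
import numpy as np
from scipy import stats

pict = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
real = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], float)

D = pict - real                                     # per-subject differences
t = D.mean() / (D.std(ddof=1) / np.sqrt(len(D)))    # -7 / 2.8311 = -2.473
print(t)
print(stats.ttest_rel(pict, real))                  # same t, p about .03
```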

30
The sampling distribution and SE
(p. 14f, chapter 1.6)
  • The sampling distribution is the frequency distribution of the means of many samples from the same population, e.g., if you draw 100 samples and overlay their means on each other
  • → the mean of this distribution equals the population mean
  • The standard error (SE) is the standard deviation of this sampling distribution
  • → the SE tells us how widely sample means vary around the population mean

31
SE of the mean
  • The SE (the SD of the sampling distribution of the mean) is a measure of the unsystematic variation in the data: here, the average deviation of the mean difference between the two conditions
  • Example: if all participants were the same and reacted the same way, the SE would be 0.
  • We want to know how the observed difference between the sample means compares to the difference we would expect by chance alone (if we had not imposed an experimental manipulation)
  • We find this out by dividing the mean difference by the SE of the mean difference. This operation standardizes the difference and tells us how the difference between samples under an experimental manipulation compares to the difference expected by chance alone

32
SE of the mean - continued
  • Since we cannot draw hundreds of samples, we estimate the SE from the SD of the differences in our sample (s_D) and the sample size:
  • SE = s_D / √N
  • In words: the SE of the mean differences is the SD of the differences divided by the square root of the sample size (N)
  • The SE is a measure of the unsystematic variation in the data

33
t-statistic
  • Numerator: the systematic variation in the experiment = the mean difference minus the expected difference between the population means (if H0 were true)
  • Denominator: the unsystematic variation in the experiment = the SE of the mean differences
  • In the t-statistic, we compare the amount of systematic variation (numerator) with the amount of unsystematic variation (denominator).
  • The more systematic variation there is, the bigger the numerator, hence the greater the t-value. If the t-value exceeds a threshold (to be looked up in the t-table), it is significant!
34
The t-test in SPSS (using spiderRM.sav)
  • Analyze → Compare Means → Paired-Samples T Test
  • Select the pair of variables you want to compare: picture of spider vs. real spider
  • Choose the level of significance, here .05
35
t-test output
Test yourself: SE = SD/√N = 9.2923/√12 = 9.2923/3.464 = 2.6827
The Pearson correlation shows the strength of the relation between the two conditions. The two conditions are related, since the data stem from the same people. Here, r is quite high, but n.s.
36
t-statistics - continued
  • SE of the mean differences = s_D / √N
  • SD of the mean differences
  • D̄ = difference between the means of conditions 1 and 2 (40 − 47 = −7)
  • t = mean difference / SE = −7 / 2.8311 = −2.473
  • df = N − 1 = 12 − 1 = 11
  • Exact significance level
  • 95% CI: the true value lies within these boundaries. Note: the interval does not cross 0!
  • The difference between the 2 means is big enough NOT to have been caused by chance → the 2 conditions differ significantly
  • Report: t(11) = −2.47, p < .05
37
Compare 'picture vs. real' with 'picture2 vs. real2' (adjusted values)
  • You can run the paired-samples t-test with the original 'picture' and 'real' data and also with the adjusted 'picture2' and 'real2' data.
  • Analyze → Compare Means → Paired-Samples T Test; form 2 pairs from the original and the adjusted data
  • Original pair: picture vs. real; adjusted pair: picture2 vs. real2
38
Output for both paired t-tests
  • Since we have taken out the individual differences in means between the two conditions, the SD and SE have been greatly diminished and equalized between the two conditions.
  • Since we have taken out the individual differences in means between the two conditions, the correlation between them is now perfect.
39
t-statistics for both paired t-tests
  • SD of the mean difference: identical for both pairs!
  • SE of the mean difference: identical for both pairs
  • The paired-samples t-test result is identical for both pairs
  • → SPSS automatically does the adjustment which we calculated step by step before.

40
Calculating the effect size
  • After obtaining the t-statistic, we want to know whether the effect is also substantial, not just significant. Therefore, we calculate the effect size r from the t-value:
  • r = √(t² / (t² + df)) = √((−2.473)² / ((−2.473)² + 11)) = √(6.116 / 17.116) = .60
  • Remember: any value > .50 is a really big effect! (a small helper for this conversion follows below)
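
The same conversion as a tiny sketch (the helper name is ours, not SPSS's):

```python
# Sketch: effect size r from a t value and its degrees of freedom.
import math

def t_to_r(t: float, df: int) -> float:
    return math.sqrt(t ** 2 / (t ** 2 + df))

print(t_to_r(-2.473, 11))   # about .60
```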

41
Reporting the t-statistics
  • Report the t-value, df, probability, and the effect size:
  • "On average, participants experienced significantly greater anxiety to real spiders (M = 47, SE = 3.18) than to pictures of spiders (M = 40, SE = 2.68), t(11) = −2.47, p < .05, r = .60." (p. 295)

42
The independent t-test
  • In an independent t-test, we compare different participants in 2 conditions.
  • We cannot compare the two conditions within a single subject, but have to do so across subjects. This adds variance: there is
  • the variance arising from the different conditions, and
  • the variance arising from other differences between the participants.

43
t-test equation (independent t-test)
  • t = ((X̄1 − X̄2) − (μ1 − μ2)) / estimate of the SE   (7.3)
  • We now look at the difference between the sample means and compare it to the difference we would expect in the population, under the null hypothesis.
  • Under the null hypothesis, μ1 is equal to μ2, hence (μ1 − μ2) = 0. So (7.3) reduces to:
  • t = (X̄1 − X̄2) / estimate of the SE   (7.4)
  • → We divide the difference between the sample means (numerator) by the SE (denominator)

44
Estimate of the SE (denominator)
  • How do we estimate the SE of the difference between the two sample means (the denominator)?
  • t = (X̄1 − X̄2) / estimate of the SE   (7.4)
45
SD and variance of the sampling distribution
  • According to the variance sum law, the variance of a difference between two independent variables is the sum of their variances.
  • In the same vein, the variance of the sampling distribution of differences is the sum of the variances of the sampling distributions of the two populations from which the samples were taken.

46
SD and variance of the sampling distribution - continued
  • SD of the sampling distribution = SE of our sample  (s = SD)
  • SE for sample 1: s1 / √N1
  • SE for sample 2: s2 / √N2
  • Variance for sample 1: (s1 / √N1)² = s1² / N1
  • Variance for sample 2: (s2 / √N2)² = s2² / N2
  • Variance of the sampling distribution of differences: s1² / N1 + s2² / N2
47
SD of the sampling distribution of differences (= SE of our sample)
  • Variance of the sampling distribution of differences: s1²/N1 + s2²/N2
  • SD of the sampling distribution of differences (the SE of the difference): √(s1²/N1 + s2²/N2)
  • Equation (7.4) becomes (7.5), for equal sample sizes (N1 = N2):
  • t = (X̄1 − X̄2) / √(s1²/N1 + s2²/N2)   (7.5)
48
(Pooled variance estimate t-test)
  • If the samples are not of equal size, the variances have to be weighted with respect to size, by taking into consideration the dfs (n − 1), which act as the weight factors:
  • weighted average: s_p² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2)
  • The pooled t-value is calculated as (see the sketch below):
  • t = (X̄1 − X̄2) / √(s_p²/N1 + s_p²/N2)
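
A sketch of the pooled-variance formula in plain Python (with equal group sizes it gives the same result as the standard equal-variance t-test):

```python
# Sketch: pooled-variance t-test for (possibly) unequal group sizes.
import math

def pooled_t(x1, x2):
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((v - m1) ** 2 for v in x1) / (n1 - 1)   # s1 squared
    v2 = sum((v - m2) ** 2 for v in x2) / (n2 - 1)   # s2 squared
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # weighted average
    return (m1 - m2) / math.sqrt(sp2 / n1 + sp2 / n2)
```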
49
Experimental t-value against expected t-value
  • In the independent t-test, again, we compare the
    value of t against the value we would have
    expected by chance alone.
  • If our t-value exceeds this critical t-value, we
    are confident that the differences between the
    two samples/conditions are real, i.e., that there
    is a difference in the population.

50
The independent t-test in SPSS (using spiderBG.sav)
  • Condition/group 1: picture of spider (n = 12)
  • Condition/group 2: real spider (n = 12)
  • Analyze → Compare Means → Independent-Samples T Test

51
Output: independent t-test, spiderBG
  • SE = SD/√n: 9.29/3.46 = 2.68; 11.03/3.46 = 3.18
  • t-test: n.s.
  • Variances between the groups are equal (Levene's test is n.s.)
  • If the CI crosses 0, this is a sign of instability: there may be no true difference between the two distributions
  • Read the second line of the output if the variances are not equal
  • df = N1 + N2 − 2 = 12 + 12 − 2 = 22
  • Mean diff = X̄1 − X̄2 = 40 − 47 = −7
  • Standard error of the difference
52
SD of the sampling distribution of differences (SE of the sample)
  • √(s1²/N1 + s2²/N2) = √(9.29²/12 + 11.03²/12) = √(7.19 + 10.14) = √17.33 = 4.16
  • t = mean diff / SE of diff = −7 / 4.16 = −1.68 (reproduced in the sketch below)
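
The same numbers reproduced in a short sketch, by hand and with scipy's independent-samples t-test:

```python
# Sketch: independent t-test on the spiderBG data.
import numpy as np
from scipy import stats

picture = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
real = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], float)

se_diff = np.sqrt(picture.var(ddof=1) / 12 + real.var(ddof=1) / 12)
print((picture.mean() - real.mean()) / se_diff)   # about -1.68
print(stats.ttest_ind(picture, real))             # t = -1.68, p = .107
```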

53
2-tailed and 1-tailed test
A value of t = −1.681 produces an exact value of p = .107, which is n.s. given df = 22 in a 2-tailed test. If we have a directed hypothesis, i.e., we predict that a real spider is more threatening than a picture of one, we can do a 1-tailed test. We simply divide the p-value by two: p = .107/2 = .054. This value is still (marginally) n.s., so we conclude → spider phobes are equally anxious when looking at a picture of a spider as when dealing with a real spider.
54
Effect size
  • For determining the effect size of our t-test, we convert the t-statistic into a correlation r:
  • r = √(t² / (t² + df)) = √((−1.681)² / ((−1.681)² + 22)) = √(2.826 / 24.826) = .34
  • → Although our effect is n.s., it is medium-sized (around .3).
55
Reporting the independent t-test in words
  • "On average, participants experienced greater anxiety to real spiders (M = 47, SE = 3.18) than to pictures of spiders (M = 40, SE = 2.68). This difference was not significant, t(22) = −1.68, p > .05; however, it did represent a medium-sized effect, r = .34." (Field, 2005, p. 303)

56
Between- or within-group design?
  • It makes a difference how you collect your data: from the same or from different subjects.
  • In a within-subjects design, the unsystematic error variance is greatly reduced, since the differences within a single person across both conditions are smaller than the differences between different persons across both conditions.
  • Thus, detecting a difference is easier in a within-subjects design (paired-samples t-test) than in a between-subjects design (independent-samples t-test).

57
The independent t-test as linear regression: the general linear model
  • We can conceive of the independent t-test as a variant of a linear regression!
  • Andy's insight: "... all statistical procedures are basically the same: they're just more or less elaborate versions of the correlation coefficient!" (Field, 2005, 304)
  • How come?

58
The independent t-test as a general linear model
  • Regression model, informal:
  • (5.2) Outcome_i = (Model_i) + error_i
  • Independent t-test:
  • (7.6) A_i = b0 + b1·G_i + ε_i
  • Anxiety_i = b0 + b1·Group_i + ε_i
  • (Anxiety = dependent variable; Group = independent variable)
59
The independent t-test as a general linear model - continued
  • 1. For group 0 (picture of a spider):
  • Anxiety_i = b0 + b1·Group_i + ε_i
  • 40 = b0 + (b1 × 0)
  • b0 = 40
  • 2. For group 1 (real spider):
  • Anxiety_i = b0 + b1·Group_i + ε_i
  • 47 = b0 + (b1 × 1)
  • 47 = 40 + b1
  • b1 = 47 − 40 = 7
  • b1 = X̄real − X̄picture

The group variable for the picture condition is 0; b0 is the expected outcome without a model. Hence, condition 0 is our default model.
The group variable for the real-spider condition is 1; b1 represents the difference between the group means X̄real and X̄picture.
60
Independent t-test as regression
  • A two-group experiment (independent t-test) can be represented as a regression equation in which
  • the coefficient of the independent variable (b1) is equal to the difference between the group means
  • the intercept (b0) is equal to the mean of the group coded as 0
  • In regression, the t-test tests whether the regression coefficient b1 differs from 0, i.e., it tests whether the difference between the group means is 0 (see the sketch below).
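
A sketch of this equivalence: fitting the dummy-coded regression with numpy's least squares recovers b0 = 40 (mean of the picture group) and b1 = 7 (difference between the group means):

```python
# Sketch: the independent t-test as a regression with a 0/1 dummy variable.
import numpy as np

picture = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
real = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], float)

y = np.concatenate([picture, real])
group = np.concatenate([np.zeros(12), np.ones(12)])   # 0 = picture, 1 = real
X = np.column_stack([np.ones_like(group), group])     # intercept + dummy

(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(b0, b1)   # 40.0 and 7.0
```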

61
The proof: spiderBG.sav as t-test and regression
  • Regression coefficients: Constant = b0; group condition = b1
  • The significance is the same for the regression and for the t-test
  • b0 = mean of condition 0 ('picture') = 40
  • Standard error (SE) of the difference
  • b1: 47 = b0 + (b1 × 1) → b1 = X̄real − X̄picture = 47 − 40 = 7
  • t-test
62
The one-sample t-test
  • There is a third kind of t-test in SPSS, the one-sample t-test. With this kind of test you examine whether a given single sample has been drawn from a known population or not.
  • In order to test this you have to know the population mean μ, e.g., through previous (extensive) empirical research or through some theoretical calculation.
  • The sample mean X̄ is then tested against the population mean μ.

63
The one-sample t-test: an example
  • Let's take the data from spiderBG.sav
  • Let's assume we knew the means for the reaction towards spider pictures and real spiders in the normal population, e.g., μ(picture) = 30 and μ(real) = 40.
  • Let's test for both conditions separately whether our sample means (taken from spider phobes) are statistically identical to the population means or not.
  • Therefore, first exclude group 1 and conduct a one-sample t-test (for the picture condition); then exclude group 0 and conduct another one-sample t-test (for the real condition). A sketch of the same tests in code follows below.
  • Data → Select Cases → If condition is satisfied: group = 0 (excludes group 1)
  • then: group = 1 (excludes group 0)
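
The same two tests in a sketch (assumed population means μ(picture) = 30 and μ(real) = 40, as in the example):

```python
# Sketch: one-sample t-tests against assumed population means.
import numpy as np
from scipy import stats

picture = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
real = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], float)

print(stats.ttest_1samp(picture, popmean=30))   # sample mean 40 vs mu = 30
print(stats.ttest_1samp(real, popmean=40))      # sample mean 47 vs mu = 40
```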

64
The one-sample t-test in SPSS: the picture condition (μ(picture) = 30)
  • Compare Means → One-Sample T Test; specify 30 as the test value.
  • (Output: our sample's descriptive statistics and the one-sample t-test statistics)
  • → Our sample mean (40) is significantly different from the population mean μ(picture) = 30.
  • → The arachnophobe sample is not drawn from the normal population (but from a different population)
65
The one-sample t-test in SPSS: the real condition (μ(real) = 40)
  • Compare Means → One-Sample T Test; specify 40 as the test value.
  • (Output: our sample's descriptive statistics and the one-sample t-test statistics)
  • → Our sample mean (47) is significantly different from the population mean μ(real) = 40.
  • → The arachnophobe sample is not drawn from the normal population (but from a different population)
66
In case of non-normally distributed data
  • The data may be transformed (z- or logarithmic transformation), or non-parametric tests can be used (see the sketch below)
  • Non-parametric tests:
  • Wilcoxon signed-rank test (counterpart of the dependent t-test)
  • Mann-Whitney U test / Wilcoxon rank-sum test (counterparts of the independent t-test)
  • Parametric tests:
  • Dependent t-test
  • Independent t-test
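
For completeness, a sketch of the non-parametric counterparts in scipy, run on the spider data from this chapter:

```python
# Sketch: non-parametric alternatives to the two t-tests.
import numpy as np
from scipy import stats

picture = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], float)
real = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], float)

print(stats.wilcoxon(picture, real))       # signed-rank (dependent design)
print(stats.mannwhitneyu(picture, real))   # Mann-Whitney U (independent)
```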