
Inferential Statistics

Parametric

- Ch 5. Inferential Statistics
- Random Samples
- Estimate Population Statistics
- Correlation of Two variables
- Experimental Methods
- Standard Error of the Mean

Descriptive Statistics

- Ch 1. The Mean, the Number of Observations, the Standard Deviation
- N / Population / Parameters
- Measures of Central Tendency: Median, Mode, Mean
- Measures of Variability: Range, Sum of Squares, Variance, Standard Deviation

- Ch 6. T Scores / T Curves
- Estimates of Z scores
- Computing t Scores
- Critical Values
- Degrees of Freedom

- Ch 7. Correlation
- Variable Relationships: linearity, direction, strength
- Correlation Coefficient
- Scatter plots
- Best-fitting lines

- Ch 2. Frequency Distributions and Histograms
- Frequency Distributions
- Bar Graphs / Histograms
- Continuous vs Discrete Variables

Chapter 11. A variety of t tests

- Ch 12. Tukey's Significant Difference
- Testing differences in group means
- Alpha for the whole experiment
- HSD - Honestly Significant Difference

- Ch 8. Regression
- Predicting using the regression equation
- Generalizing The null hypothesis
- Degrees of freedom and statistical significance

- Ch 3. The Normal Curve
- Z scores percentiles
- Least Squares, Unbiased estimates

- Ch 9. Experimental Studies
- Independent and dependent variables
- The experimental hypothesis
- The F test and the t test

- Ch 12. Power Analysis
- Type 1 error and alpha
- Type 2 error and beta
- How many subjects do you need

- Ch 4. Translating To and From Z Scores
- Normal Scores
- Scale Scores
- Raw Scores
- Percentiles

Ch 13. Assumptions Underlying Parametric Statistics

- Sample means form a normal curve
- Subjects are randomly selected from the population
- Homogeneity of variance
- Experimental error is random across samples

Non Parametric

- Ch 14. Chi Square
- Nominal Data

Chapter 11 - Lecture 1

- t tests: single sample, repeated measures, and two independent samples.

Conceptual overview

t and F tests: two approaches to measuring the distance between means

- There are two ways to tell how far apart things are. When there are only two things, you can directly determine their distance from each other. If they are two scores, as they usually are in psychology, you simply subtract one from the other to find their difference. That is the approach used in the t test and its variants.

F tests

- Alternatively, when you want to describe the distance of three or more things from each other, the best way to index their distance is to find a central point and talk about their average squared distance (or average unsquared distance) from that point.
- The further apart things are from each other, the further they will be, on average, from that central point. That is the approach you have used in the F test (and when treating the t test as a special case of the F test: t for one or two, F for more).

One way or another, the two methods will yield identical results.

- We can use either method, or a combination of the two, to ask the key question in this part of the course: Are two or more means further apart than they are likely to be when the null hypothesis is true?

H0: It's just sampling fluctuation.

- If the only thing that makes the two means different is random sampling fluctuation, the means will be fairly close to the population mean and to each other.
- If an independent variable is pushing the means apart, their distance from each other, or from some central point, will tend to be too great to be explained by the null hypothesis.

Generic formula for the t test

- These ideas lead to a generic formula for the t test:
- t(dfW) = (actual difference between the two means) / (estimated average difference between the two means that should exist if H0 is correct)

Calculation and theory

- As usual, we must work on calculation and theory.
- Again, we'll do calculation first.

The first of three types of simple t tests: the single or one-sample t test

- One-sample t test: a t test in which a sample mean is compared to a population mean.
- The population mean is almost always the value postulated by the null hypothesis.
- Since it is a mean obtained from a theory (H0 is a theory), we call that mean muT.
- To do the single-sample t test, we divide the actual difference between the sample mean and muT by the estimated standard error of the mean, sX-bar.

Let's do a problem

- You may recognize this problem. We used it to set up confidence intervals in Ch. 6.

For example

- For example, let's say that we had a new antidepressant drug we wanted to peddle. Before we can do that, we must show that the drug is safe.
- Drugs like ours can cause problems with body temperature. People can get chills or fever.
- We want to show that body temperature is not affected by our new drug.

Testing a theory

- Everyone knows that normal body temperature for healthy adults is 98.6°F.
- Therefore, it would be nice if we could show that after taking our drug, healthy adults still had an average body temperature of 98.6°F.
- So we might test a sample of 16 healthy adults, first giving them a standard dose of our drug and, when enough time had passed, taking their temperature to see whether it was 98.6°F on average.

Here's the formula:

t(n - 1) = (X-bar - muT) / sX-bar

Data for the one sample t test

- We randomly select a group of 16 healthy individuals from the population.
- We administer a standard clinical dose of our new drug for 3 days.
- We carefully measure body temperature.
- RESULTS: We find that the average body temperature in our sample is 99.5°F with an estimated standard deviation of 1.40° (s = 1.40).
- In Chapter 7 we asked whether 99.5°F was in the 95% CI around muT. It wasn't. We should get the same result with a t test.

Here's the computation

t(15) = (99.5 - 98.6) / (1.40 / sqrt(16)) = 0.90 / 0.35 = 2.57

Notice that the critical value of t changes with the number of degrees of freedom for s, our estimate of sigma, and must be taken from the t table. If n = 16 in a single sample, dfW = n - k = 16 - 1 = 15.

df        .05       .01
1       12.706    63.657
2        4.303     9.925
3        3.182     5.841
4        2.776     4.604
5        2.571     4.032
6        2.447     3.707
7        2.365     3.499
8        2.306     3.355
9        2.262     3.250
10       2.228     3.169
11       2.201     3.106
12       2.179     3.055
13       2.160     3.012
14       2.145     2.977
15       2.131     2.947
16       2.120     2.921
17       2.110     2.898
18       2.101     2.878
19       2.093     2.861
20       2.086     2.845
21       2.080     2.831
22       2.074     2.819
23       2.069     2.807
24       2.064     2.797
25       2.060     2.787
26       2.056     2.779
27       2.052     2.771
28       2.048     2.763
29       2.045     2.756
30       2.042     2.750
40       2.021     2.704
60       2.000     2.660
100      1.984     2.626
200      1.972     2.601
500      1.965     2.586
1000     1.962     2.581
2000     1.961     2.578
10000    1.960     2.576
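As a sketch, the one-sample computation with the numbers above (the critical value 2.131 comes from the df = 15 row of the t table):

```python
from math import sqrt

# One-sample t test for the drug / body-temperature example.
n = 16            # sample size
x_bar = 99.5      # sample mean body temperature (degrees F)
mu_t = 98.6       # population mean under H0
s = 1.40          # estimated standard deviation

s_x_bar = s / sqrt(n)             # estimated standard error of the mean: 0.35
t = (x_bar - mu_t) / s_x_bar      # 0.90 / 0.35 = 2.57

df = n - 1                        # dfW = n - k = 15
t_crit_05 = 2.131                 # .05 critical value for df = 15 (from the table)
print(round(t, 2), abs(t) > t_crit_05)   # 2.57 True -> reject H0 at .05
```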

We have falsified the null.

- We would write the results as follows:
- t(15) = 2.57, p < .05
- Since we have falsified the null, we reject 98.6° as the population mean for people who have taken the drug.
- Instead, we would predict that the average person, drawn from the same population as our sample, would respond as our sample did. We would predict they will have an average body temperature of 99.5° after taking the drug. That is, they would have a slight fever.

An obvious problem with the one-sample experimental design: no control group.

So, we can use a single random sample of participants as their own controls if we measure them two or more times. If they are measured twice, we can use the repeated measures t test.

Computation of the repeated measures t test

- Let's say we measured 5 moderately depressed inpatients and rated their depression with the Hamilton rating scale for depression. Then we treated them with CBT for 10 sessions and again got Hamilton scores. Lower scores = less depression.

Here are pre, post, and difference scores, showing post-treatment scores subtracted from pretreatment scores

- Al scored 28 before treatment and 18 after. Bill scored 22 before and 14 after. Carol scored 23 before and 14 after. Dora scored 38 before and 27 after. Ed scored 33 before and 21 after.

Before   After   Difference
28       18      10
22       14       8
23       14       9
38       27      11
33       21      12

Mean difference = 10.00

In this case, there are 5 - 1 = 4 degrees of freedom. Now we can compute the estimated standard error of the difference scores.

Now we are ready for the formula for the repeated measures t test: t equals the actual average difference between the means, minus their difference under H0, divided by the estimated standard error of the difference scores.

Here is the computation in this case
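The computation can be sketched directly from the five difference scores above:

```python
from math import sqrt

# Repeated measures t test on the five difference scores (Before - After).
diffs = [10, 8, 9, 11, 12]
n_d = len(diffs)
mean_d = sum(diffs) / n_d                      # 10.00
ss = sum((d - mean_d) ** 2 for d in diffs)     # SSW = 10.00
s_d = sqrt(ss / (n_d - 1))                     # estimated SD of the differences
s_d_bar = s_d / sqrt(n_d)                      # estimated standard error ~ 0.707

t = (mean_d - 0.00) / s_d_bar                  # H0: muD = 0.00
print(round(t, 2))                             # 14.14, with n_d - 1 = 4 df
```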

Here is how we would write the result

- t(4) = 14.14, p < .01
- In this case the means are 14.14 estimated standard errors apart.
- We wrote the results as p < .01.
- But these results are so strong that they far exceed any value in the table, even with just a few degrees of freedom.
- This antidepressant works!

There are times when a repeated measures design is appropriate and times when it is not. When it is not, we use a two-sample, independent-groups t test.

The t test for two independent groups.

- You already know a formula for the two-sample t test:
- t(n - k) = sB / sW
- But now we want an alternative formula that allows us to directly compare two means by subtracting one from the other.
- It takes a little algebra, but the formula is pretty straightforward.

Three steps to computing the t test for two independent groups. First, we need to compute the actual difference between the two means. (That's easy: we just subtract one from the other.)

Step 2: Then we compare that difference to the difference predicted by H0. That's also easy, because the null, as usual, predicts no difference between the means. H0: mu1 - mu2 = 0.00. That is, the null says that there is actually no average difference between the means of the two populations represented by the two groups.

Step 3: This one is a little harder. Here we compute the standard error of the difference between two means. Although the population means may be identical, samples will vary because of random sampling fluctuation. The amount of fluctuation is determined by MSW and the sizes of the two groups.

So we need to take the actual difference between the mean score at time 1 and the mean at time 2. Then we subtract the theoretical difference (which is 0.00 according to the null). Finally, we divide by the estimated standard error of the difference between the means of two independent groups.

Let's learn the conceptual basis and computation of the estimated standard error of the difference between 2 sample means.

- The estimated average squared distance between a sample mean and the population mean due solely to sampling fluctuation is MSW/n, where n is the size of the sample.
- The estimated average squared distance between two sample means is their two squared differences from mu added together: MSW/n1 + MSW/n2.

- So, if the samples are the same size, their average squared distance from each other equals MSW/n1 + MSW/n2 = 2MSW/n.
- But if the samples have different numbers of scores, we have to use the average size of the two groups.

- The problem is we can't use the usual arithmetic average; we need a different kind of average called the harmonic mean, nH.
- Then the average squared distance between two independent sample means equals 2MSW/nH.
- The square root of that is the average unsquared difference between the means of the two samples, the denominator in the t test.

Here is the formula for the estimated standard error of the difference between the means of two independent samples:

s(X-bar1 - X-bar2) = sqrt(2 MSW / nH)

Here's the formula for the independent groups t test:

t(dfW) = ((X-bar1 - X-bar2) - 0.00) / sqrt(2 MSW / nH)

So, to do that computation we need to learn to compute nH.

Calculating the Harmonic Mean

Notice that this technique allows different numbers of subjects in each group. If the groups are the same size, the harmonic and ordinary mean number of participants is the same.

Example: 3 groups, 4 subjects each. The harmonic mean and the ordinary mean are both 4.

When groups do not have equal numbers, the harmonic mean is smaller than the ordinary mean.

Example: 4 groups with 6, 4, 8, and 4 participants. Ordinary mean = 22/4 = 5.50 participants each.
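A sketch of the harmonic-mean calculation for the examples above (the MSW value at the end is a hypothetical number, used only to show how nH feeds into the standard error of the difference):

```python
from math import sqrt

def harmonic_mean(sizes):
    """nH = k / sum(1/n_i): the harmonic mean of k group sizes."""
    return len(sizes) / sum(1 / n for n in sizes)

sizes = [6, 4, 8, 4]                  # 4 groups with unequal n
n_h = harmonic_mean(sizes)
print(round(n_h, 2))                  # 5.05, smaller than the ordinary mean 5.50

# Equal groups: harmonic mean equals the ordinary mean.
print(harmonic_mean([4, 4, 4]))       # 4.0

# With nH in hand, the estimated standard error of the difference between
# two independent means is sqrt(2 * MSW / nH).  MSW = 2.50 is hypothetical.
ms_w = 2.50
se_diff = sqrt(2 * ms_w / n_h)
```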

The theory part

ZX-bar scores

- As you know from Chapter 4, the Z score of a sample mean is the number of standard errors of the mean that the sample mean is from mu. Here is the formula:
- ZX-bar = (X-bar - mu) / sigmaX-bar

Confidence intervals with Z

- As you learned in Chapter 4, if a sample differs

from its population mean solely because of

sampling fluctuation, 95 of the time it will

fall somewhere in a symmetrical interval that

goes 1.96 standard errors in both directions from

mu. - That interval is, of course, the CI.95.
- CI.95 mu 1.960 sigmaX-bar
- Or, for theoretical population means
- CI.95 muT 1.960 sigmaX-bar
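A minimal sketch of the CI.95 around muT, assuming sigmaX-bar is known (the value 0.35 is hypothetical; in the drug example sigma was actually unknown, which is why a t test was used there):

```python
# CI.95 around a theoretical population mean when sigma is known.
mu_t = 98.6          # population mean under H0
sigma_x_bar = 0.35   # hypothetical known standard error of the mean

lower = mu_t - 1.960 * sigma_x_bar
upper = mu_t + 1.960 * sigma_x_bar
print(round(lower, 3), round(upper, 3))   # 97.914 99.286

x_bar = 99.5
print(lower <= x_bar <= upper)            # False: the sample mean is outside
```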

muT, the CI.95, and H0

- Most of the time we don't know mu, so we are really talking about muT.
- Most of the time, muT will be the value of mu suggested by the null hypothesis.
- If a sample falls outside the 95% confidence interval around muT, we have to assume that it has been pushed away from mu by some factor other than sampling fluctuation.

ZX-bar and the null hypothesis

- If H0 says that the only reason a sample mean differs from muT is sampling fluctuation, as H0 usually does, then the value of ZX-bar can be used as a test of the null hypothesis.
- If H0 is correct, ZX-bar should fall within the CI.95, within 1.960 standard errors of muT.
- If ZX-bar has an absolute value greater than 1.960, the sample mean falls outside the 95% confidence interval around mu and falsifies the null hypothesis.

The underlying logic of the Z test

- Here is the formula for ZX-bar again:
- ZX-bar = (X-bar - muT) / sigmaX-bar
- When used as a test of the null, most textbooks identify ZX-bar simply as Z. We will follow that lead and, when we use it in a test of the null, call ZX-bar simply Z.
- Here is the formula for the Z test:
- Z = (X-bar - muT) / sigmaX-bar
- If the absolute value of Z equals or exceeds 1.960, Z is significant at .05.
- If the absolute value of Z equals or exceeds 2.576, Z is significant at .01.
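The decision rule above can be sketched as a small function (the input numbers in the example are hypothetical):

```python
def z_test(x_bar, mu_t, sigma_x_bar):
    """Z test of H0, using the two-tailed cutoffs 1.960 (.05) and 2.576 (.01)."""
    z = (x_bar - mu_t) / sigma_x_bar
    if abs(z) >= 2.576:
        verdict = "significant at .01"
    elif abs(z) >= 1.960:
        verdict = "significant at .05"
    else:
        verdict = "not significant"
    return z, verdict

# Hypothetical numbers for illustration:
z, verdict = z_test(x_bar=103.0, mu_t=100.0, sigma_x_bar=1.5)
print(round(z, 2), verdict)   # 2.0 significant at .05
```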

In the Z test

- You start with a random sample, then expose it to an IV.
- You determine muT, the predicted mean if the null hypothesis is true.
- If the absolute value of Z > 1.960, X-bar falls outside the CI.95 around muT.
- The null hypothesis is probably not correct.
- Since you have falsified the null, you must turn to H1, the experimental hypothesis.
- Also, since Z was significant, you conclude that were other individuals from the population treated the same way, they would respond similarly to the sample you studied.

There are two problems

- We seldom know sigma.
- It would be nice to have a control group.
- Let's deal with those problems one at a time.
- We'll deal first with the fact that we don't know sigma, and therefore can't compute sigmaX-bar.

The first problem

- Since we don't know sigma, we must use our best estimate of sigma, s, the square root of MSW, and then estimate sigmaX-bar by dividing s by the square root of n, the size of the sample.
- We therefore must use the critical values of the t distribution to determine the CI.95 and CI.99 around muT in which the null hypothesis predicts that X-bar will fall.
- The exact value will depend on the degrees of freedom for s.
- Since s is the square root of MSW, dfW = n - k.

t curves and degrees of freedom revisited

[Figure: t curves for several degrees of freedom compared with the Z curve; frequency plotted against score]

Critical values of the t curves

- The following table defines t curves with 1 through 10,000 degrees of freedom.
- Each curve is defined by how many estimated standard deviations you must go from the mean to define a symmetrical interval that contains proportions of .9500 and .9900 of the curve, leaving proportions of .0500 and .0100 in the two tails of the curve (combined).
- Values for .9500/.0500 appear in the .05 column; values for .9900/.0100 appear in the .01 column.

df        .05       .01
1       12.706    63.657
2        4.303     9.925
3        3.182     5.841
4        2.776     4.604
5        2.571     4.032
6        2.447     3.707
7        2.365     3.499
8        2.306     3.355
9        2.262     3.250
10       2.228     3.169
11       2.201     3.106
12       2.179     3.055
13       2.160     3.012
14       2.145     2.977
15       2.131     2.947
16       2.120     2.921
17       2.110     2.898
18       2.101     2.878
19       2.093     2.861
20       2.086     2.845
21       2.080     2.831
22       2.074     2.819
23       2.069     2.807
24       2.064     2.797
25       2.060     2.787
26       2.056     2.779
27       2.052     2.771
28       2.048     2.763
29       2.045     2.756
30       2.042     2.750
40       2.021     2.704
60       2.000     2.660
100      1.984     2.626
200      1.972     2.601
500      1.965     2.586
1000     1.962     2.581
2000     1.961     2.578
10000    1.960     2.576

Estimated distance of sample means from mu: the estimated standard error of the mean

- We can compute the standard error of the mean when we know sigma.
- We just have to divide sigma by the square root of n, the size of the sample.
- Similarly, we can estimate the standard error of the mean, the estimated average unsquared distance of sample means from mu.
- We just have to divide s by the square root of n, the size of the sample in which we are interested.
Here's the formula: sX-bar = s / sqrt(n)
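As a one-line sketch, checked against the drug example (s = 1.40, n = 16):

```python
from math import sqrt

def estimated_standard_error(s, n):
    """sX-bar = s / sqrt(n): the estimated average unsquared distance of
    sample means from mu."""
    return s / sqrt(n)

print(round(estimated_standard_error(1.40, 16), 2))   # 0.35
```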

The one sample t test

- If the absolute value of t exceeds the critical value at .05 in the t table, you have falsified the null and must accept the experimental hypothesis.

The second problem: no control group. Participants as their own controls: the repeated measures t test

3 experimental designs. First: unrelated groups

- There are three basic ways to run experiments.
- The first is to create different groups, each of which contains different individuals randomly selected from the population. You then measure the groups once to determine whether the differences among their means exceed those expected from sampling fluctuation.
- That's what we've done until now.

Second type of design: repeated measures

- The second is to create one random sample from the population. You then treat the group in different ways and measure that group two or more times, once for each different way the group is treated.
- Again, you want to determine whether the differences among the group's means, taken at different times, exceed those expected from sampling fluctuation.

Baseline vs. post-treatment

- If the first measurement is done before the start of the experiment, the result will be a baseline measurement. This allows participants to function as their own controls.
- In any event, the question is always whether the change between conditions is larger than you would expect from sampling fluctuation alone.

- From this point on, we look only at the difference scores.
- That is, we ignore the original pre and post absolute scores altogether and look only at the differences between time 1 and time 2.
- Of course, our first computation is the mean and estimated standard deviation of the difference scores.

Here is the example we used in learning the computation of the repeated measures t test

S:   A    B    C    D    E
X:  10    8    9   11   12

MSW = SSW/(n - k) = 10.00/4 = 2.50

The null hypothesis in our repeated measures t test

- Theoretically, the null can predict any difference.
- Pragmatically, the null almost always predicts that there will be no change at all from the first to the second measurement: that the average difference between time 1 and time 2 will be 0.00.
- Mathematically, H0: muD = 0.00, where muD is the average difference score.

Does this look familiar?

- We have a single set of difference scores to compare to muT.
- In the single sample t test, we compared a set of scores to muT.
- So the repeated measures t test is just like the single sample t test.
- Only this time our scores are difference scores.

To do a t test, we need the expected mean under the null. We have that: muT = 0.00.

We also need the expected amount of difference between the two means given random sampling fluctuation.

- The expected fluctuation of the difference scores is called the estimated standard error of the difference scores, sD-bar.
- The estimated standard error of the difference scores equals the estimated standard deviation of the difference scores divided by the square root of the number of difference scores.
- It has nD - k = nD - 1 degrees of freedom, where nD is the number of difference scores.
- Here is the formula for sD-bar: sD-bar = sD / sqrt(nD)

The repeated measures t is a version of the single sample t test: t equals the actual average difference between the means, minus their difference under H0, divided by the estimated standard error of the difference scores.

By the way

- Repeated measures designs are the simplest form of related measures designs, in which each participant in each group is related to one participant in each of the other groups.
- The simplest way for participants to be related across groups is to use the same participants in each group.
- But there are other ways. For example, each mouse in a four-condition experiment could have one litter-mate in each of the other conditions.
- But the commonest design is repeated measures, and that is what we will study.

