Measures of Central Tendency - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Measures of Central Tendency


1
Measures of Central Tendency
  • A way to summarize a variable is to identify the
    most typical score.
  • Measures of Central Tendency are different kinds
    of typical score; they represent values around
    which the other scores concentrate.

2
The Mode
  • The mode is the value with the highest frequency
    count.
  • While the mode can be useful, it is not very
    informative, as it only tells us which value
    occurred most frequently.
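  • For illustration, a minimal Python sketch of
    finding the mode (standard-library statistics
    module; the data are hypothetical):

    import statistics
    scores = [2, 3, 3, 4, 3, 5, 2]        # hypothetical sample data
    print(statistics.mode(scores))        # 3, the most frequent value
    print(statistics.multimode(scores))   # [3]; lists every mode when there are ties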

3
The Median
  • The median is the value that marks the midpoint:
    ½ of the values exceed the median and ½ of the
    values fall below it.
  • The median is better than the mode at summarizing
    a variable, but it still tells us little about the
    distribution of values.
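  • A minimal Python sketch of the median (standard-
    library statistics module; hypothetical data):

    import statistics
    scores = [1, 3, 3, 6, 7, 8, 9]          # hypothetical data, sorted for clarity
    print(statistics.median(scores))        # 6, the middle of 7 values
    print(statistics.median([1, 3, 3, 6]))  # 3.0, the mean of the two middle values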

4
The Mean
  • The mean incorporates all of the values for a
    variable! Commonly called the average, it is
    obtained by adding up all the values and dividing
    that sum by the total number of cases.
  • It is usually the most accurate, stable and useful
    measure of central tendency, but it is vulnerable
    to distortion by extremely high or low values.
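  • A minimal Python sketch of the mean and its
    sensitivity to extreme values (hypothetical data):

    import statistics
    incomes = [30, 32, 35, 38, 40]             # hypothetical values
    print(statistics.mean(incomes))            # 35 (sum 175 divided by 5 cases)
    print(statistics.mean(incomes + [300]))    # about 79.2; one extreme value pulls the mean up
    print(statistics.median(incomes + [300]))  # 36.5; the median barely moves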

5
Measures of Variability
  • A quick measure of variability is the range. To
    calculate the range (R) you subtract the lowest
    value (L) for a variable from the highest value
    (H): R = H - L.
  • Although better than nothing, the range is based
    only on the two most extreme scores and ignores
    all the other information about the dispersion in
    the data.
  • The variance is based on the squared deviations of
    each value from the mean and provides us with an
    average amount of dispersion or variability.
    The standard deviation is the square root of the
    variance.
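  • A minimal Python sketch of the range, variance and
    standard deviation (hypothetical data; population
    formulas via pvariance and pstdev):

    import statistics
    values = [4, 8, 6, 5, 3, 2, 8, 9, 2, 5]   # hypothetical sample
    r = max(values) - min(values)             # range: R = H - L = 9 - 2 = 7
    var = statistics.pvariance(values)        # average squared deviation from the mean (5.76)
    sd = statistics.pstdev(values)            # square root of the variance (2.4)
    print(r, var, sd)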

6
(No Transcript)
7
From descriptive to inferential statistics.
  • The normal distribution and z-scores connect
    descriptive statistics to inferential statistics.
  • The normal distribution adds to the
    interpretation of the mean and
    standard deviation, and is the basis for
    statistical estimation, hypothesis testing, and
    measures of association.
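  • A minimal sketch of a z-score, which expresses a
    value as a number of standard deviations above or
    below the mean (the mean and SD here are
    hypothetical):

    mean, sd = 100, 15        # hypothetical population mean and standard deviation
    x = 130                   # an observed value
    z = (x - mean) / sd       # how many SDs the value lies above the mean
    print(z)                  # 2.0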

8
(No Transcript)
9
Why is the normal curve so important in
statistics?
  • A large number of real world variables are
    normally distributed.
  • The sampling distributions of many statistics
    are normally distributed.
  • These results are why researchers can make
    accurate inferences about populations from the
    analysis of samples: they can accurately
    estimate population parameters using sample
    statistics.
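  • A small simulation sketch of this idea: even when
    the population is skewed, the means of repeated
    samples pile up in a roughly normal shape around
    the true population mean (all numbers are
    hypothetical):

    import random, statistics
    random.seed(0)
    population = [random.expovariate(1.0) for _ in range(100_000)]  # deliberately skewed population
    sample_means = [statistics.mean(random.sample(population, 50))
                    for _ in range(2_000)]                          # 2,000 sample means, n = 50 each
    print(statistics.mean(population))      # true population mean (about 1.0)
    print(statistics.mean(sample_means))    # centre of the sampling distribution, very close to it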

10
Key aspects of the normal curve
  • The normal curve is symmetrical or bell-shaped.
  • The mean is also the most frequently occurring
    value (the mode), and the value that splits the
    distribution in half (the median).
  • Assuming a variable is normally distributed in
    the population, we can say much more about the
    standard deviation: fixed proportions of cases
    fall within 1, 2 and 3 standard deviations of
    the mean (roughly 68%, 95% and 99.7%).
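  • A sketch of the proportions of cases within 1, 2
    and 3 standard deviations, computed from the
    normal curve (assumes the scipy library is
    available):

    from scipy.stats import norm
    for k in (1, 2, 3):
        share = norm.cdf(k) - norm.cdf(-k)   # area within k standard deviations of the mean
        print(k, round(share, 4))            # roughly 0.6827, 0.9545, 0.9973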

11
(No Transcript)
12
From Statistical Estimation to Confidence
Intervals.
  • Probability theory states that in repeated
    sampling, the distribution of sample means will
    be normally distributed with a mean equal to the
    true population mean.
  • Thus, we can define a range of values within
    which the true population mean is likely to be.
  • And we can estimate the probability that our
    range includes the population mean; when we do
    this we have calculated a confidence interval.
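  • A minimal sketch of such an interval, using the
    large-sample value z = 1.96 for 95% confidence
    (hypothetical data; with a small sample a t
    critical value would normally be used instead):

    import math, statistics
    sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]  # hypothetical measurements
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))     # standard error of the mean
    lo, hi = mean - 1.96 * se, mean + 1.96 * se                # approximate 95% confidence interval
    print(round(lo, 2), round(hi, 2))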

13
(No Transcript)
14
(No Transcript)
15
(No Transcript)
16
(No Transcript)
17
(No Transcript)
18
Hypothesis Testing: t-tests
  • If we are interested in seeing if the means or
    proportions of a variable differ between two
    groups, the t-ratio or t-test is our statistic of
    choice.
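  • A minimal sketch with two hypothetical groups
    (assumes scipy is available):

    from scipy.stats import ttest_ind
    group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]   # hypothetical scores, group A
    group_b = [4.2, 4.5, 4.1, 4.7, 4.3, 4.4]   # hypothetical scores, group B
    t, p = ttest_ind(group_a, group_b)         # independent-samples t-test
    print(t, p)                                # a small p-value suggests the means differ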

19
(No Transcript)
20
(No Transcript)
21
Hypothesis Testing: ANOVA
  • If you are asked to look for significant
    differences in the means of three or more groups
    your method of choice is analysis of variance
    (ANOVA).
  • SSt = Total Sum of Squares = total variation in
    the data.
  • SSb = Between-Group Sum of Squares = between-group
    variation in the data.
  • SSw = Within-Group Sum of Squares = within-group
    variation in the data.
  • The F-ratio represents the size of the
    differences between the groups we are comparing
    relative to the size of the differences within
    the groups we are comparing.
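  • A minimal sketch with three hypothetical groups
    (assumes scipy is available):

    from scipy.stats import f_oneway
    g1 = [23, 25, 21, 22, 24]     # hypothetical group 1 scores
    g2 = [30, 28, 31, 29, 32]     # hypothetical group 2 scores
    g3 = [24, 26, 25, 27, 23]     # hypothetical group 3 scores
    f, p = f_oneway(g1, g2, g3)   # F compares between-group to within-group variation
    print(f, p)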

22
(No Transcript)
23
Hypothesis Testing: Chi-square
  • When you are testing for differences with
    variables at the nominal or ordinal level, the
    Chi-square statistic is appropriate.
  • Chi-square tests if the difference between
    observed frequencies (fo) and expected
    frequencies (fe) is so large it cannot be due to
    chance or random sampling error.
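  • A minimal sketch using a hypothetical 2x2 table of
    observed frequencies (assumes scipy is available):

    from scipy.stats import chi2_contingency
    observed = [[30, 10],   # hypothetical observed frequencies (fo)
                [20, 40]]
    chi2, p, dof, expected = chi2_contingency(observed)  # expected holds the fe values
    print(chi2, p)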

24
(No Transcript)
25
(No Transcript)
26
(No Transcript)
27
(No Transcript)
28
(No Transcript)
29
CORRELATION & REGRESSION
  • The higher the correlation between X and Y, the
    better our predictions of Y using X will be.
  • The value r² measures how much better our
    predictions of Y using X are. If we square
    Pearson's r = .85, we obtain r² = .72. This is
    the coefficient of determination and means that
    if we use X to predict Y we will improve the
    accuracy of our predictions by 72%.
  • The coefficient of non-determination is 1- r²,
    and it is the percentage of variability in Y not
    explained by X (due to other causal variables).
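  • A minimal sketch of r and r² with hypothetical X
    and Y values (assumes scipy is available):

    from scipy.stats import pearsonr
    x = [1, 2, 3, 4, 5, 6, 7, 8]                    # hypothetical predictor values
    y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]    # hypothetical outcome values
    r, p = pearsonr(x, y)
    print(r, r**2)    # r**2 is the coefficient of determination; 1 - r**2 the non-determination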