Transcript and Presenter's Notes

Title: Practical Significance Chapter 7 FBS


1
Practical Significance Chapter 7 - FBS
  • Presented by
  • Cynthia Vira
  • Guillermo Pro

2
Significance: What Does It Mean?
  • Research hypothesis: what we want or expect to
    happen.
  • Null hypothesis: the hypothesis that is actually
    tested in inferential statistics, predicting zero
    difference or zero relationship.
  • NHSST: null hypothesis statistical significance
    testing, the inferential process of deciding
    whether to reject or fail to reject the null
    hypothesis.
  • Once we have our data, we can make a mistake!
    It is always conceivable that we draw an unusual,
    fluky sample from a population in which the null
    is true, and that the aberrant sample is so
    unlikely that we erroneously decide to reject the
    null hypothesis. Because we are using sample data
    rather than population data, all we can do is
    attempt to minimize (but never eliminate)
    erroneous decisions (see the sketch at the end of
    this slide).
  • What do the authors say? Indices of practical
    significance should be reported and interpreted.
    The Task Force recommendations shaped the
    reporting expectations in the revised APA (2001)
    Publication Manual, used by more than 1,000
    journals, although it has been argued that the
    Manual should have gone further in promulgating
    expectations for changed reporting and
    interpretation practices.

3
Effect Sizes
  • What is an effect size? A statistic quantifying
    the extent to which sample statistics diverge
    from the null hypothesis.
  • The APA Task Force on Statistical Inference urged
    authors to "always provide some effect-size
    estimate when reporting a p value."
  • Furthermore, "reporting and interpreting effect
    sizes in the context of previously reported
    effects is essential to good research."
  • Failure to report effect sizes is "a defect in
    the design and reporting of research."
  • Textbooks still emphasize NHSST over effect sizes
    (Capraro & Capraro, 2002).
  • p values: the p value is a random variable that
    varies from sample to sample; consequently, it is
    not appropriate to compare the p values from two
    distinct experiments, or from tests on two
    variables measured in the same experiment, and
    declare that one is more significant than the
    other (see the sketch at the end of this slide).
  • Three classes of effect sizes
  • a. uncorrected, standardized difference effect
    sizes
  • b. uncorrected, variance-accounted-for effect
    sizes
  • c. corrected, variance-accounted-for effect
    sizes
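
  Sketch (not part of the original slides): a minimal Python illustration of
  the p value behaving as a random variable, assuming NumPy and SciPy. The
  populations and sample sizes are invented; the true effect never changes,
  yet the p value does.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=1)

    for i in range(5):
        control = rng.normal(loc=50.0, scale=10.0, size=30)
        treatment = rng.normal(loc=55.0, scale=10.0, size=30)
        result = stats.ttest_ind(treatment, control)
        print(f"sample {i + 1}: t = {result.statistic:.2f}, "
              f"p = {result.pvalue:.4f}")
    # The p values differ noticeably across samples, so declaring one study
    # "more significant" than another based on p alone is not appropriate.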

4
Uncorrected, Standardized Difference Effect Sizes
and Uncorrected, Variance-Accounted-for Effect
Sizes
  • Remember: statistical significance is not a
    viable vehicle for testing result replicability.
  • When comparing results across studies that use
    different metrics, researchers rely on two popular
    standardized difference effect sizes: Glass's
    delta and Cohen's d. Both divide the difference
    between the experimental and control group means
    by some estimate of the standard deviation (sigma)
    of the outcome variable (see the sketch at the end
    of this slide).
  • Uncorrected effect sizes for various research
    designs can also be computed as analogs of r².
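
  Sketch (not part of the original slides): minimal Python computations of
  Glass's delta, Cohen's d, and an r²-style (variance-accounted-for) analog,
  assuming NumPy and SciPy. The scores are invented for illustration.

    import numpy as np
    from scipy import stats

    experimental = np.array([12.0, 14.0, 11.0, 15.0, 13.0, 16.0])
    control = np.array([10.0, 11.0, 9.0, 12.0, 10.0, 11.0])
    mean_diff = experimental.mean() - control.mean()

    # Glass's delta: mean difference divided by the control group's SD.
    glass_delta = mean_diff / control.std(ddof=1)

    # Cohen's d: mean difference divided by a pooled SD.
    n1, n2 = len(experimental), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * experimental.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    cohen_d = mean_diff / pooled_sd

    # An uncorrected variance-accounted-for analog (eta squared) from t.
    t = stats.ttest_ind(experimental, control).statistic
    eta_squared = t ** 2 / (t ** 2 + (n1 + n2 - 2))

    print(f"Glass's delta = {glass_delta:.2f}, Cohen's d = {cohen_d:.2f}, "
          f"eta squared = {eta_squared:.2f}")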

5
Corrected, Variance-Accounted-for Effect Sizes
  • Why "corrected"? All samples have some degree of
    flukiness, and the only way to eliminate it is to
    collect population rather than sample data. Some
    of the score variance (SD²) in the sample data
    reflects true variance that exists in the
    population, but some of the sample SD² does not
    exist in the population and is called sampling
    error variance.
  • Sampling error variance inflates effect sizes.
    Uncorrected statistics such as r², Spearman's ρ²,
    and R² do not take into consideration the fact
    that some of the sample score variance does not
    exist in the population and is instead peculiar
    to the given sample (see the sketch at the end of
    this slide).
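
  Sketch (not part of the original slides): one common corrected,
  variance-accounted-for estimate is the adjusted R² (often attributed to
  Ezekiel). The minimal Python function below shows how the correction
  shrinks an uncorrected R²; the numbers are invented for illustration.

    def adjusted_r_squared(r_squared: float, n: int, k: int) -> float:
        """Shrink R² for sampling error; n = sample size, k = predictors."""
        return 1.0 - (1.0 - r_squared) * (n - 1) / (n - k - 1)

    uncorrected = 0.30                      # R² observed in the sample
    corrected = adjusted_r_squared(uncorrected, n=30, k=3)
    print(f"uncorrected R² = {uncorrected:.3f}, corrected R² = {corrected:.3f}")
    # The corrected value is smaller because part of the sample variance is
    # sampling error variance that does not exist in the population.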

6
Interpretation of Effect Sizes
  • Interpreting effect sizes with rigidity (for
    example, Cohen's "small," "medium," and "large"
    benchmarks) "would merely be being stupid in
    another metric."
  • There is no wisdom whatsoever in attempting to
    associate regions of the effect-size metric with
    descriptive adjectives such as "small," "medium,"
    "large," and the like.
  • What may be small to a sociologist may be
    appraised as medium by a clinical psychologist.

7
Confidence Intervals
  • Investigators would be misled less often if they
    based their conclusions on confidence intervals
    around estimated values rather than on a test of
    a single null hypothesis against all other
    possible alternatives.
  • Empirical studies of journals show that
    confidence intervals are published very
    infrequently.

8
Three Misconceptions About Confidence Intervals
  • 1. Researchers consciously or unconsciously
    interpret confidence intervals constructed with
    large confidence levels (90% to 99%) as near
    certainties; but being close to 100% never means
    that a given interval is correct 100% of the
    time.
  • 2. Some researchers erroneously believe that CIs
    are just null hypothesis statistical significance
    tests in a different guise. In fact, you can
    construct a CI completely absent a null
    hypothesis, but you cannot do NHSST without one.
  • 3. Some researchers interpret a CI in a given
    study as if they were X% certain that their
    particular single interval captured the population
    parameter. But the certainty level involved in
    constructing a given sample CI applies to the
    construction of infinitely many CIs drawn from a
    population, not to the single CI constructed from
    a single sample (see the sketch at the end of
    this slide).
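
  Sketch (not part of the original slides): a minimal Python simulation of the
  third misconception, assuming NumPy and SciPy and an invented population.
  Across many samples, about 95% of the constructed intervals capture the
  true mean, but any single interval either does or does not.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=2)
    true_mean, n, level, trials = 100.0, 25, 0.95, 2_000

    covered = 0
    for _ in range(trials):
        sample = rng.normal(loc=true_mean, scale=15.0, size=n)
        lo, hi = stats.t.interval(level, df=n - 1,
                                  loc=sample.mean(), scale=stats.sem(sample))
        covered += lo <= true_mean <= hi

    print(f"{covered / trials:.3f} of the intervals captured the true mean")
    # Close to 0.95 in the long run; the confidence level describes the
    # procedure, not the single CI computed from one sample.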

9
Confidence Intervals for Effect Sizes
  • If effect sizes are essential, and confidence
    intervals are the best reporting strategy, then
    the marriage of CIs and effect sizes seems
    appealing.
  • The problem with computing a confidence interval
    for an effect size is that a different formula
    for the effect CI has to be used for each of the
    infinitely many possible values of the effect
    size (see the sketch at the end of this slide).
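
  Sketch (not part of the original slides): one practical workaround is a
  percentile-bootstrap confidence interval around Cohen's d, shown below in
  minimal Python with invented data and NumPy assumed. It sidesteps the
  effect-size-specific formulas the slide refers to; intervals based on
  noncentral distributions are the more exact approach.

    import numpy as np

    rng = np.random.default_rng(seed=3)
    experimental = rng.normal(loc=55.0, scale=10.0, size=40)
    control = rng.normal(loc=50.0, scale=10.0, size=40)

    def cohen_d(a, b):
        pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                          (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / pooled

    # Resample both groups with replacement and recompute d each time.
    boot = [cohen_d(rng.choice(experimental, size=len(experimental), replace=True),
                    rng.choice(control, size=len(control), replace=True))
            for _ in range(5_000)]

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"d = {cohen_d(experimental, control):.2f}, "
          f"95% CI [{lo:.2f}, {hi:.2f}]")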