Writing pre data - PowerPoint PPT Presentation

Learn more at: http://www.sportsci.org

Transcript and Presenter's Notes

1
  • If you are viewing this slideshow within a
    browser window, select File/Save as from the
    toolbar and save the slideshow to your computer,
    then open it directly in PowerPoint.
  • When you open the file, use the full-screen view
    to see the information on each slide build
    sequentially.
  • For full-screen view, click on this icon at the
    lower left of your screen.
  • To go forwards, left-click or hit the space bar,
    PgDn or → key.
  • To go backwards, hit the PgUp or ← key.
  • To exit from full-screen view, hit the Esc
    (escape) key.

2
Clinically or Practically Decisive Sample Sizes
  • Will G Hopkins, Sport and Recreation, AUT
    University, Auckland NZ

General Principles: sample vs population; ethics;
effects of effect magnitude, design, validity,
reliability. Approaches to Sample-Size Estimation:
what others have used; statistical significance;
precision of estimation; clinical decisiveness.
3
General Principles
  • We study an effect in a sample, but we want to
    know about the effect in the population.
  • The larger the sample, the closer we get to the
    population.
  • Too large is unethical, because it's wasteful.
  • Too small is unethical, because the effect won't
    be clear.
  • And you are less likely to get your study
    published.
  • But meta-analysis of several such studies leads
    to a clear outcome, so small-scale studies should
    be published.
  • The bigger the effect, the smaller the sample you
    need to get a clear effect.
  • So start with a smallish sample, then add more if
    necessary.
  • But this approach may overestimate the effect.

4
More General Principles
  • Sample size depends on the design.
  • Cross-sectional studies (case-control,
    correlational) usually need hundreds of subjects.
  • Controlled-trial interventions usually need
    scores of subjects.
  • Crossover interventions usually need 10 or so
    subjects.
  • Sample size depends on the validity (for
    cross-sectional studies) and reliability (for
    trials).
  • Different approaches to estimation of sample size
    give different estimates of sample size.
  • Traditional approach 1: what others have used.
  • Traditional approach 2: statistical
    significance.
  • Newer approach: acceptable precision of
    estimation.
  • Newest approach: clinical decisiveness.

5
Traditional Approach 1: Use What Others Have Used
  • No-one will believe your study in isolation, no
    matter what the sample size.
  • A meta-analyst will combine your study with
    others, so...
  • You might as well use the sample size that others
    have used, because
  • If the journal editors accepted their studies,
    they should accept yours.
  • But your measurements need to be comparable to
    what others have used.
  • Example: if your measure is less reliable, your
    outcome will be less clear unless you use more
    subjects.

6
Traditional Approach 2: Statistical Significance
  • You need enough subjects to "detect" (get
    statistical significance for) the smallest
    important effect most of the time.
  • You set a Type I error rate: the chance of
    detecting a null effect (5%)
  • and a Type II error rate: the chance of missing
    the smallest effect (20%)
  • or power: the chance of detecting the smallest
    effect = 100% − 20% = 80%.
  • Problem: statistical non/significance is easy to
    misinterpret.
  • Problem: this approach leads to large sample
    sizes.
  • Example: 800 subjects for a case-control study to
    detect a standardized (Cohen) effect size of 0.2
    or a correlation of 0.1.
  • Samples are even larger if you keep the overall
    p<0.05 for multiple effects (the problem of
    inflation of the Type I error).
  • Smaller samples give clinically or practically
    decisive outcomes in our discipline.

7
Statistical Significance: How It Works
  • The Type I error rate (5%) defines a critical
    value of the statistic.
  • If the observed value > the critical value, the
    effect is significant.
  • When the true value = the smallest important
    value, the Type II error rate (20%) = the chance
    of observing non-significant values.
  • Solve for the sample size (via the critical
    value).
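The steps above can be sketched with the usual normal-approximation power formula for a two-group comparison of standardized means (a hypothetical helper, not Hopkins' own calculation; an exact t-based version gives slightly larger numbers):

```python
from statistics import NormalDist

def n_per_group_significance(d, alpha=0.05, power=0.80):
    """Subjects per group needed to detect a standardized effect d
    with a two-sided Type I error rate alpha and the given power,
    using the normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile corresponding to the power
    # SE of a standardized mean difference is sqrt(2/n), so solve
    # (z_alpha + z_beta) * sqrt(2/n) = d for n:
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

# Smallest important standardized (Cohen) effect size of 0.2:
n = n_per_group_significance(0.2)
print(round(n), round(2 * n))  # ~392 per group, ~785 in total
```

The total of roughly 785 matches the slide's "800 subjects" for detecting an effect size of 0.2.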

8
Newer Approach: Acceptable Precision of Estimation
  • Many researchers now report precision of
    estimation using confidence limits.
  • Confidence limits define a range within which the
    true value of the effect is likely to be.
  • Therefore they should justify sample size in
    terms of achieving acceptable confidence limits.
  • My rationale: if you observe a zero effect, the
    range shouldn't include substantial positive (or
    beneficial) and substantial negative (or harmful)
    values.
  • Gives half the traditional sample sizes, for 95%
    confidence limits.
  • But why 95%? 90% can be acceptable and leads to
    one-third the traditional sample size.
  • The calculations are simple; I won't explain them
    here.
  • This approach is appropriate for studies of
    mechanisms.
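The rationale above amounts to requiring the confidence-interval half-width to equal the smallest important effect, so that a zero observed effect excludes substantial values in both directions. A minimal normal-approximation sketch (hypothetical helper, standardized units):

```python
from statistics import NormalDist

def n_per_group_precision(d, conf=0.95):
    """Subjects per group so that the confidence-interval half-width
    for a two-group standardized mean difference equals d, the
    smallest important effect (normal approximation)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # Half-width = z * sqrt(2/n); set it equal to d and solve for n:
    return 2 * z ** 2 / d ** 2

print(round(n_per_group_precision(0.2, 0.95)))  # ~192 per group
print(round(n_per_group_precision(0.2, 0.90)))  # ~135 per group
```

Against the ~392 per group from the significance-based calculation, 95% limits give about half the sample and 90% limits about one-third, as the slide states.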

9
Newest Approach: Clinical Decisiveness
  • You do a study to decide whether an effect is
    clinically or practically useful or important.
  • You can make two kinds of clinical error with
    your decision:
  • Type 1: you decide to use an effect that in
    reality is harmful.
  • Type 2: you decide not to use an effect that in
    reality is beneficial.
  • You need a big enough sample to keep the rates of
    these errors acceptably low.
  • "Acceptably low" will depend on how good the
    benefit is and how bad the harm is.
  • Default: 1% for Type 1 and 20% for Type 2.
  • Leads to sample sizes a bit less than one-third
    those based on statistical significance.

10
Clinical Decisiveness: How It Works, Version 1
  • The Type 1 and 2 error rates are defined by a
    decision value.
  • If the true value = the smallest harmful value,
    and the observed value > the decision value, you
    will use the effect in error (rate = 1%, say).
  • If the true value = the smallest beneficial
    value, and the observed value < the decision
    value, you will not use the effect in error
    (rate = 20%, say).
  • Now solve for the sample size (and the decision
    value).
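Solving those two conditions simultaneously fixes both the required standard error (hence the sample size) and the decision value. A normal-approximation sketch in standardized units, assuming the smallest beneficial and smallest harmful effects have the same magnitude d (the function name is hypothetical):

```python
from statistics import NormalDist

def clinical_sample_size(d, type1=0.01, type2=0.20):
    """Per-group sample size and decision value c such that
    P(observed > c | true = -d) = type1 and
    P(observed < c | true = +d) = type2 (normal approximation)."""
    z = NormalDist().inv_cdf
    # The two conditions give c = -d + z(1-type1)*SE and
    # c = d - z(1-type2)*SE; equate them and solve for SE:
    se = 2 * d / (z(1 - type1) + z(1 - type2))
    c = -d + z(1 - type1) * se
    # SE of a standardized mean difference is sqrt(2/n):
    n_per_group = 2 / se ** 2
    return n_per_group, c

n, c = clinical_sample_size(0.2)
print(round(2 * n), round(c, 3))  # ~251 in total, decision value ~0.094
```

A total of roughly 250 subjects is indeed a bit less than one-third of the ~785 from the significance-based calculation.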

11
How It Works, Version 2
  • This approach may be easier to understand,
    because it doesn't involve "if the true value is
    the smallest worthwhile".
  • Instead, it's just: "the worst-case scenario is
    chances of Type 1 and 2 errors of 1% and 20%
    (say), which occurs when the observed value is
    the decision value."
  • Put the observed value on the decision value.
  • Work out the chances that the true effect is
    harmful and beneficial. You want these to be 1%
    and 20%.
  • You need to draw a different diagram for this
    scenario.
  • Solve for the sample size (and the decision
    value).
  • This approach gives the same answer, of course.
  • Work at it until you understand it!
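One way to see that the two versions agree: take the version-1 numbers for a smallest important effect of d = 0.2 (standard error ≈ 0.126, decision value ≈ 0.094, from the normal approximation), put the observed value on the decision value, and check the chances that the true effect is harmful or beneficial (a hypothetical check, again using the normal approximation):

```python
from statistics import NormalDist

# Worst case: the observed value sits exactly on the decision value c.
# Treat the true effect as normally distributed around the observation.
d = 0.2          # smallest important (beneficial/harmful) effect
se = 0.12626     # required standard error from the version-1 calculation
c = 0.09373      # decision value from the version-1 calculation

p_harm = NormalDist(c, se).cdf(-d)        # chance the true effect < -d
p_benefit = 1 - NormalDist(c, se).cdf(d)  # chance the true effect > +d
print(round(p_harm, 3), round(p_benefit, 2))  # ~0.01 and ~0.20
```

The recovered error rates are the 1% and 20% that were specified, so the same sample size falls out of either scenario.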

12
Conclusions
  • You can justify sample size using adequate
    precision or acceptable rates of clinical errors.
  • Both make more sense than sample size based on
    statistical significance and lead to smaller
    samples.
  • HOWEVER
  • These sample sizes are for the population mean
    effect.
  • If there are substantial individual responses,
    the precision or clinical error rates for an
    individual will be different:
  • "very unlikely" may become "unlikely" or even
    "possible".
  • Your decision for the individual will therefore
    change.
  • So you need a sample size large enough to
    characterize individual responses adequately.
  • I'm thinking about it.

13
This presentation was downloaded from
See Sportscience 10, 2006