The Basics: Evidence-Based Practice for Physical Therapists and Physical Therapist Assistants, Online Training Module - PowerPoint PPT Presentation

About This Presentation
Title: The Basics: Evidence-Based Practice for Physical Therapists and Physical Therapist Assistants, Online Training Module
Description: (Guyatt and Rennie, 2002) * Evidence Search: Sources of Information PEDro (Physiotherapy Evidence Database): Per the website, [PEDro] has been ...
Slides: 179
Provided by: medUncEd9
Learn more at: http://www.med.unc.edu
Transcript and Presenter's Notes

Title: The Basics: Evidence-Based Practice for Physical Therapists and Physical Therapist Assistants, Online Training Module


1
The Basics: Evidence-Based Practice for Physical
Therapists and Physical Therapist Assistants,
Online Training Module
  • Jessica Lambeth, MPT
  • jessicamlambeth@yahoo.com

2
Module Purpose: The purpose of this online
training module is to share the basics of
evidence-based practice (EBP). This module
focuses on general concepts of EBP, with clinical
scenarios related to school-based physical
therapy.
3
Module Purpose, continued: The main emphasis
of this module is not to promote certain
treatments or tests and measures, or to tell you
what the recent literature reveals about
pediatric physical therapy practices. As will
be discussed in the coming slides, the research
answer alone is not the correct answer. In
addition, EBP emphasizes the necessity of
learning to perform searches and evaluate the
material independently, in relation to the
clinician's and patient's current circumstances.
While initially time consuming, the results are
worthwhile.
4
Module Purpose, continued: For those without a
recent EBP background or training, this module
will not result in immediate efficiency in
literature searches, statistical interpretation,
etc. Hopefully it will, however, encourage more
questions and motivation for EBP. It will also
enable the NC DPI to focus educational sessions
on meeting needs in the area of EBP and to gauge
the current knowledge of school-based PTs and
PTAs. Your feedback and comments will help to
plan future learning opportunities and
resources.
5
Pretest: Please stop here and complete the
pretest if you have not already done so. Please
also keep track of your time, as directed in the
instructions. Also, "I don't know" answers
have been included to ensure that we receive
appropriate feedback. If you really do not know
the answer, in the pretest or the posttest,
please indicate that in your response. This will
greatly assist in providing necessary learning
opportunities in the future, as well as help us
to evaluate the effectiveness of the online
module format and material.

6
References: The content of this EBP module is
largely based on curriculum from courses within
the transitional Doctor of Physical Therapy
program at Rocky Mountain University of Health
Professions in Provo, Utah (web address:
www.rmuohp.edu).
7
References, continued: Much of the material
within this online training module can be found
in the following texts:

Guyatt G, Rennie D. Users' Guides to the Medical Literature: Essentials of Evidence-Based Clinical Practice. Chicago: AMA Press; 2002.

Jaeschke R, Singer J, Guyatt GH. Measurement of health status: ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10:407-15.

Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. 2nd ed. Upper Saddle River, NJ: Prentice-Hall Inc; 2000.

Rothstein JM. Autonomous Practice or Autonomous Ignorance? (Editorial) Physical Therapy. October 2001;81(10).
8
References, continued: Much of the material
within this online training module can be found
in the following texts:

Straus SE, Richardson WS, Glasziou P, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 3rd ed. Edinburgh: Churchill Livingstone; 2005.

All clipart came from Microsoft Office: http://office.microsoft.com/en-us/clipart/default.aspx. Accessed 05/09/2009.

9
Module Outline: 1) What is EBP?, Slides 9-22;
2) Statistics Review: Basic Research, Slides
23-115; 3) How to Search for EBP, Slides
116-152; 4) How to Interpret Research Related to
EBP, Slides 153-174
10
  • Section 1 Outline
  • What is Evidence-Based Practice?
  • Evidence-Based Practice: General Introduction,
    10-12
  • Survey Results (NC school-based PTs' view of
    EBP), 13-18
  • Guiding Steps to Practice EBP, 19-20
  • Two Fundamental Principles of EBP, 21
  • Best Research Evidence, 22

11
  • Evidence-Based Practice
  • General Introduction
  • EBP is the integration of the best research
    evidence, clinical expertise, and the patient's
    values and circumstances
  • Best Research Evidence: valid and clinically
    relevant research with a focus on
    patient-centered clinical research
  • Clinical Expertise: use of clinical skills and
    experiences
  • Patient's Values and Circumstances: the
    patient's unique preferences, concerns, and
    expectations in his or her setting
  • (Straus et al, 2005)

12
  • Evidence-Based Practice
  • General Introduction
  • EBP is Not:
  • Focused only on research studies
  • Only to be used or understood by professionals
    who routinely participate in research studies
  • A discouragement from trying new treatments
  • → There may be little or no research on a
    particular topic, or studies with small sample
    sizes may have lacked the power to demonstrate
    statistical significance (as later explained in
    the statistics section)

13
Evidence-Based Practice: General
Introduction. Because RCTs are so difficult, we
will always have areas that lack evidence; we
will need to find other credible research
approaches to supply evidence. Keep in mind that
an absence of evidence is different from negative
evidence. An absence of evidence is not an
excuse to ignore the growing body of data
available to guide practice. (RCT = randomized
clinical or controlled trial) (Rothstein,
2001)
14
  • Survey Results
  • Survey responses from North Carolina School-Based
    PTs/PTAs about EBP
  • Survey distributed at the NC Exceptional
    Children's Conference, Nov 2008
  • 41 Respondents (www.med.unc.edu/ahs/physical/schoolbasedpt
    for detailed results)
  • 73% had participated in a conference or course
    on the use of EBP in the last 5 years
  • Of those that participated in a course,
    respondent data suggests the course changed their
    view of EBP, but their use and practice
    of EBP was changed to a lesser degree.

15
  • Survey Results, Continued
  • Highest frequency response as to why we should
    use EBP → positive impact on our clinical
    practice
  • 39 of 41 respondents agreed that EBP is
    relevant and necessary for PTs in the school
    system (2 left that question blank)
  • → Highest frequency responses as to why it is
    relevant and necessary: a focus on EBP results
    in improved clinical practice and provides
    validation and justification for our role as
    school-based PTs/PTAs

16
  • Survey Results, Continued
  • When respondents were asked if they were
    comfortable searching for and using EBP:
  • Yes: 17 (41%),
  • No: 18 (44%),
  • No Response: 6 (15%)
  • The Internet was the most frequently used
    source for searching EBP, but continuing
    education courses ranked highest for preference.

17
  • Survey Results, Continued
  • A majority of the respondents were comfortable
  • → Determining the level of evidence and
    interpreting the author's conclusions
  • A majority of the respondents were not
    comfortable
  • → Interpreting statistics, even though
    statistical knowledge is often helpful to
    evaluate conclusions drawn by the author

18
  • Survey Results, Continued
  • The primary barriers to searching and using EBP,
    as listed by NC school PTs/PTAs:
  • Time
  • Access
  • Uncertain how to search for EBP
  • Factors that enhance the search and use of EBP,
    as listed by NC school PTs/PTAs:
  • Additional time
  • Education on the appropriate use of EBP
  • Education in how to access EBP resources

19
  • Survey Results, Continued
  • Questions generated from the survey:
  • How to efficiently and effectively increase the
    knowledge and practice of EBP by NC school
    PTs/PTAs?
  • How to address barriers to using and searching
    for EBP?
  • How to enhance use and access of EBP in
    school-based practice?
  •  

20
  • Guiding Steps to Practice EBP
  • Analyze what we know and what we do not know, in
    relation to improving our clinical practice.
    Form answerable questions to address any gaps in
    our knowledge.
  • Search for and find the best research evidence to
    address our questions.
  • Critically appraise the information, based on its
    validity, impact or size of effect, and
    applicability.
  • (Straus et al, 2005)

21
  • Guiding Steps to Practice EBP
  • Integrate information gathered from the best
    research evidence with clinical expertise and the
    patient's values and circumstances
  • Evaluate the effectiveness of any intervention
    taken based on steps 1-4, and the effectiveness
    and efficiency of the process
  • (Straus et al, 2005)

22
  • Two Fundamental Principles of EBP
  • Evidence alone is never sufficient to make a
    clinical decision (page 8)
  • Consider risks and benefits, costs,
    inconvenience, alternative treatment strategies,
    patient preferences/values and circumstances.
  • EBM posits a hierarchy of evidence to guide
    clinical decision making (page 8)
  • Not all research is equal in terms of relevance
    and statistical support; however, that does not
    mean lower-level evidence is not worthwhile.
  •  (Guyatt and Rennie use the term Evidence-Based
  • Medicine, EBM)
  •  (Guyatt and Rennie, 2002)

23
  • Best Research Evidence
  • The three sections that follow will focus on
    the best research evidence branch of Straus'
    three components of EBM. (Straus et al, 2005)
  • Best research evidence is not more important
    than the other two branches; it is prominent in
    this module because knowledge concerning clinical
    expertise and patient values and expectations
    will vary from situation to situation.
  • Section 2 (Statistics Review, Basic Research)
    will provide the background information necessary
    to perform effective searches and interpret the
    best research evidence (Sections 3 and 4).

24
  • Section 2 Outline
  • Statistics Review, Basic Research
  • Types of Research, 25-33
  • Hierarchy of Evidence, 34-35
  • Variables, 36-44
  • Measurement Validity
  • Types, 45-48
  • Statistics, 49-67
  • Sensitivity and Specificity
  • Positive and Negative Predictive Value
  • Positive and Negative Likelihood Ratios
  • Receiver Operating Characteristic (ROC) Curve
  • Responsiveness to Change
  • Effect Size versus MCID

25
  •  Section 2 Outline, continued
  • Statistics Review, Basic Research, continued
  • Measurement Reliability, 68-72
  • Descriptive Statistics, 73-83
  • Frequency and Shape of Distribution
  • Central Tendency Measures
  • Measures of Variability
  • Inferential Statistics, 84-115
  • Probability
  • Sampling Error
  • Confidence Intervals
  • Hypothesis Testing
  • Power
  •  

26
  • Types of Research
  • Nonexperimental (Observational)
  • Descriptive or Exploratory
  • No control or manipulation of variables
  • Examines populations and relationships
  • Experimental
  • Researcher manipulates/controls variable(s)
  • Comparison of interventions or groups, examines
    cause and effect
  • (Portney and Watkins, 2000)

27
Types of Research Portney and Watkins suggest
viewing various designs as a continuum of
research with a descriptive, exploratory, or
experimental purpose. Certain designs may
include elements of more than one classification.
(Portney and Watkins, 2000)
28
  • Descriptive Research
  • Examples
  • Case Study: Description of one or more
    patients, may document unusual conditions or
    response to intervention
  • Developmental Research: Examines patterns of
    growth and change, or documents natural history
    of a disease or condition
  • Normative Research: Establishes typical values
    for specific variables
  • (Portney and Watkins, 2000)

29
  • Descriptive Research, continued
  • Examples
  • Qualitative Research: Collection of
    subjective, narrative information (rather than
    quantitative, numerical data) in an effort to
    better understand experiences
  • Evaluation Research: Assessment of a program
    or policy, often by the collection of descriptive
    information through surveys or questionnaires
  • (Portney and Watkins, 2000)

30
  • Exploratory Research
  • Examples
  • Correlational Methods: Examines relationships
    between areas of interest, may be used to predict
    or suggest, but cannot offer cause and effect
  • Cohort and Case-Control Studies: Used often in
    epidemiological research to describe and predict
    risks for certain conditions
  • Methodological Studies: Used to evaluate the
    validity and reliability of measuring instruments
  • Historical Research: Use of archives or other
    records to reconstruct the past to generate
    questions or suggest relationships of historical
    interest
  • (Portney and Watkins, 2000)

31
  • Experimental Research
  • Example
  • Randomized Clinical or Controlled Trial (RCT):
    In general, a clinical treatment, or experimental
    condition, is compared to a control condition,
    often a placebo but in some cases an alternative
    treatment, where subjects are randomly assigned
    to a group.
  • (Portney and Watkins, 2000)

32
  • Experimental Research, continued
  • Examples
  • Single-Subject Design: Variation of RCT, study
    of an individual over time with repeated
    measurement and determined design phases (Portney
    and Watkins, 2000)
  • In an N-of-1 RCT, a single individual receives
    alternating treatment and placebo or alternative
    treatment, with the patient and the assessor
    blinded to intervention allocation. Objective or
    subjective measures are then recorded during the
    allocation periods. (Guyatt and Rennie, 2002)

33
  • Experimental Research, continued
  • Examples
  • Sequential Clinical Trial: Variation of RCT,
    technique that allows for the continuous analysis
    of data as it becomes available, does not require
    a fixed sample
  • Quasi-Experimental Research: Comparative
    research in which subjects cannot be randomly
    assigned to a group, or control groups cannot be
    used. Lower level of evidence than RCTs.
  • (Portney and Watkins, 2000)

34
  • Experimental Research, continued
  • Examples
  • Systematic Review: Combination of several
    studies with the same or similar variables, in
    which the studies are summarized and analyzed
    (Guyatt and Rennie, 2002)
  • Meta-analysis: Statistical combination of the
    data from several studies with the same or
    similar variables, to determine an overall
    outcome (Portney and Watkins, 2000; Guyatt and
    Rennie, 2002)

35
  • Hierarchy of Evidence for Treatment Decisions
  • Greatest (Top) to Least (Bottom)
  • N of 1 randomized controlled trial
  • Systematic review of randomized trials
  • Single randomized trial
  • Systematic review of observational studies
    addressing patient-important outcomes
  • Single observational study addressing
    patient-important outcomes
  • Physiological studies (studies of blood
    pressure, cardiac output, exercise capacity, bone
    density, and so forth)
  • Unsystematic clinical observations
  • A meta-analysis is often considered higher than
    a systematic review
    (Guyatt and Rennie, 2002)

36
  • Hierarchy of Evidence
  • Ideally, evidence from individual studies would
    be compiled or synthesized into systematic
    reviews, with that information succinctly
    consolidated into easily and quickly read
    synopses. All relevant information would be
    integrated and linked to a specific patient's
    circumstance. The medical search literature is
    still far from this, but is working toward that
    goal. Efforts include clinical prediction
    guidelines and the APTA's emphasis on EBP.
  • (Straus et al, 2005)

37
  • Variables
  • Variable: a characteristic that can be
    manipulated or observed
  • Types of Variables:
  • Independent or Dependent
  • Measurement Scales/Levels
  • → Classification is useful for communication, so
    that readers are aware of the author's hypothesis
    of what situation or intervention (independent
    variable) will predict or cause a given outcome
    (dependent variable)
  • (Portney and Watkins, 2000)

38
  • Variables Independent or Dependent
  • Independent Variable: A variable that is
    manipulated or controlled by the researcher,
    presumed to cause or determine another
    (dependent) variable
  • Dependent Variable: A response variable that
    is assumed to depend on or be caused by another
    (independent) variable
  • (Portney and Watkins, 2000)

39
  • Variables Measurement Scales
  • Useful to convey information to the reader
    about the type of variables observed
  • Necessary to determine what statistical
    analysis approach should be used to examine
    relationships between variables
  • From lowest to highest level of measurement,
    the scales are nominal, ordinal, interval, and
    ratio
  • (Portney and Watkins, 2000)

40
  • Variables Measurement Scales
  • Nominal Scales (Classification Scale)
  • Data, with no quantitative value, are organized
    into categories
  • Categories are based on some criterion
  • Categories are mutually exclusive and
    exhaustive (each piece of data will be assigned
    to only one category)
  • Only permissible mathematical operation is
    counting (such as the number of items within each
    category)
  • Examples: Gender, Blood Type, Side of
    Hemiplegic Involvement
  • (Portney and Watkins, 2000)

41
  • Variables Measurement Scales
  • Ordinal Scales
  • Data are organized into categories, which are
    rank-ordered on the basis of a defined
    characteristic or property
  • Categories exhibit a greater-than/less-than
    relationship with each other, and intervals
    between categories may not be consistent and may
    not be known
  • (Portney and Watkins, 2000)

42
  • Variables Measurement Scales
  • Ordinal Scales, continued
  • If categories are labeled with a numerical
    value, the number does not represent a quantity,
    but only a relative position within a
    distribution (for example, manual muscle test
    grades of 0-5)
  • Not appropriate to use arithmetic operations
  • Examples: Pain Scales, Reported Sensation,
    Military Rank, Amount of Assistance Required
    (Independent, Minimal)
  • (Portney and Watkins, 2000)

43
  • Variables Measurement Scales
  • Interval Scales
  • Data are organized into categories, which are
    rank-ordered with known and equal intervals
    between units of measurement
  • Not related to a true zero
  • Data can be added or subtracted, but actual
    quantities and ratios cannot be interpreted, due
    to lack of a true zero
  • Examples: Intelligence testing scores,
    temperature in degrees centigrade or Fahrenheit,
    calendar years in AD or BC
  • (Portney and Watkins, 2000)

44
  • Variables Measurement Scales
  • Ratio Scales
  • Interval scale with an absolute zero point (so
    negative numbers are not possible)
  • All mathematical and statistical operations are
    permissible
  • Examples: time, distance, age, weight
  • (Portney and Watkins, 2000)

45
Variables, Clinical Example: A study
investigates how a strengthening program impacts
a child's ability to independently walk. In this
case, the strengthening program is the
independent variable and the ability to
independently walk is the dependent variable.
Amount of assistance required (if ranked maximal,
moderate, minimal, independent, not based on
weight put on a crutch or other quantitative
testing) would be an example of ordinal
data. Studies often have more than one
independent or dependent variable.
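The ordinal ranking in the example above can be sketched in a few lines. The `ASSISTANCE` map and `less_assistance` helper are hypothetical names for illustration only; the point is that ordinal codes support rank comparisons but carry no quantitative meaning.

```python
# Ordinal coding of the assistance levels from the clinical example.
# The numbers express rank order only; differences between codes are
# not meaningful quantities, so arithmetic on them is not appropriate.
ASSISTANCE = {"maximal": 1, "moderate": 2, "minimal": 3, "independent": 4}

def less_assistance(before, after):
    """True if less assistance is needed after intervention (rank comparison only)."""
    return ASSISTANCE[after] > ASSISTANCE[before]

print(less_assistance("moderate", "minimal"))  # True: minimal ranks above moderate
```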

46
  • Measurement Validity
  • Measurement Validity examines the extent to
    which an instrument measures what it is intended
    to measure (Portney and Watkins, 2000)
  • For example, how accurate is a test or
    instrument at discriminating, evaluating, or
    predicting certain items?

47
  • Measurement Validity
  • Validity of Diagnostic Tests
  • Based on the ability of a test to accurately
    determine the presence or absence of a condition
  • Compare the test's results to known results,
    such as a gold standard.
  • For example, a test determining balance
    difficulties likely to result in falls could be
    compared against the number of falls an
    individual actually experiences within a certain
    time frame. A clinical test for a torn ACL could
    be compared against an MRI.
  • (Portney and Watkins, 2000)

48
  • Measurement Validity Types
  • Face Validity: Examines if an instrument
    appears to measure what it is supposed to measure
    (weakest form of measurement validity)
  • Content Validity: Examines if the items within
    an instrument adequately comprise the entire
    content of a given domain reported to be measured
    by the instrument
  • Construct Validity: Examines if an instrument
    can measure an abstract concept
  • (Portney and Watkins, 2000)

49
  • Measurement Validity Types
  • Criterion-related Validity: Examines if the
    outcomes of the instrument can be used as a
    substitute measure for an established gold
    standard test.
  • → Concurrent Validity: Examination of
    Criterion-related Validity, when the instrument
    being examined and the gold standard are compared
    at the same time
  • → Predictive Validity: Examination of
    Criterion-related Validity, when the outcome of
    the instrument being examined can be used to
    predict a future outcome determined by a gold
    standard
  • (Portney and Watkins, 2000)

50
  • Measurement Validity Statistics
  • Ways to Evaluate Usefulness of Clinical Screening
    or Diagnostic Tools
  • Sensitivity and Specificity
  • Positive and Negative Predictive Value
  • Positive and Negative Likelihood Ratios
  • Receiver Operating Characteristic (ROC) Curve
  • The above-mentioned statistical procedures are
    often used when researchers are introducing (and
    validating) the test. Hopefully the values from
    these operations can be found in the tool's
    testing manual or in articles evaluating the
    tool's validity within certain populations or
    settings.

51
Measurement Validity Statistics: Diagnostic
Reference Table

                          Condition Present     Condition Absent
Test Result: Positive     a (true positive)     b (false positive)
Test Result: Negative     c (false negative)    d (true negative)

(Guyatt and Rennie, 2002; Portney and Watkins,
2000; Straus et al, 2005)
52
  • Measurement Validity Statistics
  • Table Labels
  • (a) True Positive: The subject matter has the
    condition, and the test accurately identifies the
    presence of the condition
  • (d) True Negative: The subject matter does not
    have the condition, and the test accurately
    identifies the absence of the condition
  • (b) False Positive: The subject matter does not
    have the condition, and the test incorrectly
    identifies the presence of the condition
  • (c) False Negative: The subject matter has the
    condition, and the test incorrectly identifies
    the absence of the condition
  • (Portney and Watkins, 2000)

53
  • Measurement Validity Statistics
  • Positive test result: the test identifies the
    condition as being present
  • Negative result: the test identifies the
    condition as being absent
  • (This may or may not be accurate when compared to
    the gold standard.)
  • The test's sensitivity and specificity provide
    information about the accuracy of the test.
  • (Portney and Watkins, 2000)

54
  • Measurement Validity Statistics
  • Sensitivity
  • The ability of the test to obtain a positive
    result when the condition is present; the ability
    to detect a true positive (a)
  • a/(a + c): The proportion that test positive out
    of those with the condition
  • (Portney and Watkins, 2000)

55
  • Measurement Validity Statistics
  • Specificity
  • The ability of the test to obtain a negative
    result when the condition is absent; the ability
    to detect a true negative (d)
  • d/(b + d): The proportion that test negative out
    of those without the condition
  • Sensitivity and specificity are often provided as
    percentages, from 0% to 100% (low to high)
  • (Portney and Watkins, 2000)
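The two formulas above can be sketched directly from the diagnostic reference table cells. The counts below are made up for illustration; `a`, `b`, `c`, `d` follow the table labels, and the function names are my own.

```python
# Sensitivity and specificity from the 2x2 diagnostic table cells:
# a = true positives, b = false positives,
# c = false negatives, d = true negatives.

def sensitivity(a, c):
    """Proportion testing positive among those who have the condition: a/(a + c)."""
    return a / (a + c)

def specificity(b, d):
    """Proportion testing negative among those without the condition: d/(b + d)."""
    return d / (b + d)

a, b, c, d = 90, 20, 10, 80  # hypothetical screening counts

print(f"Sensitivity: {sensitivity(a, c):.0%}")  # 90/(90+10) -> 90%
print(f"Specificity: {specificity(b, d):.0%}")  # 80/(20+80) -> 80%
```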

56
  • Measurement Validity Statistics
  • → Helpful hints to remember
    Sensitivity and Specificity →
  • Sensitivity, SnNout: When a test has a high
    sensitivity (Sn), a negative result (N) rules
    out (out) the diagnosis.
  • Specificity, SpPin: When a test has a high
    specificity (Sp), a positive result (P) rules in
    (in) the diagnosis.
  • (Straus et al, 2005)

57
Measurement Validity Statistics, Clinical
Example: You're choosing between two
tests that screen participation in school based
on physical abilities. A positive result means
that the student's physical abilities impact his
or her participation.
58
Measurement Validity Statistics, Clinical
Example: One test has a high sensitivity, but a
low specificity. A high sensitivity means that a
negative test will effectively rule out students
whose physical abilities do not impact
participation. However, with a low
specificity, there may be many false positives,
meaning students may test positive for the
condition when, in fact, their abilities do not
impact participation.
59
Measurement Validity Statistics, Clinical
Example: You're choosing between two
tests that evaluate participation in school based
on physical abilities. A positive result
means that the student's physical abilities
impact his or her participation.

60
Measurement Validity Statistics, Clinical
Example: The other test has a low sensitivity,
but a high specificity. A high specificity means
that a positive result will effectively rule in
the condition. However, with a low
sensitivity, there may be many false negatives,
meaning that students may test negative for the
condition, when, in fact, their abilities do
impact their participation.

61
  • Measurement Validity Statistics
  • Predictive Values
  • Provided in terms of percentages, 0% to 100%,
    low to high
  • Positive Predictive Value (PV+)
  • Probability that a person with a positive test
    actually has the condition
  • a/(a + b)
  • High PV+ desired for screening tools, to
    prevent excessive unnecessary future testing
  • Negative Predictive Value (PV-)
  • Probability that a person with a negative
    test does not have the condition
  • d/(c + d)
  • (Portney and Watkins, 2000)
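The predictive values come from the same 2x2 table cells, but read across the rows rather than down the columns. A minimal sketch with hypothetical counts (function names are my own):

```python
# Predictive values from the 2x2 diagnostic table cells:
# a = true positives, b = false positives,
# c = false negatives, d = true negatives.

def positive_predictive_value(a, b):
    """Probability that a positive test reflects a true condition: a/(a + b)."""
    return a / (a + b)

def negative_predictive_value(c, d):
    """Probability that a negative test reflects true absence: d/(c + d)."""
    return d / (c + d)

a, b, c, d = 90, 20, 10, 80  # hypothetical screening counts
print(f"PV+: {positive_predictive_value(a, b):.1%}")  # 90/110
print(f"PV-: {negative_predictive_value(c, d):.1%}")  # 80/90
```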

62
  • Measurement Validity Statistics
  • Likelihood Ratios
  • Calculated from the Diagnostic Reference Table
  • Requires prior calculation of the pretest
    probability of the condition in question
  • Easy to use when familiar with the concept, but
    requires the use of a probability guide chart or
    a nomogram (chart that contains pretest
    probability and likelihood ratios, with a ruler
    connecting those two points to determine posttest
    probabilities)
  • (Guyatt and Rennie, 2002)

63
  • Measurement Validity Statistics
  • Likelihood Ratios, continued
  • Positive and negative likelihood ratios are
    calculated
  • Determines the usefulness of a diagnostic test.
    If a positive or negative result will change the
    posttest probability of having a condition enough
    to alter the clinician's and patient's course of
    action, it will be useful. If the likelihood
    ratios of the test do not result in a substantial
    change of knowledge, the test most likely will
    not be useful.
  • (Guyatt and Rennie, 2002)
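The slides do not give the formulas, but the standard definitions are LR+ = sensitivity/(1 − specificity) and LR− = (1 − sensitivity)/specificity. The sketch below, with hypothetical numbers, does numerically what the nomogram does graphically: convert pretest probability to odds, apply the likelihood ratio, and convert back.

```python
# Likelihood ratios and posttest probability, using the standard definitions.

def lr_positive(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

def lr_negative(sensitivity, specificity):
    """LR- = (1 - sensitivity) / specificity."""
    return (1 - sensitivity) / specificity

def posttest_probability(pretest_probability, likelihood_ratio):
    """Convert probability to odds, apply the LR, convert back to probability."""
    pretest_odds = pretest_probability / (1 - pretest_probability)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Hypothetical test: sensitivity 90%, specificity 80%, pretest probability 25%.
lr_pos = lr_positive(0.9, 0.8)  # = 4.5
print(posttest_probability(0.25, lr_pos))  # ~0.60: a positive result raises 25% to about 60%
```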

64
  • Measurement Validity Statistics
  • Receiver Operating Characteristic (ROC) Curve
  • Uses sensitivity and specificity information to
    find the probability of correctly choosing
    between presence or absence of the condition
  • For example, a test with an area under the ROC
    curve of 0.80 would result in the correct
    determination of presence or absence of a
    condition 80% of the time.
  • (Portney and Watkins, 2000)
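Assuming the ROC curve is traced out by (1 − specificity, sensitivity) points at successive cut-offs, the area under it can be approximated with the trapezoidal rule. The points below are hypothetical, and `roc_auc` is my own name for the helper:

```python
# Area under an ROC curve via the trapezoidal rule.

def roc_auc(points):
    """Area under the curve through (false-positive rate, sensitivity) points."""
    pts = sorted(points)  # order by false-positive rate
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # trapezoid between adjacent cut-offs
    return area

# Hypothetical cut-off points for a screening test:
points = [(0.0, 0.0), (0.2, 0.7), (0.4, 0.9), (1.0, 1.0)]
print(roc_auc(points))  # ~0.80
```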

65
  • Measurement Validity Statistics
  • Responsiveness to change statistics evaluate a
    measurement tool's ability to detect change over
    time
  • For example, will administration of the test
    pre- and post-intervention reflect a change in
    status, if one actually occurred?
  • Evaluated by examining the change in scores in
    a pretest-posttest design, or using effect size
  • (Portney and Watkins, 2000)

66
  • Measurement Validity Statistics
  • Effect Size
  • Effect size (ES) is a measure of the amount of
    difference.
  • For example, experimental group A increased their
    score on a coordination measure by an average of
    15 points, while experimental group B increased
    their score an average of 8 points. The ES
    between groups would be 7 points, assuming the
    groups were homogeneous at the start.
  • (Portney and Watkins, 2000)

67
  • Measurement Validity Statistics
  • Effect Size, continued
  • An effect size index is a converted effect
    size, a standardized measure of change, so that
    change scores across different outcome measures
    can be compared.
  • ES is often displayed as a correlation
    coefficient, r
  • Portney and Watkins note that considerations of
    ES vary based on the interpreting clinician, but
    review Cohen's suggestions of scores <0.4 as
    small (treatment had a small effect), 0.5 as
    moderate, and >0.8 as large
  • (Portney and Watkins, 2000)
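As a sketch of a standardized effect size index, the following computes Cohen's d (mean difference divided by the pooled standard deviation), one common such index. The change scores are hypothetical, chosen to echo the 15- versus 8-point example above:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized effect size: mean difference over pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

group_a = [12, 15, 18]  # hypothetical gains averaging 15 points
group_b = [6, 8, 10]    # hypothetical gains averaging 8 points
print(round(cohens_d(group_a, group_b), 2))  # 2.75, a large effect by the conventions above
```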

68
  • Measurement Validity Statistics
  • Effect Size versus Minimal Clinically Important
    Difference
  • In addition to numerical data revealed and
    statistical significance, the clinician should
    also consider what amount of change is clinically
    meaningful, such as, how great a gain in strength
    or endurance will result in a change in function?
    This is often referred to as the minimal
    clinically important difference (MCID).
  • (Jaeschke et al 1989)

69
  • Measurement Reliability Statistics
  • Reliability examines a measurement's consistency
    and freedom from error
  • Can be thought of as reproducibility or
    dependability
  • Estimate of how observed scores vary from the
    actual scores
  • (Portney and Watkins, 2000)

70
  • Measurement Reliability Statistics
  • Reliability Coefficient
  • Ratio of reliability (many different types with
    various symbol representations)
  • Ranges from 0.00 to 1.00
  • 0.00 = no reliability
  • 1.00 = perfect reliability
  • Reflection of variance, a measure of the
    differences among scores within a sample
  • (Portney and Watkins, 2000)

71
  • Measurement Reliability Statistics
  • Correlation
  • Comparison of the degree of association between
    two variables or sets of data
  • Used as a basis for many reliability
    coefficients
  • (Portney and Watkins, 2000)

72
  • Measurement Reliability Statistics
  • Test-Retest Reliability
  • Examines the consistency of the results of
    repeated test administration
  • Traditional analysis:
  • Pearson product-moment coefficient of
    correlation (for interval or ratio data)
  • Spearman rho (for ordinal data)
  • Current (sometimes considered preferred)
    analysis:
  • Intraclass correlation coefficient
  • (Portney and Watkins, 2000)
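A minimal sketch of the traditional Pearson analysis, computed from first principles with hypothetical test and retest scores (the function name is my own):

```python
# Test-retest reliability via the Pearson product-moment correlation.

def pearson_r(x, y):
    """Correlation between paired scores: covariance over the product of spreads."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    spread_x = sum((xi - mean_x) ** 2 for xi in x) ** 0.5
    spread_y = sum((yi - mean_y) ** 2 for yi in y) ** 0.5
    return cov / (spread_x * spread_y)

test_scores = [10, 12, 14, 16]    # hypothetical first administration
retest_scores = [11, 13, 15, 18]  # hypothetical second administration
print(round(pearson_r(test_scores, retest_scores), 3))  # close to 1.0: high consistency
```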

73
  • Measurement Reliability Statistics
  • Rater Reliability
  • Intrarater reliability
  • Reliability of data collection from one
    individual over two or more trials
  • Interrater reliability
  • Reliability of data collection between two or
    more raters
  • (Portney and Watkins, 2000)

74
  • Descriptive Statistics
  • Descriptive statistics are used to describe
    sample characteristics.
  • A sample is a subset of a population chosen for
    study. Since data often cannot be collected from
    an entire population, data from a selected
    sample are intended to be representative of, or
    an estimate of, the population data.
  • Distribution: the total set of scores (from a
    sample) for a particular variable; the number of
    scores is given the symbol n
  • (Portney and Watkins, 2000)

75
  • Descriptive Statistics
  • Frequency and Shapes of Distribution
  • Frequency distribution: the number of times each
    value of the variable occurred
  • Drawing a graph of frequency distributions can
    result in shapes that characterize the
    distributions
  • Some graphs are asymmetrical, others are
    symmetrical
  • A symmetrical graph with a bell-shaped
    distribution is referred to as a normal
    distribution.
  • A skewed distribution presents asymmetrically
  • (Portney and Watkins, 2000)

76
Descriptive Statistics A normal distribution,
when graphed according to frequency, presents in
the shape of a bell, with the majority of scores
falling in the middle and progressively fewer
scores at either end. The normal distribution has
special properties in statistics. (Portney and
Watkins, 2000)
77
  • Descriptive Statistics
  • Central Tendency Measures
  • Used to quantitatively summarize a group's
    characteristics
  • Mode: the score that occurs most frequently
  • Median: the middle score in a numerically
    ordered group of data. If there is an even
    number of scores, the median is the midpoint
    between the two middle scores
  • Mean: the sum of a set of scores divided by the
    number of scores, n. Often referred to as the
    average of a data set
  • (Portney and Watkins, 2000)
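All three measures are available in Python's standard statistics module; a quick check with hypothetical outcome scores:

```python
import statistics

scores = [4, 7, 7, 8, 9, 10, 11]  # hypothetical outcome scores

print(statistics.mode(scores))    # 7  (occurs most frequently)
print(statistics.median(scores))  # 8  (middle of the 7 ordered scores)
print(statistics.mean(scores))    # 8  (56 / 7)
```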

78
  • Descriptive Statistics
  • Measures of Variability
  • Variability is the dispersion of scores.
  • Variability is commonly described by five
    measures
  • Range
  • Percentiles
  • Variance
  • Standard deviation
  • Coefficient of variation
  •  (Portney and Watkins, 2000)

79
Descriptive Statistics Measures of
Variability Variability, continued Range
Difference between the highest and lowest scores
in a distribution Percentiles Used to describe
a score's position within a distribution,
distribution data is often divided into
quartiles, or four equal parts Variance (Too
in-depth to describe the statistical background
for this module's purpose). Reflects variation
within a set of scores, in square units.
Symbolized in sample data by s²  (Portney and
Watkins, 2000)
80
  • Descriptive Statistics
  • Measures of Variability
  • Variability, continued
  • Standard Deviation: representative of the
    variability of scores surrounding the mean, in
    the original units of measurement. The square
    root of variance.
  • The larger the standard deviation, the more
    spread out the variable's scores are around the
    mean.
  • For example, data set (A) 8,9,10,11,12 and the
    data set (B) 4,5,10,15,16 both have a mean of 10.
    However the standard deviation for set A is
    1.58 while the standard deviation of set B is
    5.52.
  • Symbolized in sample data by s.
  • (Portney and Watkins, 2000)
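The two data sets above can be verified with Python's statistics module (which, like the values quoted, uses the sample formulas with an n − 1 denominator):

```python
import statistics

set_a = [8, 9, 10, 11, 12]
set_b = [4, 5, 10, 15, 16]

# Both sets have the same mean...
print(statistics.mean(set_a), statistics.mean(set_b))  # 10 10
# ...but set B's scores are far more spread out around it
print(round(statistics.stdev(set_a), 2))  # 1.58
print(round(statistics.stdev(set_b), 2))  # 5.52
# Variance (s squared) is simply the standard deviation squared
print(statistics.variance(set_a))         # 2.5
```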

81
Descriptive Statistics Measures of
Variability Variability, continued Coefficient
of Variation The ratio of the standard deviation
to the mean. (Portney and Watkins,
2000)
82
  • Descriptive Statistics
  • Distributions
  • Normal Versus Skewed (Non-normal) Distribution
  • These theoretical shapes of distribution help
    determine what statistical formulas or measures
    should be used
  • The characteristics of normally distributed
    data are constant and predictable. For
    statistical purposes, the normally distributed
    curve is often divided into proportional areas,
    each equal to one standard deviation.
  • Data should be examined for goodness-of-fit
    to see if the sample approximates the normal
    distribution.
  • (Portney and Watkins, 2000)

83
  • Descriptive Statistics
  • Distributions
  • Normal Distribution Statistics
  • The 1st standard deviation on either side of the
    mean contains 34.13% of the data
  • a total of 68.26% of the data falls between +1
    and -1 standard deviations
  • The area between the 1st and 2nd standard
    deviations contains 13.59% of the data
  • a total of 95.45% of the data falls between +2
    and -2 standard deviations
  • (13.59% times 2) + (34.13% times 2)
  • (Portney and Watkins, 2000)
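These proportions can be checked numerically: for a normal distribution, the area within k standard deviations of the mean equals erf(k/√2). A minimal sketch using Python's math.erf:

```python
import math

def area_within(k):
    """Proportion of a normal distribution within k SDs of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(100 * area_within(k), 2))
```

This prints approximately 68.27, 95.45, and 99.73; the commonly quoted 68.26% comes from doubling the rounded 34.13%.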

84
  • Descriptive Statistics
  • Distributions
  • Normal Distribution Statistics, continued
  • The area between the 2nd and 3rd standard
    deviations contains 2.14% of the data
  • a total of 99.73% of the data falls between +3
    and -3 standard deviations
  • (13.59% times 2) + (34.13% times 2) + (2.14%
    times 2)
  • (Portney and Watkins, 2000)

85
  • Inferential Statistics
  • Estimate population characteristics from sample
    data
  • Used often when testing theories about the
    effects of experimental treatments
  • Requires that assumptions are made about how
    well the sample represents the larger population
  • Assumptions are based on the statistical
    concepts of probability and sampling error
  • It is important that the sample be representative of
    the population, so that the results of
    interventions on samples can be applied to the
    entire population of individuals with those
    characteristics.
  • (Portney and Watkins, 2000)

86
  • Inferential Statistics
  • Probability
  • Probability
  • The likelihood that an event will occur, given
    all the possible outcomes. Used often in
    prediction. Given the symbol p
  • Probability may range from p = 1 (certain the
    event will occur, 100% probability) to p = 0
    (certain that the event will not occur, 0%
    probability)
  • (Portney and Watkins, 2000)

87
Inferential Statistics Probability Probability,
continued Reflective of what should happen in the
long run, not necessarily what will happen on a
given trial. For example, if a treatment has a
60% chance of success, then 60% of people will
likely be successfully treated. That does not
mean the treatment will be 60% successful in an
individual; it will likely be either
unsuccessful or successful for that
individual. (Portney and Watkins,
2000)
88
  • Inferential Statistics Probability
  • Clinical Example
  • Probability statistics can be applied to the
    distribution of scores
  • Example, for a normally distributed data set
  • The average long jump for a certain group of
    children is 35 inches, with a standard deviation
    of 4 inches. Suppose you want to know the
    probability that a child will jump within one
    standard deviation of the mean (from 31 to 39
    inches). You know that within one standard
    deviation on either side of the mean lies 68.26%
    of the data, so that is the probability that a
    child will jump within one standard deviation of
    the mean.

89
  • Inferential Statistics Probability
  • Clinical Example
  • Example, continued
  • If you wanted to know the probability that a
    child will jump more than one standard deviation
    above the mean (greater than 39 inches), you can
    refer to the data and calculate (100% - 68.26%) /
    2 = 15.87%.
  • Tables of the normal curve are available to find
    the areas in between the standard deviations.
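Both probabilities in this example can be computed directly from the normal cumulative distribution function; a sketch using the (hypothetical) long-jump numbers:

```python
import math

def normal_cdf(x, mean, sd):
    """P(X <= x) for a normal distribution with the given mean and SD."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

mean, sd = 35, 4  # hypothetical long-jump data, in inches

# Probability a child jumps within one SD of the mean (31 to 39 inches)
p_within = normal_cdf(39, mean, sd) - normal_cdf(31, mean, sd)
# Probability a child jumps more than one SD above the mean (over 39 inches)
p_above = 1 - normal_cdf(39, mean, sd)

print(round(p_within, 3))  # 0.683
print(round(p_above, 3))   # 0.159
```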

90
  • Inferential Statistics
  • Sampling Error
  • Sampling Error
  • The difference between sample values and
    population values
  • The lower the sampling error, the greater the
    confidence that the sample values are similar to
    the population values.
  • To estimate sampling error, the standard error
    of the mean statistic is used (too complex to
    explain statistical basis in this format)
  • The larger the sample size, n, the smaller the
    standard error of the mean
  • (Portney and Watkins, 2000)

91
  • Inferential Statistics
  • Confidence Intervals
  • Confidence Interval (CI)
  • Range of scores with specific boundaries, or
    confidence limits, that should contain the
    population mean
  • The boundaries of CIs are based on the sample
    mean and its standard error
  • The degree of confidence is expressed as a
    percentage
  • Often, researchers use 95%, a boundary that lies
    just under 2 standard errors (z = 1.96) on
    either side of the sample mean
  • (Portney and Watkins, 2000)

92
Inferential Statistics, Confidence Interval
Clinical Example A physical therapy treatment
program resulted in the ability for 40 children
with a certain disorder to walk an additional 8
independent steps, on average, within a certain
set of parameters. The 95% CI for this data
was ±2 steps. Therefore, we can be 95% confident
that the population mean, or the average for all
children with this disorder, is between 6 and 10
additional independent steps. Said another way,
if the study were repeated many times with new
samples of children with this condition, about
95% of the resulting confidence intervals would
be expected to contain the true population mean.
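A 95% CI is built from the sample mean and its standard error. A minimal sketch with hypothetical step-gain data (1.96 is the z-value for 95% confidence, used here instead of a t-value for simplicity):

```python
import math
import statistics

# Hypothetical gains in independent steps for a sample of children
gains = [6, 10, 7, 9, 8, 11, 5, 8, 9, 7]

n = len(gains)
mean = statistics.mean(gains)
sem = statistics.stdev(gains) / math.sqrt(n)  # standard error of the mean

# 95% CI: mean +/- 1.96 standard errors
lower = mean - 1.96 * sem
upper = mean + 1.96 * sem
print(round(lower, 1), round(upper, 1))  # 6.9 9.1
```

With a larger sample, the standard error (and therefore the interval) shrinks, reflecting greater confidence in the estimate of the population mean.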
93
Inferential Statistics Hypothesis
Testing Hypothesis Testing Used to decide if
effects from an intervention are due to chance or
the result of the intervention Results in one
of two decisions: to reject, or fail to reject,
the null hypothesis (Portney and Watkins,
2000)
94
Inferential Statistics Hypothesis
Testing Hypothesis Testing, continued
Statistical Hypothesis (also known as the null
hypothesis) Any observed difference (as in
pretreatment to post-treatment or treatment
compared to a placebo) is due to chance. When
the null hypothesis is rejected, the researcher
concludes that the effect of treatment is not
likely due to chance. (Portney and Watkins,
2000)
95
Inferential Statistics Hypothesis
Testing Hypothesis Testing, continued Alternati
ve Hypothesis Any observed difference is not due
to chance. Often the researcher is trying to
support the alternative hypothesis, as when
trying to prove that one particular treatment is
better. Sometimes, however, the researcher may
be trying to prove that certain interventions are
equal. (Portney and Watkins 2000)
96
  • Inferential Statistics
  • Hypothesis Testing, Errors
  • Errors in Hypothesis Testing
  • The decision to reject or not reject the null
    hypothesis is based on the results of
    statistical procedures applied to data collected
    from samples.
  • Because decisions are based on sample data, it
    is possible that the results obtained do not
    accurately reflect the population.
  • There is thus a chance for error: the
    researcher may incorrectly reject, or fail to
    reject, the null hypothesis
  • (Portney and Watkins 2000)

97
  • Inferential Statistics
  • Hypothesis Testing, Errors
  • Type I error (α): rejecting the null hypothesis
    when it is true (for example, deciding that the
    difference seen between a treatment group and a
    control group is due to the effect of the
    treatment when, in fact, the difference is due
    to chance).
  • A commonly used standard is α = 0.05; that is,
    the researchers accept a 5% chance of making a
    Type I error
  • Statistical tests completed with the sample data
    are used to calculate p, the probability that the
    observed difference would occur by chance.
  • (Portney and Watkins, 2000)
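The meaning of α can be shown by simulation: when the null hypothesis is true, about 5% of experiments will still yield p < 0.05. A sketch with hypothetical data (a two-sample z-test with known SD is used for simplicity):

```python
import math
import random

def z_test_p(group_a, group_b, sigma=1.0):
    """Two-sided two-sample z-test p-value, assuming a known population SD."""
    n = len(group_a)
    diff = sum(group_a) / n - sum(group_b) / n
    z = abs(diff) / (sigma * math.sqrt(2 / n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(1)
alpha, reps, n = 0.05, 2000, 30
false_positives = 0
for _ in range(reps):
    # Both groups drawn from the SAME population: the null hypothesis is true
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if z_test_p(a, b) < alpha:
        false_positives += 1

print(false_positives / reps)  # close to alpha, i.e. about 0.05
```

Every "significant" result here is a Type I error, and the rate at which they occur is controlled by the chosen α.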

98
Inferential Statistics Hypothesis Testing, p and
α Hypothesis Testing Relationship between p
and α If p is greater than the chosen α, then
the researchers do not reject the null
hypothesis For example, in a placebo versus
treatment study, the researchers cannot conclude
that the experimental treatment had a different
effect than the placebo. (Portney and
Watkins, 2000)
99
Inferential Statistics Hypothesis Testing, p and
α Hypothesis Testing Relationship between p
and α, continued If p is less than the chosen
α, then the researchers reject the null
hypothesis For example, in a placebo versus
treatment study, the researchers conclude that
the experimental treatment had a different effect
than the placebo. (Portney and Watkins,
2000)
100
Inferential Statistics Hypothesis Testing, p and
α Hypothesis Testing Relationship between p
and α, continued Confidence intervals
surrounding the estimated treatment effect can be
calculated; ideally these are included in the
data analysis section of the research study. If
a statistically significant difference is found,
the CIs of the two groups should not (with rare
exceptions) overlap. (Portney and
Watkins, 2000)
101
Inferential Statistics Hypothesis, CIs versus
MCID Hypothesis Testing CIs (slide 90) versus
MCID (slide 67) When the null hypothesis is
rejected, the researchers conclude that the
experimental treatment (versus a placebo, for
example) had a statistically significant effect.
The clinician should examine the effect size
and the MCID to ensure that the change is
clinically and functionally relevant. (Guyatt
and Rennie, 2002)
102
  • Inferential Statistics
  • Hypothesis, CIs versus MCID
  • When a null hypothesis is not rejected, the
    researchers may conclude that the experimental
    treatment did not have an effect.
  • However, the researchers should pay close
    attention to the confidence intervals.
  • If the CI does not include the MCID, then the
    trial is most likely negative.
  • If the CI includes the MCID, then the
    possibility that the experimental treatment may
    have a positive effect cannot be ruled out. The
    researchers may wish to run a power analysis,
    explained in later slides.
  • (Guyatt and Rennie, 2002)

103
  • Inferential Statistics, Hypothesis Testing
  • Clinical Example
  • Children with similar abilities/diagnoses
    (a homogeneous sample) are randomly assigned to
    two different groups
  • Group A receives a physical therapy program
    designed to improve gross motor skills, and
  • Group B completes typical daily activities (but
    don't worry, due to ethical concerns children in
    Group B will receive the same treatment as those
    in Group A at the end of the study period).

104
Inferential Statistics, Hypothesis
Testing Clinical Example 1, continued The
outcome measure will be a tool that tests gross
motor activities with a final, numerical outcome
score. The authors hypothesize that the
experimental group will show statistically
significant gains (experimental hypothesis) and
that the null hypothesis (there is no difference
between groups at the end of the intervention
period) will be rejected.
105
Inferential Statistics, Hypothesis
Testing Clinical Example 1, continued Initially,
groups A and B have similar average pre-test
scores (if not, that can be statistically
corrected). Now suppose that Group A (the
experimental group, receiving additional PT
services) increases an average of 9 points from
pretest to posttest, with a CI of ±1 (8 to 10).
Group B increases an average of 0.5 points,
with a CI of ±3 (-2.5 to 3.5).
106
Inferential Statistics, Hypothesis
Testing Clinical Example 1, continued These
confidence intervals do not overlap, which
corresponds with the statistical analysis
performed: p < 0.05 (the predetermined α), and
there is a statistically significant difference
between groups. The null hypothesis is
rejected.
107
Inferential Statistics, Hypothesis
Testing Clinical Example 1, continued Prior to
the experiment, the authors agreed that the MCID
was 6. Both the statistically determined
(p < 0.05) and clinically determined criteria in
support of the experimental group's treatment
were met (the experimental group's mean change,
9, was greater than the MCID, 6).
108
Inferential Statistics, Hypothesis Testing
Clinical Example 2 Initially, groups A and B
have similar average pre-test scores (if not,
that can be statistically corrected). Now,
change the scenario from the previous slides.
Suppose that Group A (the experimental group,
receiving additional PT services) increases an
average of 5 points from pretest to posttest,
with a CI of ±2 (3 to 7). Group B increases an
average of 0.5 points, with a CI of ±3 (-2.5 to
3.5).

109
Inferential Statistics, Hypothesis Testing
Clinical Example 2, continued These
confidence intervals overlap, and the statistical
analysis performed revealed that p > 0.05 (the
predetermined α). There is not a statistically
significant difference between groups. The
null hypothesis is not rejected.
110
Inferential Statistics, Hypothesis Testing
Clinical Example 2, continued However, prior
to the experiment, the authors agreed that the
MCID was 6. Since the CI range of Group A
includes that MCID, the beneficial effects of the
treatment from Group A over Group B cannot be
ruled out. The researchers may want to run a
statistical power analysis to determine if the
study was underpowered, and therefore unlikely to
detect a true effect of the treatment.
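Power can be estimated by simulation: generate many studies in which a true effect exists and count how often the null hypothesis is rejected. A sketch with hypothetical numbers (a two-sample z-test with known SD is used for simplicity):

```python
import math
import random

def z_test_p(group_a, group_b, sigma=1.0):
    """Two-sided two-sample z-test p-value, assuming a known population SD."""
    n = len(group_a)
    diff = sum(group_a) / n - sum(group_b) / n
    z = abs(diff) / (sigma * math.sqrt(2 / n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def simulated_power(effect, n, reps=1000, alpha=0.05):
    """Fraction of simulated studies that reject the null when a true effect exists."""
    rejections = 0
    for _ in range(reps):
        treated = [random.gauss(effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        if z_test_p(treated, control) < alpha:
            rejections += 1
    return rejections / reps

random.seed(2)
# The same true effect is much easier to detect with a larger sample
print(simulated_power(effect=0.5, n=10))   # low power: most studies miss the effect
print(simulated_power(effect=0.5, n=100))  # high power: most studies detect it
```

With only 10 children per group, the majority of simulated studies fail to reach significance even though the treatment truly works, illustrating why a non-significant result in a small study is not proof of no effect.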

111
Inferential Statistics Hypothesis Testing, Errors
Errors in Hypothesis Testing, Type II Type II
error (β): not rejecting the null hypothesis
when it is false (for example, determining that
differences are due to chance when they are, in
fact, due to the experimental treatment) (Portney
and Watkins, 2000)
112
  • Inferential Statistics
  • Hypothesis Testing, Errors
  • Errors in Hypothesis Testing, Type II
  • The complement of Type II error is the
    statistical power of the test (1 - β)
  • Power: the probability that a test will lead to
    rejection of the null hypothesis, or the
    probability of obtaining statistical significance
    if the differences are not due to chance
  • Many researchers use a standard of β = 0.20, or
    a power of 80%, as reasonable protection against
    Type II error
  • (Portney and Watkins, 2000)

113
Inferential Statistics Power Determinants of
Statistical Power Even though the researchers
may not reject the null hypothesis, it does not
always mean that an experimental treatment is not
effective. The power of the study may have
been too low to detect a significant
difference. (Guyatt and Rennie 2002,
Portney and Watkins, 2000)
114
  • Inferential Statistics Power
  • Determinants of Statistical Power
  • Levels set for α and β
  • Variance
  • Sample Size (n)
  • Effect Size (the difference between two
    treatments or variables; treatments with large
    changes or correlations are more likely to
    produce significant outcomes)
  • Increases in effect size, sample size, and alpha
    all increase power, while decreases in variance
    also increase power.
  • (Portney and Watkins, 2000)

115
  • Inferential Statistics Tests
  • Parametric Vs. Nonparametric Tests
  • Parametric statistics are used to estimate
    population parameters
  • Primary assumptions of parametric tests
  • Samples randomly drawn from populations with
    normal distributions
  • Variances in samples are roughly equal
  • Data are measured on interval or ratio scales
  • (Portney and Watkins, 2000)

116
Inferential Statistics Tests Parametric Vs.
Nonparametric Tests Nonparametric tests are
generally less powerful, so researchers may
choose to use parametric tests despite not
meeting all of the usual assumptions (such as
using a parametric test in a study with ordinal
data) (Portney and Watkins,
2000)
117
  •  Section 3 Outline
  • How to Search for EBP
  • Formation of Clinical Questions used to search
    for EBP, 117-121
  • Evidence Search Sources of Information, 122-138
  • Internet/World Wide Web
  • Textbooks
  • Specific Journal Subscriptions
  • Internet Sources for Medical Information
  • The Guide
  • Search Strategies, 139-152

118
  • Formation of Clinical Questions
  • Used to Search for EBP
  • Background Questions: general knowledge about a
    condition/area of interest
  • Foreground Questions: specific knowledge used to
    inform clinical decisions or actions. Often what
    researchers use when investigating a particular
    tr